| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
14,457,042 | https://en.wikipedia.org/wiki/Dynactin | Dynactin is a 23 subunit protein complex that acts as a co-factor for the microtubule motor cytoplasmic dynein-1. It is built around a short filament of actin related protein-1 (Arp1).
Discovery
Dynactin was identified as an activity that allowed purified cytoplasmic dynein to move membrane vesicles along microtubules in vitro. It was shown to be a multiprotein complex and named "dynactin" because of its role in dynein activation.
The main features of dynactin were visualized by quick-freeze, deep-etch, rotary shadow electron microscopy. It appears as a short filament, 37 nm in length, which resembles F-actin, plus a thinner, laterally oriented arm. Antibody labelling was used to map the location of the dynactin subunits.
Structure
Dynactin consists of three major structural domains: (1) the sidearm-shoulder: DCTN1/p150Glued, DCTN2/p50/dynamitin, DCTN3/p24/p22; (2) the Arp1 filament: ACTR1A/Arp1/centractin, actin, CapZ; and (3) the pointed end complex: Actr10/Arp11, DCTN4/p62, DCTN5/p25, and DCTN6/p27.
A 4 Å cryo-EM structure of dynactin revealed that its filament contains eight Arp1 molecules, one β-actin and one Arp11. In the pointed end complex, p62/DCTN4 binds to Arp11 and β-actin, while p25 and p27 bind both p62 and Arp11. At the barbed end, the capping protein (CapZαβ) binds the Arp1 filament in the same way that it binds actin, although with more charge complementarity, explaining why it binds dynactin more tightly than actin.
The shoulder contains two copies of p150Glued/DCTN1, four copies of p50/DCTN2 and two copies of p24/DCTN3. These proteins form long bundles of alpha helices, which wrap over each other and contact the Arp1 filament. The N-termini of p50/DCTN2 emerge from the shoulder and coat the filament, providing a mechanism for controlling the filament length. The C-termini of the p150Glued/DCTN1 dimer are embedded in the shoulder, whereas the N-terminal 1227 amino acids form the projecting arm. The arm begins with an N-terminal CAP-Gly domain, which can bind the C-terminal tails of tubulin and the microtubule plus-end binding protein EB1. This is followed by a basic region, also involved in microtubule binding, a folded-back coiled coil (CC1), the intercoiled domain (ICD) and a second coiled coil domain (CC2). The p150Glued arm can dock against the side of the Arp1 filament and the pointed end complex.
DCTN2 (dynamitin) is also involved in anchoring microtubules to centrosomes and may play a role in synapse formation during brain development. Arp1 has been suggested as the domain for dynactin binding to membrane vesicles (such as Golgi or late endosome) through its association with β-spectrin. The pointed end complex (PEC) has been shown to be involved in selective cargo binding. PEC subunits p62/DCTN4 and Arp11/Actr10 are essential for dynactin complex integrity and dynactin/dynein targeting to the nuclear envelope before mitosis. Actr10 along with Drp1 (Dynamin related protein 1) have been documented as vital to the attachment of mitochondria to the dynactin complex. Dynactin p25/DCTN5 and p27/DCTN6 are not essential for dynactin complex integrity, but are required for early and recycling endosome transport during the interphase and regulation of the spindle assembly checkpoint in mitosis.
Interaction with dynein
Dynein and dynactin were reported to interact directly by the binding of dynein intermediate chains with p150Glued. The affinity of this interaction is around 3.5μM. Dynein and dynactin do not run together in a sucrose gradient, but can be induced to form a tight complex in the presence of the N-terminal 400 amino acids of Bicaudal D2 (BICD2), a cargo adaptor that links dynein and dynactin to Golgi derived vesicles. In the presence of BICD2, dynactin binds to dynein and activates it to move for long distances along microtubules.
A cryo-EM structure of dynein, dynactin and BICD2 showed that the BICD2 coiled coil runs along the dynactin filament. The tail of dynein also binds to the Arp1 filament, sitting in the equivalent site that myosin uses to bind actin. The contacts between the dynein tail and dynactin all involve BICD2, explaining why it is needed to bring them together. The dynein/dynactin/BICD2 (DDB) complex has also been observed, by negative stain EM, on microtubules. This shows that the cargo (Rab6) binding end of BICD2 extends out beyond the pointed end complex, at the opposite end of dynactin from the dynein motor domains.
Functions
Dynactin is often essential for dynein activity and can be thought of as a "dynein receptor" that modulates binding of dynein to cell organelles which are to be transported along microtubules.
Dynactin also enhances the processivity of cytoplasmic dynein and kinesin-2 motors.
Dynactin is involved in various processes such as chromosome alignment and spindle organization in cell division. Dynactin contributes to mitotic spindle pole focusing through its binding to nuclear mitotic apparatus protein (NuMA). Dynactin also targets to the kinetochore through binding between DCTN2/dynamitin and zw10 and has a role in mitotic spindle checkpoint inactivation. During prometaphase, dynactin also helps target polo-like kinase 1 (Plk1) to kinetochores through cyclin dependent kinase 1 (Cdk1)-phosphorylated DCTN6/p27, which is involved in proper microtubule-kinetochore attachment and recruitment of spindle assembly checkpoint protein Mad1. In addition, dynactin has been shown to play an essential role in maintaining nuclear position in Drosophila, zebrafish and various fungi. Dynein and dynactin concentrate on the nuclear envelope during prophase and facilitate nuclear envelope breakdown via dynactin's DCTN4/p62 and Arp11 subunits.
Dynactin is also required for microtubule anchoring at centrosomes and centrosome integrity. Destabilization of the centrosomal pool of dynactin also causes abnormal G1 centriole separation and delayed entry into S phase, suggesting that dynactin contributes to the recruitment of important cell cycle regulators to centrosomes. In addition to transport of various organelles in the cytoplasm, dynactin also links kinesin II to organelles.
See also
Motor protein
Dynein
DCTN1
Centractin
References
Further reading
Protein families
Motor proteins | Dynactin | [
"Chemistry",
"Biology"
] | 1,698 | [
"Molecular machines",
"Protein families",
"Motor proteins",
"Protein classification"
] |
14,457,045 | https://en.wikipedia.org/wiki/Ald%20%28unit%29 | Ald is an obsolete Mongolian measure equal to the average human male's armspan (length between a male's outstretched arms). An ald is therefore approximately equal to .
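As a small worked conversion, the sketch below assumes an ald of roughly 1.6 metres; the numeric equivalent is missing from the text above, so this value is an illustrative assumption rather than a figure from the article.

```python
# Hedged illustration only: the ~1.6 m figure is an assumption (a typical adult
# male armspan), not a value taken from the text.
ALD_IN_METRES = 1.6  # assumed approximate length of one ald, in metres

def alds_to_metres(alds: float) -> float:
    """Convert a length expressed in alds to metres under the assumed equivalence."""
    return alds * ALD_IN_METRES

print(alds_to_metres(3.0))  # 4.8 metres, under this assumption
```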
See also
Mongolian units
Culture of Mongolia
Units of length
Human-based units of measurement
Obsolete units of measurement | Ald (unit) | [
"Mathematics"
] | 59 | [
"Obsolete units of measurement",
"Quantity",
"Units of measurement",
"Units of length"
] |
14,457,331 | https://en.wikipedia.org/wiki/Lead%E2%80%93lead%20dating | Lead–lead dating is a method for dating geological samples, normally based on 'whole-rock' samples of material such as granite. For most dating requirements it has been superseded by uranium–lead dating (U–Pb dating), but in certain specialized situations (such as dating meteorites and the age of the Earth) it is more important than U–Pb dating.
Decay equations for common Pb–Pb dating
Three stable "daughter" Pb isotopes result from the radioactive decay of uranium and thorium in nature: 206Pb, 207Pb, and 208Pb. 204Pb is the only non-radiogenic lead isotope and is therefore not one of the daughter isotopes. These daughter isotopes are the final decay products of the U and Th radioactive decay chains beginning from 238U (half-life 4.5 Gy), 235U (half-life 0.70 Gy) and 232Th (half-life 14 Gy) respectively. As time progresses, the final decay product accumulates as the parent isotope decays. This shifts the ratio of radiogenic Pb to non-radiogenic 204Pb (207Pb/204Pb or 206Pb/204Pb) in favor of radiogenic 207Pb or 206Pb. This can be expressed by the following decay equations:

$$\left(\frac{^{207}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_P = \left(\frac{^{207}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_I + \frac{^{235}\mathrm{U}}{^{204}\mathrm{Pb}}\left(e^{\lambda_{235} t} - 1\right)$$

$$\left(\frac{^{206}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_P = \left(\frac{^{206}\mathrm{Pb}}{^{204}\mathrm{Pb}}\right)_I + \frac{^{238}\mathrm{U}}{^{204}\mathrm{Pb}}\left(e^{\lambda_{238} t} - 1\right)$$
where the subscripts P and I refer to present-day and initial Pb isotope ratios, λ235 and λ238 are decay constants for 235U and 238U, and t is the age.
The concept of common Pb–Pb dating (also referred to as whole-rock lead isotope dating) was deduced through mathematical manipulation of the above equations. It was established by dividing the first equation above by the second, under the assumption that the U/Pb system was undisturbed. This rearrangement gives:

$$\frac{\left({^{207}\mathrm{Pb}}/{^{204}\mathrm{Pb}}\right)_P - \left({^{207}\mathrm{Pb}}/{^{204}\mathrm{Pb}}\right)_I}{\left({^{206}\mathrm{Pb}}/{^{204}\mathrm{Pb}}\right)_P - \left({^{206}\mathrm{Pb}}/{^{204}\mathrm{Pb}}\right)_I} = \frac{1}{137.88}\,\frac{e^{\lambda_{235} t} - 1}{e^{\lambda_{238} t} - 1}$$
where the factor of 137.88 is the present-day 238U/235U ratio. As is evident from the equation, the initial Pb isotope ratios and the age of the system are the two factors which determine the present-day Pb isotope compositions. If the sample behaved as a closed system, then graphing the difference between the present and initial ratios of 207Pb/204Pb versus 206Pb/204Pb should produce a straight line. The distance a point moves along this line depends on the U/Pb ratio, whereas the slope of the line depends on the time since Earth's formation. This was first established by Nier et al. (1941).
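As a rough numerical illustration of how an age follows from this relationship, the sketch below (not from the article; the decay constants and example slope are standard illustrative values) inverts the right-hand side for t given a measured radiogenic 207Pb/206Pb slope.

```python
# Numerically invert the combined Pb-Pb equation for the age t from a measured slope.
import math
from scipy.optimize import brentq

LAMBDA_235 = 9.8485e-10   # decay constant of 235U, 1/yr (half-life ~0.70 Gyr)
LAMBDA_238 = 1.55125e-10  # decay constant of 238U, 1/yr (half-life ~4.47 Gyr)
U238_U235 = 137.88        # present-day 238U/235U ratio used in the classic equation

def isochron_slope(t_years: float) -> float:
    """Radiogenic 207Pb/206Pb slope predicted for an age of t_years."""
    return math.expm1(LAMBDA_235 * t_years) / (U238_U235 * math.expm1(LAMBDA_238 * t_years))

def age_from_slope(slope: float) -> float:
    """Solve isochron_slope(t) = slope for t, bracketed between 1 Myr and 5 Gyr."""
    return brentq(lambda t: isochron_slope(t) - slope, 1e6, 5e9)

if __name__ == "__main__":
    # A slope near 0.625 gives roughly 4.57 Gyr, the order of the meteorite ages in the text.
    print(f"age = {age_from_slope(0.625) / 1e9:.2f} Gyr")
```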
The development of the Geochron database
The development of the Geochron database was mainly attributed to Clair Cameron Patterson's application of Pb–Pb dating to meteorites in 1956. The Pb ratios of three stony and two iron meteorites were measured. The dating of meteorites would then help Patterson in determining not only the age of these meteorites but also the age of Earth's formation. By dating meteorites, Patterson was directly determining the age of various planetesimals. Assuming the process of elemental differentiation on Earth is identical to that on other planets, the core of these planetesimals would be depleted of uranium and thorium, while the crust and mantle would contain higher U/Pb ratios. As planetesimals collided, various fragments were scattered and produced meteorites. Iron meteorites were identified as pieces of the core, while stony meteorites were segments of the mantle and crustal units of these various planetesimals.
Samples of iron meteorite from Canyon Diablo (Meteor Crater) Arizona were found to have the least radiogenic composition of any material in the solar system. The U/Pb ratio was so low that no radiogenic decay was detected in the isotopic composition. As illustrated in figure 1, this point defines the lower (left) end of the isochron. Therefore, troilite found in Canyon Diablo represents the primeval lead isotope composition of the solar system, dating back to .
Stony meteorites, however, exhibited very high 207Pb/204Pb versus 206Pb/204Pb ratios, indicating that these samples came from the crust or mantle of the planetesimal. Together, these samples define an isochron whose slope gives the age of the meteorites as 4.55 Byr.
Patterson also analyzed terrestrial sediment collected from the ocean floor, which was believed to be representative of the Bulk Earth composition. Because the isotope composition of this sample plotted on the meteorite isochron, it suggested that Earth had the same age and origin as meteorites, therefore solving the age of the Earth and giving rise to the name 'geochron'.
Lead isotope isochron diagram used by C. C. Patterson to determine the age of the Earth in 1956. Animation shows progressive growth over 4550 million years (Myr) of the lead isotope ratios for two stony meteorites (Nuevo Laredo and Forest City) from initial lead isotope ratios matching those of the Canyon Diablo iron meteorite.
Precise Pb–Pb dating of meteorites
Chondrules and calcium–aluminium-rich inclusions (CAIs) are spherical particles that make up chondritic meteorites and are believed to be the oldest objects in the Solar System. Hence precise dating of these objects is important to constrain the early evolution of the Solar System and the age of the Earth. The U–Pb dating method can yield the most precise ages for early Solar System objects due to the optimal half-life of 238U. However, the absence of zircon or other uranium-rich minerals in chondrites, and the presence of initial non-radiogenic Pb (common Pb), rules out direct use of the U–Pb concordia method. Therefore, the most precise dating method for these meteorites is the Pb–Pb method, which allows a correction for common Pb.
When the abundance of 204Pb is relatively low, this isotope has larger measurement errors than the other Pb isotopes, leading to very strong correlation of errors between the measured ratios. This makes it difficult to determine the analytical uncertainty on the age. To avoid this problem, researchers developed an 'alternative Pb–Pb isochron diagram' (see figure) with reduced error correlation between the measured ratios. In this diagram the 204Pb/206Pb ratio (the reciprocal of the normal ratio) is plotted on the x-axis, so that a point on the y-axis (zero 204Pb/206Pb) would have infinitely radiogenic Pb. The ratio plotted on the y-axis is the 207Pb/206Pb ratio, corresponding to the slope of a normal Pb/Pb isochron, which yields the age. The most accurate ages are produced by samples near the y-axis; such samples were obtained by step-wise leaching and analysis of the samples.
Previously, when applying the alternative Pb–Pb isochron diagram, the 238U/235U isotope ratios were assumed to be invariant among meteoritic material. However, it has been shown that 238U/235U ratios are variable among meteoritic material. To accommodate this, U-corrected Pb–Pb dating analysis is used to generate ages for the oldest solid material in the Solar System using a revised 238U/235U value of 137.786 ± 0.013 to represent the mean 238U/235U isotope ratio in bulk inner Solar System materials.
The result of U-corrected Pb–Pb dating has produced ages of 4567.35 ± 0.28 My for CAIs (A) and chondrules with ages between 4567.32 ± 0.42 and 4564.71 ± 0.30 My (B and C) (see figure). This supports the idea that CAIs crystallization and chondrule formation occurred around the same time during the formation of the solar system. However, chondrules continued to form for approximately 3 My after CAIs. Hence the best age for the original formation of the Solar System is 4567.7 My. This date also represents the time of initiation of planetary accretion. Successive collisions between accreted bodies led to the formation of larger and larger planetesimals, finally forming the Earth–Moon system in a giant impact event.
The age difference between CAIs and chondrules measured in these studies verifies the chronology of the early Solar System derived from extinct short-lived nuclide methods such as 26Al–26Mg, thus improving our understanding of the development of the Solar System and the formation of the Earth.
References
External links
Geochronology and Isotopes Data Portal
Radiometric dating | Lead–lead dating | [
"Chemistry"
] | 1,803 | [
"Radiometric dating",
"Radioactivity"
] |
14,457,354 | https://en.wikipedia.org/wiki/Zeeman%E2%80%93Doppler%20imaging | In astrophysics, Zeeman–Doppler imaging is a tomographic technique dedicated to the cartography of stellar magnetic fields, as well as surface brightness or spots and temperature distributions.
This method makes use of the ability of magnetic fields to polarize the light emitted (or absorbed) in spectral lines formed in the stellar atmosphere (the Zeeman effect). The periodic modulation of Zeeman signatures during the stellar rotation is employed to make an iterative reconstruction of the vectorial magnetic field at stellar surface.
The method was first proposed by Marsh and Horne in 1988, as a way to interpret the emission line variations of cataclysmic variable stars. This technique is based on the principle of maximum entropy image reconstruction; it yields the simplest magnetic field geometry (as a spherical harmonics expansion) among the various solutions compatible with the data.
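As a purely schematic illustration of the maximum-entropy principle invoked here (this is not the ZDI algorithm itself; the toy map, response matrix, noise level and regularization weight are invented for the example), the sketch below recovers a positive map from noisy linear measurements by trading data misfit against entropy.

```python
# Toy maximum-entropy reconstruction; real ZDI instead fits a spherical-harmonic
# description of the surface magnetic field to time series of polarized line profiles.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n_pix, n_obs = 20, 12
true_map = np.abs(np.sin(np.linspace(0.1, np.pi - 0.1, n_pix))) + 0.1  # toy positive "surface map"
response = rng.normal(size=(n_obs, n_pix))                             # toy linear response matrix
sigma = 0.05
data = response @ true_map + rng.normal(scale=sigma, size=n_obs)       # noisy "observations"

def objective(m, alpha=1e-2):
    """Chi-squared misfit minus alpha times a Shannon-like entropy of the map."""
    chi2 = np.sum(((response @ m - data) / sigma) ** 2)
    entropy = -np.sum(m * np.log(m / m.sum()))
    return chi2 - alpha * entropy

result = minimize(objective, x0=np.full(n_pix, true_map.mean()),
                  bounds=[(1e-6, None)] * n_pix, method="L-BFGS-B")
print("reconstruction RMS error:", np.sqrt(np.mean((result.x - true_map) ** 2)))
```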
This technique is the first to enable the reconstruction of the vectorial magnetic geometry of stars similar to the Sun. It now enables systematic studies of stellar magnetism and provides insights into the geometry of large arches formed by magnetic fields above stellar surfaces. To collect the observations related to Zeeman-Doppler Imaging, astronomers use stellar spectropolarimeters like ESPaDOnS at CFHT on Mauna Kea (Hawaii), HARPSpol at the ESO's 3.6m telescope (La Silla Observatory, Chile), as well as NARVAL at Bernard Lyot Telescope (Pic du Midi de Bigorre, France).
The technique is very reliable, as the reconstruction of the magnetic field maps with different algorithms yields almost identical results, even with poorly sampled data sets. It makes use of high-resolution time-series spectropolarimetric observations (Stokes parameter spectra). It has however been shown, from both numerical simulations and observations, that the magnetic field strength and complexity are underestimated if no linear polarization spectra are available from observations. Since linear polarization signatures are weaker than circular polarization signatures, their detection is not as reliable, particularly for cool stars. Therefore, the observations are normally limited to the Stokes I and V parameters. With more modern spectropolarimeters such as the recently installed SPIRou at CFHT and CRIRES+ at the Very Large Telescope (Chile), the sensitivity to linear polarization will increase, allowing for more detailed studies of cool stars in the future.
References
External links
Zeeman-Doppler Imaging
Stellar tomography: when medical imaging helps astronomy
Recent examples of using Zeeman-Doppler Imaging
Astrophysics
Spectroscopy | Zeeman–Doppler imaging | [
"Physics",
"Chemistry",
"Astronomy"
] | 513 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Astrophysics",
"Spectroscopy",
"Astronomical sub-disciplines"
] |
14,457,415 | https://en.wikipedia.org/wiki/Manufacturer%27s%20empty%20weight | In aviation, manufacturer's empty weight (MEW) (also known as manufacturer's weight empty (MWE)) is the weight of the aircraft "as built" and includes the weight of the structure, power plant, furnishings, installations, systems, and other equipment that are considered an integral part of an aircraft before additional operator items are added for operation.
Basic aircraft empty weight is essentially the same and excludes any baggage, passengers, or usable fuel. Some manufacturers define this empty weight as including optional equipment, e.g. GPS units, cargo baskets, or spotlights.
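As a minimal sketch of how these quantities relate (the field names and numbers are illustrative assumptions, not values from any manufacturer's specification), basic empty weight can be treated as MEW plus installed optional equipment, still excluding usable fuel, passengers and baggage.

```python
# Illustrative weight build-up; numbers are made up for the example.
from dataclasses import dataclass

@dataclass
class AircraftWeights:
    manufacturer_empty_weight_kg: float  # "as built": structure, powerplant, systems
    optional_equipment_kg: float = 0.0   # e.g. GPS units, cargo baskets, spotlights

    @property
    def basic_empty_weight_kg(self) -> float:
        return self.manufacturer_empty_weight_kg + self.optional_equipment_kg

weights = AircraftWeights(manufacturer_empty_weight_kg=12_000.0, optional_equipment_kg=150.0)
print(weights.basic_empty_weight_kg)  # 12150.0, still without fuel, crew, passengers or baggage
```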
Specification MEW
This is the MEW quoted in the manufacturer's standard specification documents and is the aircraft standard basic dry weight upon which all other standard specifications and aircraft performance are based by the manufacturer.
The Specification MEW includes the weight of:
Airframe structure – primary and secondary structures (fuselage, wing, tail, control surfaces, nacelles, landing gear).
Powerplant.
Auxiliary power unit (APU).
Systems (instruments, navigation, hydraulics, pneumatics, fuel systems (but not fuel itself), electrical system, electronics, fixed furnishings (but not operator specific), air conditioning, anti-ice system, etc.).
Fixed equipment and services considered an integral part of the aircraft.
Fixed ballast (if present).
Closed system fluids (such as hydraulic fluids).
For small aircraft, the MEW may include unusable fuel and oil.
The Specification MEW excludes the weight of:
All fuel (both usable and unusable).
Potable water, anti-ice, and chemicals in toilets.
Engine oil and APU oil.
All specification items, selections, and installations which are non-basic (i.e. optional selections).
Customer specific selections, installations, and options.
Operator/operating items.
Removable equipment and services.
Payload.
For small aircraft, the specification MEW is known as the standard empty weight (or standard weight empty).
See also
Aircraft gross weight
Operating empty weight
References
External links
Aircraft weight and balance
Aircraft weight measurements | Manufacturer's empty weight | [
"Physics",
"Engineering"
] | 425 | [
"Aircraft weight measurements",
"Mass",
"Matter",
"Aerospace engineering"
] |
14,457,458 | https://en.wikipedia.org/wiki/Women%20in%20medicine | The presence of women in medicine, particularly in the practicing fields of surgery and as physicians, has been traced to the earliest of history. Women have historically had lower participation levels in medical fields compared to men with occupancy rates varying by race, socioeconomic status, and geography.
Women's informal practice of medicine in roles such as caregivers, or as allied health professionals, has been widespread. Since the start of the 20th century, most countries of the world provide women with access to medical education. Not all countries ensure equal employment opportunities, and gender equality has yet to be achieved within medical specialties and around the world.
History
Ancient medicine
The involvement of women in the field of medicine has been recorded in several early civilizations. An Egyptian of the Old Kingdom of Egypt, Peseshet, described in an inscription as "lady overseer of the female physicians", is the earliest woman named in the history of science. Ubartum lived around 2050 BC in Mesopotamia and came from a family of several physicians. Agamede was cited by Homer as a healer in ancient Greece before the Trojan War. Agnodice was the first female physician to practice legally in 4th century BC Athens. Metrodora was a physician and generally regarded as the first female medical writer. Her book, On the Diseases and Cures of Women, was the oldest medical book written by a female and was referenced by many other female physicians. She credited much of her writings to the ideologies of Hippocrates.
Medieval Europe
During the Middle Ages, convents were a centralized place of education for women, and some of these communities provided opportunities for women to contribute to scholarly research. An example is the German abbess Hildegard of Bingen, whose prolific writings include treatments of various scientific subjects, including medicine, botany and natural history. She is considered Germany's first female physician.
Women in the Middle Ages participated in healing techniques and several capacities in medicine and medical education. Women occupied select ranks of medical personnel during the period. They worked as herbalists, midwives, surgeons, barber-surgeons, nurses, and traditional empirics. Women healers treated most patients, not limiting themselves to treating solely women. The names of 24 women described as surgeons in Naples, Italy between 1273 and 1410 have been recorded, and references have been found to 15 women practitioners, most of them Jewish and none described as midwives, in Frankfurt, Germany between 1387 and 1497. The earliest known English women doctors, Solicita and Matilda Ford, date to the late twelfth century; they were referred to as medica, a term for trained physicians.
Women also engaged in midwifery and healing arts without having their activities recorded in written records, and practiced in rural areas or where there was little access to medical care. Society in the Middle Ages limited women's role as physician. Once universities established faculties of medicine during the thirteenth century, women were excluded from advanced medical education. Licensure began to require clerical vows for which women were ineligible, and healing as a profession became male-dominated.
On many occasions, women had to fight accusations of illegal practice brought by men, which called their motives into question. If they were not accused of malpractice, then women were considered "witches" by both clerical and civil authorities. Surgeons and barber-surgeons were often organized into guilds, which could hold out longer against the pressures of licensure. Like other guilds, a number of the barber-surgeon guilds allowed the daughters and wives of their members to take up membership in the guild, generally after the man's death. Katherine "la surgiene" of London, daughter of Thomas the surgeon and sister of William the Surgeon, belonged to a guild in 1286. Documentation of female members in the guilds of Lincoln, Norwich, Dublin and York continues until late in the period.
Midwives, who assisted pregnant women through childbirth and provided some aftercare, were exclusively women. Midwives constituted roughly one third of female medical practitioners. Men did not involve themselves in women's medical care; women did not involve themselves in men's health care. The southern Italian coastal town of Salerno was a center of medical education and practice in the 12th century. In Salerno the physician Trota of Salerno compiled a number of her medical practices in several written collections. One work on women's medicine associated with her, known in English as 'On Treatments for Women', formed the core of what came to be known as the Trotula ensemble, a compendium of three texts that circulated throughout medieval Europe. Trota herself gained a reputation that spread as far as France and England. There are also references in the writings of other Salernitan physicians to the 'Salernitan women', which give some idea of local empirical practices.
Dorotea Bucca, an Italian physician, was chair of philosophy and medicine at the University of Bologna for over forty years from 1390. Other Italian women whose contributions in medicine have been recorded include Abella, Jacqueline Felice de Almania, Alessandra Giliani, Rebecca de Guarna, Margarita, Mercuriade (14th century), Constance Calenda, Clarice di Durisio (15th century), Constanza, Maria Incarnata and Thomasia de Mattio.
Medieval Islamic world
For the medieval Islamic world, little information is known about female medical practitioners although it is likely that women were regularly involved in medical practice in some capacity. Male medical writers refer to the presence of female practitioners (a ṭabība) in describing certain procedures or situations. The late-10th to early-11th century Andalusi physician and surgeon al-Zahrawi wrote that certain medical procedures were difficult for male doctors practicing on female patients because of the need to touch the genitalia. The male practitioner was required to either find a female doctor who could perform the procedure, or a eunuch physician, or a midwife who took instruction from the male surgeon. The existence of female practitioners can thus be inferred from such references, even though it is rarely documented explicitly. Midwives played a prominent role in the delivery of women's healthcare. For these practitioners, there is more detailed information, both in terms of the prestige of their craft (ibn Khaldun calls it a noble craft, "something necessary in civilization") and in terms of biographical information on historic women. To date, no known medical treatise written by a woman in the medieval Islamic world has been identified.
Western medicine in China
Traditional Chinese medicine based on the use of herbal medicine, acupuncture, massage and other forms of therapy has been practiced in China for thousands of years. Western medicine was introduced to China in the 19th century, mainly by medical missionaries sent from various Christian mission organizations, such as the London Missionary Society (Britain), the Methodist Church (Britain) and the Presbyterian Church (US). Benjamin Hobson (1816–1873), a medical missionary sent by the London Missionary Society in 1839, set up the Wai Ai Clinic in Guangzhou, China. The Hong Kong College of Medicine for Chinese was founded in 1887 by the London Missionary Society, with its first graduate (in 1892) being Sun Yat-sen.
Due to the social custom that men and women should not be near to one another, Chinese women were reluctant to be treated by Western male doctors. This resulted in a need for female doctors. One of these was Sigourney Trask of the Methodist Episcopal Church, who set up a hospital in Fuzhou during the mid-19th century. Trask also arranged for a local girl, Hü King Eng, to study medicine at Ohio Wesleyan Female College, with the intention that Hü would return to practise western medicine in Fuzhou. After graduation, Hü became the resident physician at Fuzhou's Woolston Memorial Hospital in 1899 and trained several female physicians. Another female medical missionary, Mary H. Fulton (1854–1927), was sent by the Foreign Missions Board of the Presbyterian Church (US) to found the first medical college for women in China. Known as the Hackett Medical College for Women, this college was located in Guangzhou, China, and was enabled by a large donation from Edward A. K. Hackett (1851–1916) of Indiana. The college was dedicated in 1902 and offered a four-year curriculum. By 1915, there were more than 60 students, mostly in residence. Most students became Christians, due to the influence of Fulton. The college was aimed at the spreading of Christianity and modern medicine and the elevation of Chinese women's social status. The graduates of this college included Chau Lee-sun (1890–1979) and Wong Yuen-hing, both of whom graduated in the late 1910s and then practiced medicine in the hospitals in Guangdong province.
Midwifery in 18th-century America
During this era, for the majority of American women, whether European or African American, childbirth was considered a female event in which female friends, relatives, and the local midwife gathered to support the birthing mother. Midwives gained their knowledge through experience and apprenticeship. Out of the different occupations women took on around this time, midwifery was one of the highest-paying. In the 18th century, households tended to have an abundance of children, due in large part to hired help and diminished mortality rates. Despite the high chance of complications in labor, the American midwife Martha Ballard had notably high success rates in delivering healthy babies to healthy mothers.
Women's health movement, 1970s
The 1970s marked an increase of women entering and graduating from medical school in the United States. From 1930 to 1970, a period of 40 years, around 14,000 women graduated from medical school. From 1970 to 1980, a period of 10 years, over 20,000 women graduated from medical school. This increase of women in the medical field was due to both political and cultural changes. Two laws in the U.S. lifted restrictions for women in the medical field – Title IX of the Higher Education Act Amendments of 1972 and the Public Health Service Act of 1975, banning discrimination on grounds of gender. In November 1970, the Assembly of the Association of American Medical Colleges rallied for equal rights in the medical field.
Throughout the decade women's ideas about themselves and their relation to the medical field were shifting due to the women's feminist movement. A sharp increase of women in the medical field led to developments in doctor-patient relationships, changes in terminology and theory. One area of medical practice that was challenged and changed was gynecology. Author Wendy Kline noted that "to ensure that young brides were ready for the wedding night, [doctors] used the pelvic exam as a form of sex instruction."
With higher numbers of women enrolled in medical school, medical practices like gynecology were challenged and subsequently altered. In 1972, the University of Iowa Medical School instituted a new training program for pelvic and breast examinations. Students would act both as the doctor and the patient, allowing each student to understand the procedure and create a more gentle, respectful examination. With changes in ideologies and practices throughout the 70s, by 1980 over 75 schools had adopted this new method.
Along with women entering the medical field and feminist rights movement, came along the women's health movement which sought alternative methods of health care for women. This came through the creation of self-help books, most notably Our Bodies, Ourselves: A Book by and for Women. This book gave women a "manual" to help understand their body. It challenged hospital treatment, and doctors' practices. Aside from self-help books, many help centres were opened: birth centres run by midwives, safe abortion centres, and classes for educating women on their bodies, all with the aim of providing non-judgmental care for women. The women's health movement, along with women involved in the medical field, opened the doors for research and awareness for female illness like breast cancer and cervical cancer.
Scholars in the history of medicine had developed some study of women in the field—biographies of pioneering women physicians were common prior to the 1960s—and study of women in medicine took particular root with the advent of the women's movement in the 1960s, and in conjunction with the women's health movement.
Modern medicine
In 1540, Henry VIII of England granted the charter for the Company of Barber-Surgeons; while this led to the specialization of healthcare professions (i.e. surgeons and barbers), women were barred from professional practice. Women did continue to practice during this time without formal training or recognition in England and eventually North America for the next several centuries.
Women's participation in the medical professions was generally limited by legal and social practices during the decades while medicine was professionalizing. Women openly practiced medicine in the allied health professions (nursing, midwifery, etc.), and throughout the nineteenth and twentieth centuries, women made significant gains in access to medical education and medical work through much of the world. These gains were sometimes tempered by setbacks; for instance, Mary Roth Walsh documented a decline in women physicians in the US in the first half of the twentieth century, such that there were fewer women physicians in 1950 than there were in 1900. Through the latter half of the twentieth century, women made gains generally across the board. In the United States, for instance, women were 9% of total US medical school enrollment in 1969; this had increased to 20% in 1976. By 1985, women constituted 16% of practicing American physicians.
At the beginning of the 21st century in industrialized nations, women have made significant gains, but have yet to achieve parity throughout the medical profession. Women have achieved parity in medical school in some industrialized countries, since 2003 forming the majority of the United States medical school applicants. In 2007–2008, women accounted for 49% of medical school applicants and 48.3% of those accepted. According to the Association of American Medical Colleges (AAMC) 48.4% (8,396) of medical degrees awarded in the US in 2010–2011 were earned by women, an increase from 26.8% in 1982–1983. While more women are taking part in the medical field, a 2013–2014 study reported that there are significantly fewer women in leadership positions within the academic realm of medicine. This study found that women accounted for 16% of deans, 21% of the professors, and 38% of faculty, as compared to their male counterparts.
The practice of medicine remains disproportionately male overall. In industrialized nations, the recent parity in gender of medical students has not yet trickled into parity in practice. In many developing nations, neither medical school nor practice approach gender parity. Moreover, there are skews within the medical profession: some medical specialties, such as surgery, are significantly male-dominated, while other specialties are significantly female-dominated, or are becoming so. For example, in the United States, female physicians outnumber male physicians in pediatrics and female residents outnumber male residents in family medicine, obstetrics and gynecology, pathology, and psychiatry. In several different areas of medicine (general practice, medical specialties, surgical specialties) and in various roles, medical professionals tend to overestimate women's true representation, and this correlates with a decreased willingness to support gender-based initiatives among men, impeding further progress towards gender parity.
Women continue to dominate in nursing. In 2000, 94.6% of registered nurses in the United States were women. In health care professions as a whole in the US, women numbered approximately 14.8 million, as of 2011.
Biomedical research and academic medical professions—i.e., faculty at medical schools—are also disproportionately male. Research on this issue, called the "leaky pipeline" by the National Institutes of Health and other researchers, shows that while women have achieved parity with men in entering graduate school, a variety of discrimination causes them to drop out at each stage in the academic pipeline: graduate school, postdoc, faculty positions, achieving tenure; and, ultimately, in receiving recognition for groundbreaking work.
Glass ceiling
The "glass ceiling" is a metaphor to convey the undefined obstacles that women and minorities face in the workplace. Female physicians of the late 19th-century faced discrimination in many forms due to the prevailing Victorian era attitude that the ideal woman be demure, display a gentle demeanor, act submissively, and enjoy a perceived form of power that should be exercised over and from within the home. Medical degrees were difficult for women to earn, and once practicing, discrimination from landlords for medical offices, left female physicians to set up their practices on "Scab Row" or "bachelor's apartments."
The Journal of Women's Health surveyed physician mothers and their physician daughters to analyze the effect that discrimination and harassment have on the individual and their career. In this study, 84% of the physician mothers had graduated from medical school prior to 1970, with the majority of these physicians graduating in the 1950s and 1960s. The authors of this study stated that discrimination in the medical field persisted after the Title VII discrimination legislation was passed in 1965. This was the case until 1970, when the National Organization for Women (NOW) filed a class action lawsuit against all medical schools in the United States. By 1975, the number of women in medicine had nearly tripled, and has continued to grow. By 2005, more than 25% of physicians and around 50% of medical school students were women. The increase of women in medicine also came with an increase of women identifying as a racial/ethnic minority, yet this population is still largely underrepresented in comparison to the general population of the medical field.
Within this specific study, 22% of physician mothers and 24% of physician daughters identified themselves as being an ethnic minority. These women reported experiencing instances of exclusion from career opportunities as a result of their race and gender. According to this article, women tend to have less confidence in their abilities as doctors, yet their performance is equivalent to that of their male counterparts. This study also commented on the impact of power dynamics within medical school, which is structured as a hierarchy that ultimately shapes the educational experience. Instances of sexual harassment contribute to the high attrition rates of women in the STEM fields.
Competition between midwifery and obstetrics
A shift from female midwifery to male-dominated obstetrics occurred with the growth of organized medical practice, such as the founding of the American Medical Association. Instead of assisting labor only in emergencies, doctors took over the delivery of babies completely, relegating midwifery to a secondary role. This is an example of the growing competition between male physicians and female midwives as obstetrics took hold. The education of women in midwifery was stunted by both physicians and public-health reformers, and midwifery came to be seen as out of practice. Societal roles also played a part in the decline of midwifery, because women were unable to obtain the education needed for licensing and, once married, were expected to embrace a domestic lifestyle. In 2018, there were 11,826 certified nurse midwives (CNMs). In 2019, there were 42,720 active physicians in Obstetrics and Gynecology.
Outside of the United States, midwifery is still practiced in several countries such as in Africa. The first school of midwives in Africa was supposedly founded by Dr. Ernst Rodenwalt in Togo in 1912. In comparison, The Juba College of Nursing and Midwifery in South Sudan (a country that gained its independence in 2011) graduated its first class of students in 2013.
Women's contributions to medicine
Historical women's medical schools
When women were routinely forbidden from medical school, they sought to form their own medical schools.
New England Female Medical College, Boston, founded in 1848.
Woman's Medical College of Pennsylvania (founded 1850 as Female Medical College of Pennsylvania)
London School of Medicine for Women (founded 1874 by Sophia Jex-Blake)
Edinburgh School of Medicine for Women (founded 1886 by Sophia Jex-Blake)
First Pavlov State Medical University of St. Petersburg (founded 1897 as Female Medical University)
Tokyo Women's Medical University (founded 1900 by Yoshioka Yayoi)
Hackett Medical College for Women, Guangzhou, China, founded in 1902 by Presbyterian Church (USA).
Historical hospitals with significant female involvement
Woman's Hospital of Philadelphia, founded in 1861, provided clinical experience for Woman's Medical College of Pennsylvania students
New England Hospital for Women and Children (now called Dimock Community Health Center), founded in 1862 by women doctors "for the exclusive use of women and children"
New Hospital for Women (founded in the 1870s by Elizabeth Garrett Anderson and run largely by women, for women)
South London Hospital for Women and Children (founded 1912 by Eleanor Davies-Colley and Maud Chadburn; closed 1984; employed an all-woman staff)
Pioneering women in early modern medicine
18th century
Madeleine-Françoise Calais (fl. 1740) was a pioneer who is referred to as the first female dentist in France.
Dorothea Erxleben (1715–1762) was the first female doctor in Germany and the first woman worldwide to be granted an MD by a university.
Salomée Halpir (1718 – after 1763) was a Polish medic and oculist who is often referred to as the first female doctor from the Grand Duchy of Lithuania.
19th century
Lovisa Årberg (1801–1881) was the first female doctor and surgeon in Sweden; whereas, Amalia Assur (1803–1889) was the first female dentist in Sweden and possibly Europe.
Marie Durocher (1809–1893) was a Brazilian obstetrician, midwife and physician. She is considered the first female doctor in Brazil and the Americas.
Ann Preston (1813–1872) was the first female to become the dean of a medical school [Woman's Medical College of Pennsylvania (WMCP)] in 1866.
Elizabeth Blackwell (1821–1910), who was England-born, was the first woman to graduate from medical school in the United States. She obtained her MD in 1849 from Geneva College, New York City.
Rebecca Lee Crumpler, (1831–1895) became the first African American female physician in the United States in 1864 upon being awarded her M.D. by New England Female Medical College in Boston.
Lucy Hobbs Taylor (1833–1910) was the first female dentist in the United States.
Elizabeth Garrett Anderson (1836–1917) was a pioneering feminist in Britain who became the first female doctor in the United Kingdom in 1865 and a co-founder of London School of Medicine for Women.
Madeleine Brès (1839–1925) was the first female medical doctor in France.
Sophia Jex-Blake (1840–1912) was an English physician, feminist and teacher who was the first woman to practice medicine in Scotland in 1878.
Sophia Bambridge (1841–1910) was the first female doctor in American Samoa.
Frances Hoggan (1843–1927) became the first female doctor in Wales in 1870. She was also the first British woman to receive a doctorate in medicine (1870).
Eliza Walker Dunbar (1845-1925) was the first woman in the UK to be appointed as a House Surgeon with responsibilities over male doctors (1874) and the first to receive a UK medical licence by examination (1877).
Jennie Kidd Trout (1841–1921) was the first woman in Canada to become a licensed medical doctor in March 1875.
Rosina Heikel (1842–1929) was a feminist and the first female physician in Finland (1878), as well as in the Nordic countries.
Isala Van Diest (7 May 1842 – 6 February 1916) was the first female medical doctor and the first female university graduate in Belgium.
Nadezhda Suslova (1843–1918), a graduate of Zurich University, was the first female doctor in Russia
Edith Pechey-Phipson (1845–1908) was a pioneering English doctor in India. She received her MD in 1877 from the University of Bern and Licentiate in Midwifery in 1877 at the Royal College of Physicians of Ireland.
Mary Scharlieb (1845–1930) was a pioneer British female physician, as she was the first woman to be elected to the honorary visiting staff of a hospital in the United Kingdom.
Vilma Hugonnai (1847–1922) was the first female doctor in Hungary. She studied medicine in Zürich and received her degree in 1879. However, she had to work as a midwife until 1897 when the Hungarian authorities finally accepted her degree. Hugonnai then started her own medical practice.
Margaret Cleaves (1848–1917) was a pioneering doctor in brachytherapy who obtained her M.D. in 1873. She was the first female appointed to the University of Iowa Medical Department's examining committee in 1885.
Anastasia Golovina, also known as Anastassya Nikolau Berladsky-Golovina, and Atanasya Golovina (1850–1933), was the first female doctor in Bulgaria.
Ogino Ginko (1851–1913) was the first licensed and practicing female physician of Western medicine in Japan.
Bohuslava Kecková (1854–1911), first Bohemian (Czech) woman to obtain a medical degree in 1880 from University of Zurich.
Aletta Jacobs (1854–1929) was the first woman to complete a university course in the Netherlands and the first female doctor in the country.
Hope Bridges Adams Lehmann (1855–1916) was the first female general practitioner and gynecologist in Munich, Germany.
Grace Cadell (1855–1918) and Marion Gilchrist (1864–1952) were the first women to qualify as doctors in Scotland respectively in 1891 and 1894.
Draga Ljočić-Milošević (1855–1926) was a feminist activist and the first female physician in Serbia. She graduated from Zurich University in 1879
Henriette Saloz-Joudra (1855–1928) successfully defended a doctoral thesis in cardiology at the University of Geneva in June 1883.
Ana Galvis Hotz (1855–1934) was the first female doctor in Colombia. She was also the first Colombian woman (and first woman from Latin America) to obtain a medical degree.
Constance Stone (1856–1902) was the first woman to practice medicine in Australia.
Dolors Aleu i Riera (1857–1913) was the first female medical doctor in Spain when she started practicing medicine in 1879.
Maria Cuțarida-Crătunescu (1857–1919) was the first female doctor in Romania.
Lilian Welsh (1858–1938) was the first woman full professor at Goucher College.
Sonia Belkind (1858–1943), who was Russian-born, was the first female doctor in Palestine.
Isabel Cobb (1858–1947), who earned her M.D. in 1892, was Cherokee and the first woman physician in Indian Territory. She was also an alumna of Woman's Medical College of Pennsylvania.
Matilde Montoya (1859–1939) became the first female physician in Mexico in 1887.
Kadambini Ganguly (1861–1923) was the first Indian woman to obtain a medical degree in India upon graduating from the Calcutta Medical College in 1886.
Elsie Inglis (1864–1917), born in India, was a pioneering Scottish doctor and suffragist who obtained her MD at Edinburgh School of Medicine for Women and worked at Rotunda Hospital, Dublin.
Annie Lowrie Alexander (1864–1929) was the first licensed female physician in the Southern United States
Emily Charlotte Thomson (1864–1955) was one of the first women admitted to professional medical societies in Scotland and co-founded the Dundee Women's Hospital in 1896.
Anandi Gopal Joshi (1865–1887), the first Indian woman to obtain a medical degree having graduated from the Woman's Medical College of Pennsylvania in 1886.
Susan La Flesche Picotte (1865–1915) was the first Native American woman to obtain a medical degree.
Sofia Okunevska (1865–1926) was the first Ukrainian female doctor.
Mary Josephine Hannan (1865–1935) was the first Irishwoman to graduate with the following credentials: LRCPI & SI and LM.
Marie Spångberg Holth (1865–1942) was the first woman doctor in Norway after graduating in medicine from the Royal Frederiks University of Christiania in 1893.
Anne Walter Fearn (1865–1938) practiced as a medical doctor in Shanghai, China, for almost 40 years.
Eloísa Díaz (1866–1950) became the first female doctor in Chile upon graduating from the Universidad de Chile on 27 December 1886. She obtained her degree on 3 January 1887.
Merbai Ardesir Vakil (1868–1941) was an Indian physician and the first Asian woman to graduate from a Scottish university.
Eva Jellett (1868–1958), first woman to graduate from Trinity College Dublin with a medical degree in 1905.
Bertha E. Reynolds (1868–1961) was among the first women licensed to practice medicine in Wisconsin (serving the rural communities of Lone Rock and Avoca).
Emma K. Willits (1869–1965) was believed to be only the third woman to specialize in surgery and the first to head a Department of General Surgery at Children's Hospital in San Francisco, 1921–1934.
Alice Hamilton (1869–1970) was an American physician, research scientist, and author who is best known as a leading expert in the field of occupational health and a pioneer in the field of industrial toxicology. She was also the first woman appointed to the faculty of Harvard University.
Vera Gedroitz (1870–1932) was the first female professor of surgery in the world, as well as the first female military surgeon in Russia.
Maria Montessori (1870–1952), renowned educator and one of the first female medical doctors in Italy.
Milica Šviglin Čavov (b. unknown, circa 1870s) was the first Croatian female doctor. She graduated from the Medical School in Zürich in 1893, but was not allowed to work in Croatia.
Florence Sabin (1871–1953) was the first woman elected to the United States National Academy of Sciences.
Yoshioka Yayoi (1871–1959), one of the first women to gain a medical degree in Japan; founded a medical school for women in 1900.
Hannah Myrick (1871–1973) had helped to introduce the use of X-rays at the New England Hospital for Women and Children.
Laura Esther Rodriguez Dulanto (1872–1919) was the first female doctor in Peru upon obtaining her medical degree.
Marie Equi (1872–1952) was an American doctor and activist for women's access to birth control and abortion.
Fannie Almara Quain (1874–1950) was the first woman born in North Dakota to earn a doctor of medicine degree.
Karola Maier Milobar (born 1876) became the first female physician to practice in Croatia in 1906.
Bertha De Vriese (1877–1958) was the first Belgian woman to obtain a medical degree from Ghent University.
Selma Feldbach (1878–1924) was the first Estonian woman to become a medical doctor.
Andrea Evangelina Rodríguez Perozo (1879–1947) was the first female medical school graduate in the Dominican Republic.
Alice Mary Barry (1880–1955) was a doctor and the first woman nominated fellow of the Royal College of Physicians of Ireland.
Ernestina Paper (b. unknown, circa mid–1800s) was the first Italian woman to receive an advanced degree (in medicine) in 1877.
Doctor Ethel Constance Cousins (1882–1944) and nurse Elizabeth Brodie were the first European women admitted to Bhutan in 1918 as part of a missionary effort to curtail a cholera outbreak.
Muthulakshmi Reddi (1886–1968) was one of the early female medical doctors in India and a major social reformer.
María Elisa Rivera Díaz (1887–1981) (1909), Ana Janer (1909), Palmira Gatell (1910), and Dolores Piñero (1892–1975) (1913) were the first women to earn a medical degree in Puerto Rico. María Elisa Rivera Díaz and Ana Janer graduated in the same medical school class in 1909 and thus could both be considered the first female Puerto Rican physicians.
Anna Petronella van Heerden (1887–1975) was the first Afrikaner woman to qualify as a medical doctor in South Africa. Her thesis, which she obtained a doctorate on in 1923, was the first medical thesis written in Afrikaans.
Matilde Hidalgo (1889–1974) was the first female doctor in Ecuador.
Johanna Hellman (1889–1982) was a German physician who specialized in surgery, and the first woman to be a member of the German Society for Surgery.
Sun Chau Lee (1890–1979) was one of the first female Chinese doctors of Western medicine in China.
Mabel Wolff (1890–1981) and her sister Gertrude L. Wolff developed the first midwifery training school in Sudan in 1930. Mastura Khidir, one of the original students, was awarded a medal from King George V in 1945 for being the last surviving midwife from the first graduating class.
Mary Hearn (1891–1969) was a gynaecologist and first woman fellow of the Royal College of Physicians of Ireland.
Concepción Palacios Herrera (1893–1981) was the first female physician in Nicaragua.
Evelyn Totenhofer (1894–1977) became the first (female) resident nurse for Pitcairn Islands in 1944.
Jane Cummins (1899–1982), who possessed a DMRE and DTM&H, was an officer in the WRAF.
Irene Condachi (1899–1970), who earned her M.D. in 1927, was one of only two practicing female doctors in Malta during World War II.
Ah-hsin Tsai (1899–1990) was colonial Taiwan's first female physician.
20th and 21st centuries
Ana Aslan (1897–1988) was a Romanian biologist and physician, specialist in gerontology, academician from 1974 and the director of the National Institute of Geriatrics and Gerontology (1958–1988).
Marguerite Champendal (1870–1928) was the first woman from Geneva to earn her M.D. at the University of Geneva in 1900.
Emily Siedeberg (1873–1968) became the first female doctor in New Zealand in 1896. Ellen Dougherty (1844–1919) became New Zealand's first registered nurse in 1902 whereas Akenehi Hei (1878–1910) was the first Māori female to qualify as a nurse in 1908 in New Zealand.
Yu Meide (1874–1960) became the first Chinese Western medicine female doctor in Macau when she started a medical practice in 1906.
Oból Voansnac and Sofie Lyberth were the first Western-educated Greenlandic women to train as midwives in Greenland sometime in the early 20th century.
Lilian Grandin (1876–1924) was the first female doctor in Jersey. In 1907, Eleanor Diaper became the first nurse to work as a district nurse in Jersey.
Grace Pepe Malemo Haleck (1894–1987), Initia Taveuveu and Feiloa'iga Iosefa became the first qualified female nurses in American Samoa upon completing their training in 1916.
Dorothy Pantin (1896–1985) was the first woman doctor and surgeon of the Isle of Man.
Deaconess Mette Cathrine Thomsen was the first trained female nurse to work in the Faroe Islands from 1897 to 1915.
Eshba Dominika Fominichna (born 1897) became the first female doctor in Abkhazia after having returned from earning her medical degree in 1925 at the Baku State University.
Safiye Ali (1894–1952) was the first Turkish woman to have obtained a medical degree.
Damaye Soumah Cissé, mother of the renowned educator and politician Jeanne Martin Cissé (1926–2017), was one of the first midwives in Guinea.
Josephine Rera (1903–1987) was the first woman doctor in Borough Park and Bensonhurst, Brooklyn in New York City. She received the American Medical Association commendation for 50th Year in Practice. Rera graduated in 1926 with an M.D. diploma at the New York Homeopathic Medical College and Flower Hospital (now the New York Medical College in Valhalla, New York).
Lai Po-cheun was the first female to study and graduate as a medical student at the Hong Kong University during the 1920s.
Fatma bint Saada Nassor Lamki became the first female doctor in Zanzibar sometime during the 1920s.
Kornelija Sertić (1897–1988) was the first woman to graduate from the Medical School in Zagreb (which occurred in 1923).
Agnes Yewande Savage (1906–1964) was the first woman in West Africa to qualify in medicine
Joan Refshauge (1906–1979) was the first female doctor appointed to Papua New Guinea by the Australian government in 1947.
Henriette Bùi Quang Chiêu (1906–2012) was the first female doctor in Vietnam.
Sophie Redmond (1907–1955) became the first female doctor in Suriname after graduating from medical school in 1935.
Alma Dea Morani (1907–2001) was the first woman admitted to the American Society of Plastic and Reconstructive Surgeons.
Yvonne Sylvain (1907–1989) was the first female doctor in Haiti. She was the first woman accepted into the medical school of the University of Haiti, and earned her medical degree there in 1940.
Virginia Apgar (1909–1974), significant work in anesthesiology and teratology; founded field of neonatology; first woman granted full professorship at Columbia University College of Physicians & Surgeons.
Pearl Dunlevy (1909–2002) was a physician and epidemiologist and the first female president of the Biological Society of the Royal College of Surgeons of Ireland.
Isobel Addey Tate (1875–1917) was one of the first women to die while serving as a doctor overseas during World War I.
Beatrice Emmeline Simmons, a missionary and nurse, was the first Caucasian (female) formally trained in a health care profession to settle as an educator in Kiribati in 1910.
Elizabeth Abimbola Awoliyi (1910–1971) was the first female physician in Nigeria.
Badri Teymourtash (1911–1989) was the first Iranian female dentist, who received her higher education in Belgium.
Andréa de Balmann (1911–2007) was the first female doctor in French Polynesia.
Jane Elizabeth Hodgson (1915–2006) was a pioneering provider of reproductive healthcare for women and advocate for women's rights.
Matilda J. Clerk (1916–1984) was the first Ghanaian woman to win a scholarship for university education abroad and the second Ghanaian woman to become a physician. She was also the first woman to obtain a postgraduate diploma in colonial Ghana and West Africa.
Mary Malahele-Xakana (1917–1982) was the first black woman to register as a medical doctor in South Africa (in 1947).
Susan Gyankorama De-Graft Johnson (1917–1985) was the first woman to qualify as a physician in colonial Ghana.
Fatima Al-Zayani (1918–1982) became the first qualified female nurse in Bahrain in 1941. In 1969, Sadeeqa Ali Al-Awadi became the first female doctor in Bahrain upon her graduating from medical school.
Kakish Ryskulova (1918–2018) was the first woman from Kyrgyzstan to qualify as a surgeon.
Salma Ismail (1918–2014) was the first Malay woman to qualify as a doctor.
Katherine Burdon, wife of the then-government administrator, was among the women formally registered as midwives for St. Kitts and Anguilla in 1920.
Ogotu Head (1920–2001) was the first female nursing graduate from Niue after having completed her training in Samoa in 1939.
Ethna Gaffney (1920–2011) was the first female RCSI Professor of Chemistry.
Estela Gavidia (b. unknown, circa 1920) was the first woman to graduate as a doctor in El Salvador, which occurred in 1945.
Gabriela Valenzuela and Froilana Mereles were the first women to graduate with a medical degree in Paraguay in 1924. Valenzuela, however, is considered Paraguay's first practicing female doctor.
Augusta Jawara (1924–1981) was the first woman from The Gambia to qualify as a state certified midwife in 1953. She completed her training in England.
Kula Fiaola (1924–2003) became the first qualified (female) nurse in Tokelau in 1951.
Barbara Ball (1924–2011) was the first female doctor in Bermuda after having started her practice in 1949.
Margery Clare McKinnon (1924–2014) became the first female doctor in Norfolk Island around 1955.
Jean Lenore Harney (1925–2020) was the first female doctor from St. Kitts, Nevis and Anguilla to study medicine at the United Kingdom's Liverpool University.
Kapelwa Sikota (1928–2006) became the first registered nurse in Zambia in 1952.
Mary Grant (1928–2016) was the third Ghanaian woman to qualify in medicine.
Daphne Steele (1929–2004), a nurse from Guyana, became the first Black Matron in the National Health Service in 1964.
Josephine Nambooze (born 1930) started her practice as the first female doctor in Uganda in 1962. Selina Rwashana was the first psychiatric nurse in Uganda after having completed her training in the United Kingdom during the 1950s.
Tu Youyou (born 1930), first Chinese Nobel laureate in physiology or medicine and the first female citizen of the People's Republic of China to receive a Nobel Prize in any category (2015).
Lucie Lods and Jacqueline Exbroyat (1931–2013) were the first female doctors in New Caledonia. Lods started her practice in 1938, whereas Exbroyat did so during the 1960s.
Ayten Berkalp (born 1933) became the first female doctor in Northern Cyprus in 1963.
Lobsang Dolma Khangkar (1934–1989) was the first female doctor in the region of Tibet.
Widad Kidanemariam (1935–1988) became the first female doctor in Ethiopia during the 1960s.
Xhanfize (Frashëri) Basha returned to Albania to become the country's first female doctor upon completing her studies at the University of Philadelphia in 1937.
Edna Adan Ismail (born 1937) became Somaliland's first nurse midwife during the 1950s upon completing her training at the then-named Borough Polytechnic in the United Kingdom.
Hajah Habibah Haji Mohd Hussain (born 1937) was among the first women in Brunei to work as a nurse after finishing nursing school in 1955.
Marguerite Issembe became the first midwife in Gabon in 1940.
Ulai Otobed (born 1941) from Palau became the first female doctor in Micronesia. In 2020, Lara Reklai became the first Palauan female to complete her medical studies in Cuba.
María Herminia Yelsi and Digna Maldonado de Candía became the first female professional nurses in Paraguay in 1941.
Barbara Ross-Lee (born 1942) was the first African American female dean of a U.S. medical school (1993) (Ohio University College of Osteopathic Medicine).
Kek Galabru (born 1942) became the first female doctor in Cambodia upon obtaining her medical degree in France in 1968.
Choua Thao (born 1943), at the age of 14, was one of two Hmong girls recruited to receive nursing training around the time of the Secret War in Laos.
Dalva Maria Carvalho Mendes (born 1956), Brazilian doctor and soldier, was the first woman to be made a rear admiral in the Brazilian Navy.
Nancy Dickey (born 1950) was the first female president of the American Medical Association.
Rosa Mari Mandicó (born 1951) became the first qualified female nurse in Andorra in 1971. In 1991, Concepció Álvarez Martínez, Isabel Navarro Gilabert, Dominica Ramond Punsola, Montserrat Rue Capella, Pilar Serrano Gascón, Purificación Valverde Hernández and Maria Líria Viñolas Blasco were the first nurse graduates in Andorra.
Nancy C. Andrews (born 1958), first female dean of a top-ten medical school in the United States (2007), Duke University School of Medicine.
Alganesh Haregot and Alganesh Adhanom were among the first women to graduate from a formal nursing school in Eritrea in 1959.
Ramlati Ali (born 1961) became the first female doctor in Mayotte in 1996.
Anniest Hamilton, the first female doctor in Turks and Caicos Islands, began her healthcare career sometime during the 1960s.
Under the tutelage of matron Daw Dem, Pem Choden, Nim Dem, Choni Zangmo, Gyem, Namgay Dem and Tsendra Pem became the first nurses in Bhutan in 1962.
Clara Raquel Epstein (born 1963), first Mexican-American woman U.S. trained and U.S. board certified in neurological surgery and youngest recipient of the prestigious Lifetime Achievement Award in Neurosurgery.
Viopapa Annandale-Atherton is the first Samoan woman to become a doctor upon graduating from New Zealand's University of Otago in 1964. She later returned to Samoa in 1993 and started a medical practice.
Cora LeEthel Christian became the first female doctor in the United States Virgin Islands upon completing her medical education in the early 1970s.
Madeline Nyamwanza-Makonese (b. unknown, mid-20th century) was the first female doctor in Zimbabwe. She was the second African woman to become a doctor and the first African woman to graduate from the University of Rhodesia Medical School in 1970.
Rehana Kausar (b. mid-20th century) became the first woman doctor from Azad Kashmir to graduate from Medical School in Pakistan in 1971.
Elwyn Chomba became the first female doctor in Zambia in 1973. In 1999, Jacqueline Mulundika-Mulwanda became Zambia's first female surgeon.
N'Guessan Affoué Christine from Ivory Coast is the first midwife advisor of the United Nations Population Fund (UNFPA). She retired from the profession in 2016 after having worked in the field since 1976.
In 1976, Zoe Gardner became the first woman to overwinter with the Australian Antarctic Program, serving as a medical officer on sub-Antarctic Macquarie Island.
Margaret Allen (born 1948) became the first female heart transplant surgeon in the United States after performing a heart transplant in 1985.
Desiree Cox became the first (female) Rhodes Scholar from The Bahamas in 1987. She became a medical doctor upon earning her MBBS at the University of Oxford in 1992.
Marlene Toma became the first Saint Martin woman to graduate in midwifery in 1990.
Kinneh Sogur was the first home-trained female medical doctor to graduate from the University of the Gambia (UTG) in 2007. The medical school was the first one to be established in the country in 1999.
Margeret 'Molly' Brown (died 2008) was the first female doctor in the Cayman Islands.
Esther Apuahe became the first female surgeon in Papua New Guinea in 2011. Naomi Kori Pomat (died 2021) was the first female doctor in Papua New Guinea's Western Province.
ʻAmelia Afuhaʻamango Tuʻipulotu became the first Tongan (female) to receive a Nursing PhD in 2012.
Neti Tamarua Herman became the first Cook Islands (female) nurse to earn a doctorate degree in 2015.
Alice Niragire was the first Rwandan female to graduate with a master's degree in surgery in 2015 since the course was introduced in 2006. In 2018, Claire Karekezi returned to Rwanda to become the country's first female neurosurgeon.
Natalie Joyce Brewley (died 2016) was the first female doctor in the British Virgin Islands. Stacy Rhymer is considered the first female doctor in the British Virgin Islands' Virgin Gorda.
Jin Cody became the first (female) certified nurse-midwife in the Northern Mariana Islands in 2017.
Elisa Gaspar became the first female to lead the Medical Association of Angola (ORMED) in 2019.
George Tarer was the first midwife to graduate in Guadeloupe.
Olivia Torres Cruz is the first Chamorro female doctor in Guam.
Errolyn Tungu is the first female obstetrician-gynaecologist in Vanuatu.
Rebecca Edwards became the first Falkland Islander woman to become a doctor after completing her medical training at University College London.
Sergelen Orgoi developed low-cost liver transplantation for developing countries.
Adama Saidou is the first female surgeon in Niger, as well as the first woman to lead a surgical department.
See also
American Medical Women's Association
Female education
History of medicine
History of nursing
List of British women physicians
List of first female pharmacists by country
List of first female physicians by country
List of first women dentists by country
Sexism in medicine
Timeline of women's education
Timeline of women in science
Women in dentistry
Phanostratê
References
Bibliography
Abram, Ruth. Send Us a Lady Physician: Women Doctors in America, 1835–1920
Blake, Catriona. The Charge of the Parasols: Women's Entry to the Medical Profession
Borst, Charlotte G. Catching Babies: Professionalization of Childbirth, 1870–1920 (1995), Cambridge, MA: Harvard University Press
Elisabeth Brooke, Women Healers: Portraits of Herbalists, Physicians, and Midwives (biographical encyclopedia)
Chenevert, Melodie. STAT: Special Techniques in Assertiveness Training for Women in the Health Profession
Barbara Ehrenreich and Deirdre English, Witches, Midwives, and Nurses: A History of Women Healers
Deirdre English and Barbara Ehrenreich, For Her Own Good (gendering of history of midwifery and professionalization of medicine)
Henderson, Metta Lou. American Women Pharmacists: Contributions to the Profession
Junod, Suzanne White and Seaman, Barbara, eds. Voices of the Women's Health Movement, Volume One. Seven Stories Press, New York, 2012, pp. 60–62.
Luchetti, Cathy. Medicine Women: The Story of Early-American Women Doctors. New York: Crown,
Regina Morantz-Sanchez, Sympathy and Science: Women Physicians in American Medicine (1985 first ed.; 2001)
More, Ellen S. Restoring the Balance: Women Physicians and the Profession of Medicine, 1850–1995
Perrone, Bobette H. et al. Medicine Women, Curanderas, and Women Doctors (1993); cross-cultural anthropological survey of traditional societies
Pringle, Rosemary. Sex and Medicine: Gender, Power and Authority in the Medical Profession
Schwirian, Patricia M. Professionalization of Nursing: Current Issues and Trends (1998), Philadelphia: Lippincott,
Walsh, Mary Roth. Doctors Wanted: No Women Need Apply: Sexual Barriers in the Medical Profession, 1835–1975 (1977)
Biographies
Laurel Thatcher Ulrich, A Midwife's Tale: The Life of Martha Ballard Based on Her Diary, 1785–1812 (1991)
Rebecca Wojahn, Dr. Kate: Angel on Snowshoes (1956)
External links
The Archives for Women in Medicine, Countway Library, Harvard Medical School
"Changing the Face of Medicine", 2003 Exhibition at the National Library of Medicine;"NLM Exhibit Honors Outstanding Women", NIH Record, 11 November 2003. exhibition website at Changing the Face of Medicine .
Women are Changing the face of medicine
Women Physicians: 1850s–1970s – online exhibit at the Drexel University College of Medicine Archives and Special Collections on Women in Medicine and Homeopathy
"The Stethoscope Sorority", an online exhibit from the Archives for Women in Medicine
Women in Medicine Oral History Project Collection held at the University of Toronto Archives and Records Management Services
What's It Like to Be a Woman in Medicine? – online website at Cedar Sinai
Women scientists | Women in medicine | [
"Technology"
] | 10,747 | [
"Women in science and technology",
"Women scientists"
] |
14,457,589 | https://en.wikipedia.org/wiki/Protein-glutamate%20methylesterase | The enzyme protein-glutamate methylesterase (EC 3.1.1.61) catalyzes the reaction
protein L-glutamate O5-methyl ester + H2O ⇌ protein L-glutamate + methanol
This enzyme is a demethylase; more specifically, it belongs to the family of hydrolases, namely those acting on carboxylic ester bonds. The systematic name is protein-L-glutamate-O5-methyl-ester acylhydrolase. Other names in common use include chemotaxis-specific methylesterase, methyl-accepting chemotaxis protein methyl-esterase, CheB methylesterase, methylesterase CheB, protein methyl-esterase, protein carboxyl methylesterase, PME, protein methylesterase, and protein-L-glutamate-5-O-methyl-ester acylhydrolase. This enzyme participates in three metabolic pathways: two-component system - general, bacterial chemotaxis - general, and bacterial chemotaxis - organism-specific.
CheB is part of a two-component signal transduction system. These systems enable bacteria to sense, respond, and adapt to a wide range of environments, stressors, and growth conditions. Two-component systems are composed of a sensor histidine kinase (HK) and its cognate response regulator (RR). The HK catalyses its own autophosphorylation followed by the transfer of the phosphoryl group to the receiver domain on RR; phosphorylation of the RR usually activates an attached output domain, in this case a methyltransferase domain.
CheB is involved in chemotaxis. CheB methylesterase is responsible for removing the methyl group from the gamma-glutamyl methyl ester residues in the methyl-accepting chemotaxis proteins (MCP). CheB is regulated through phosphorylation by CheA. The N-terminal region of the protein is similar to that of other regulatory components of sensory transduction systems.
Structural studies
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes and .
References
Further reading
Protein domains
EC 3.1.1
Enzymes of known structure | Protein-glutamate methylesterase | [
"Biology"
] | 485 | [
"Protein domains",
"Protein classification"
] |
14,457,637 | https://en.wikipedia.org/wiki/Andrey%20Markov%20Jr. | Andrey Andreyevich Markov (; 22 September 1903, Saint Petersburg – 11 October 1979, Moscow) was a Soviet mathematician, the son of the Russian mathematician Andrey Markov Sr, and one of the key founders of the Russian school of constructive mathematics and logic. He made outstanding contributions to various areas of mathematics, including differential equations, topology, mathematical logic and the foundations of mathematics.
His name is in particular associated with Markov's principle and Markov's rule in mathematical logic, Markov's theorem in knot theory and Markov algorithm in theoretical computer science. An important result that he proved in 1947 was that the word problem for semigroups was unsolvable; Emil Leon Post obtained the same result independently at about the same time. In 1953 he became a member of the Communist Party.
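As a brief illustration of the string-rewriting systems that bear his name, the following sketch interprets a Markov (normal) algorithm: an ordered list of substitution rules is applied to the leftmost match until a terminal rule fires or no rule applies. The rule set and input word are illustrative only, not drawn from Markov's own writings.

def run_markov_algorithm(rules, word, max_steps=10000):
    """Apply an ordered list of (pattern, replacement, is_terminal) rules to a word,
    always rewriting the leftmost occurrence of the first applicable pattern,
    until a terminal rule fires or no rule applies."""
    for _ in range(max_steps):
        for pattern, replacement, terminal in rules:
            if pattern in word:
                word = word.replace(pattern, replacement, 1)
                if terminal:
                    return word
                break
        else:
            return word          # no rule applies, so the algorithm halts
    raise RuntimeError("step limit reached without halting")

# Unary addition: push '+' to the right end of the word, then erase it.
rules = [("+|", "|+", False), ("+", "", True)]
print(run_markov_algorithm(rules, "||+|||"))       # prints "|||||"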
In 1960, Markov obtained fundamental results showing that the classification of four-dimensional manifolds is undecidable: no general algorithm exists for deciding whether two arbitrary manifolds of four or more dimensions are homeomorphic. This is because four-dimensional manifolds are flexible enough to encode any finitely presented group, and therefore any algorithm, within their structure, so a general classification procedure would yield a solution to Turing's halting problem. This result has profound implications for the limits of algorithmic methods in topology.
His doctoral students include Boris Kushner, Gennady Makanin, and Nikolai Shanin.
Awards and honors
Medal "For Valiant Labour in the Great Patriotic War 1941–1945" (1945)
Order of the Badge of Honour (1945)
Medal "For the Defence of Leningrad" (1946)
Order of Lenin (1954)
Order of the Red Banner of Labour (1963)
Notes
External links
1903 births
1979 deaths
20th-century Russian mathematicians
Mathematicians from Saint Petersburg
Academic staff of Saint Petersburg State University
Corresponding Members of the USSR Academy of Sciences
Communist Party of the Soviet Union members
Recipients of the Order of the Badge of Honour
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Mathematical logicians
Topologists
Soviet logicians
Soviet mathematicians
Burials at Kuntsevo Cemetery
Russian scientists | Andrey Markov Jr. | [
"Mathematics"
] | 471 | [
"Topologists",
"Mathematical logic",
"Topology",
"Mathematical logicians"
] |
14,457,671 | https://en.wikipedia.org/wiki/Sedoheptulose-bisphosphatase | Sedoheptulose-bisphosphatase (also sedoheptulose-1,7-bisphosphatase or SBPase, EC number 3.1.3.37; systematic name sedoheptulose-1,7-bisphosphate 1-phosphohydrolase) is an enzyme that catalyzes the removal of a phosphate group from sedoheptulose 1,7-bisphosphate to produce sedoheptulose 7-phosphate. SBPase is an example of a phosphatase, or, more generally, a hydrolase. This enzyme participates in the Calvin cycle.
Structure
SBPase is a homodimeric protein, meaning that it is made up of two identical subunits. The size of this protein varies between species, but is about 92,000 Da (two 46,000 Da subunits) in cucumber plant leaves. The key functional domain controlling SBPase function involves a disulfide bond between two cysteine residues. These two cysteine residues, Cys52 and Cys57, appear to be located in a flexible loop between the two subunits of the homodimer, near the active site of the enzyme. Reduction of this regulatory disulfide bond by thioredoxin incites a conformational change in the active site, activating the enzyme. Additionally, SBPase requires the presence of magnesium (Mg2+) to be functionally active. SBPase is bound to the stroma-facing side of the thylakoid membrane in the chloroplast in a plant. Some studies have suggested the SBPase may be part of a large (900 kDa) multi-enzyme complex along with a number of other photosynthetic enzymes.
Regulation
SBPase is involved in the regeneration of 5-carbon sugars during the Calvin cycle. Although SBPase has not been emphasized as an important control point in the Calvin cycle historically, it plays a large part in controlling the flux of carbon through the Calvin cycle. Additionally, SBPase activity has been found to have a strong correlation with the amount of photosynthetic carbon fixation. Like many Calvin cycle enzymes, SBPase is activated in the presence of light through a ferredoxin/thioredoxin system. In the light reactions of photosynthesis, light energy powers the transport of electrons to eventually reduce ferredoxin. The enzyme ferredoxin-thioredoxin reductase uses reduced ferredoxin to reduce thioredoxin from the disulfide form to the dithiol. Finally, the reduced thioredoxin is used to reduce a cysteine-cysteine disulfide bond in SBPase to a dithiol, which converts the SBPase into its active form.
SBPase has additional levels of regulation beyond the ferredoxin/thioredoxin system. Mg2+ concentration has a significant impact on the activity of SBPase and the rate of the reactions it catalyzes. SBPase is inhibited by acidic conditions (low pH). This is a large contributor to the overall inhibition of carbon fixation when the pH is low inside the stroma of the chloroplast. Finally, SBPase is subject to negative feedback regulation by sedoheptulose-7-phosphate and inorganic phosphate, the products of the reaction it catalyzes.
Evolutionary origin
SBPase and FBPase (fructose-1,6-bisphosphatase, EC 3.1.3.11) are both phosphatases that catalyze similar reactions during the Calvin cycle. The genes for SBPase and FBPase are related. Both genes are found in the nucleus in plants, and have bacterial ancestry. SBPase is found across many species. In addition to being universally present in photosynthetic organisms, SBPase is found in a number of evolutionarily-related, non-photosynthetic microorganisms. SBPase likely originated in red algae.
Horticultural Relevance
Moreso than other enzymes in the Calvin cycle, SBPase levels have a significant impact on plant growth, photosynthetic ability, and response to environmental stresses. Small decreases in SBPase activity result in decreased photosynthetic carbon fixation and reduced plant biomass. Specifically, decreased SBPase levels result in stunted plant organ growth and development compared to wild-type plants, and starch levels decrease linearly with decreases in SBPase activity, suggesting that SBPase activity is a limiting factor to carbon assimilation. This sensitivity of plants to decreased SBPase activity is significant, as SBPase itself is sensitive to oxidative damage and inactivation from environmental stresses. SBPase contains several catalytically relevant cysteine residues that are vulnerable to irreversible oxidative carbonylation by reactive oxygen species (ROS), particularly from hydroxyl radicals created during the production of hydrogen peroxide. Carbonylation results in SBPase enzyme inactivation and subsequent growth retardation due to inhibition of carbon assimilation. Oxidative carbonylation of SBPase can be induced by environmental pressures such as chilling, which causes an imbalance in metabolic processes leading to increased production of reactive oxygen species, particularly hydrogen peroxide. Notably, chilling inhibits SBPase and a related enzyme, fructose bisphosphatase, but does not affect other reductively activated Calvin cycle enzymes.
The sensitivity of plants to synthetically reduced or inhibited SBPase levels provides an opportunity for crop engineering. There are significant indications that transgenic plants which overexpress SBPase may be useful in improving food production efficiency by producing crops that are more resilient to environmental stresses, as well as have earlier maturation and higher yield. Overexpression of SBPase in transgenic tomato plants provided resistance to chilling stress, with the transgenic plants maintaining higher SBPase activity, increased carbon dioxide fixation, reduced electrolyte leakage and increased carbohydrate accumulation relative to wild-type plants under the same chilling stress. It is also likely that transgenic plants would be more resilient to osmotic stress caused by drought or salinity, as the activation of SBPase is shown to be inhibited in chloroplasts exposed to hypertonic conditions, though this has not been directly tested. Overexpression of SBPase in transgenic tobacco plants resulted in enhanced photosynthetic efficiency and growth. Specifically, transgenic plants exhibited greater biomass and improved carbon dioxide fixation, as well as an increase in RuBisCO activity. The plants grew significantly faster and larger than wild-type plants, with increased sucrose and starch levels.
References
Further reading
Photosynthesis
EC 3.1.3 | Sedoheptulose-bisphosphatase | [
"Chemistry",
"Biology"
] | 1,433 | [
"Biochemistry",
"Photosynthesis"
] |
14,458,388 | https://en.wikipedia.org/wiki/Integrated%20Software%20for%20Imagers%20and%20Spectrometers | Integrated Software for Imagers and Spectrometers (Isis) is a specialized software package developed by the USGS to process images and spectra collected by current and past NASA planetary missions sent to Earth's Moon, Mars, Jupiter, Saturn, and other solar system bodies.
History
The history of ISIS began in 1971 at the United States Geological Survey (USGS) in Flagstaff, Arizona.
Isis was developed in 1989, primarily to support the Galileo NIMS instrument.
It contains standard image processing capabilities (such as image algebra, filters, statistics) for both 2D images and 3D data cubes, as well as mission-specific data processing capabilities and cartographic rendering functions.
Raster data format name
ISIS data are stored and distributed in a family of related raster formats maintained by the USGS Planetary Cartography group; a brief reading example follows the list below.
PDS, Planetary Data System
ISIS2, USGS Astrogeology Isis cube (Version 2)
ISIS3, USGS Astrogeology ISIS Cube (Version 3)
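Outside ISIS itself, such cubes are commonly read through generic raster libraries; the short names above match raster drivers provided by the GDAL library. The following minimal sketch assumes a local GDAL installation with its Python bindings and ISIS3 driver; the cube file name is hypothetical.

from osgeo import gdal   # requires the GDAL Python bindings

gdal.UseExceptions()
dataset = gdal.Open("example_observation.cub")     # hypothetical ISIS3 cube file name
print(dataset.GetDriver().ShortName)               # e.g. "ISIS3", "ISIS2" or "PDS"
print(dataset.RasterXSize, dataset.RasterYSize, dataset.RasterCount)
band = dataset.GetRasterBand(1).ReadAsArray()      # first band as a NumPy array
print(band.min(), band.max())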
See also
Ames Stereo Pipeline
References
External links
Image processing software
Works about astronomy
Science software
Public-domain software
United States Geological Survey | Integrated Software for Imagers and Spectrometers | [
"Astronomy"
] | 223 | [
"Works about astronomy"
] |
14,458,653 | https://en.wikipedia.org/wiki/Coalescence%20%28chemistry%29 | In chemistry, coalescence is a process in which two phase domains of the same composition come together and form a larger phase domain. In other words, it is the process by which two or more separate masses of miscible substances seem to "pull" each other together should they make the slightest contact.
References
External links
IUPAC Gold Book
Physical chemistry | Coalescence (chemistry) | [
"Physics",
"Chemistry"
] | 71 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"Physical chemistry stubs",
"nan"
] |
14,458,673 | https://en.wikipedia.org/wiki/Rudiviridae | Rudiviridae is a family of viruses with linear double stranded DNA genomes that infect archaea. The viruses of this family are highly thermostable and can act as a template for site-selective and spatially controlled chemical modification. Furthermore, the two strands of the DNA are covalently linked at both ends of the genomes, which have long inverted terminal repeats. These inverted repeats are an adaptation to stabilize the genome in these extreme environments.
Taxonomy
The following genera are assigned to the family:
Azorudivirus
Hoswirudivirus
Icerudivirus
Itarudivirus
Japarudivirus
Mexirudivirus
Usarudivirus
References
Virus families | Rudiviridae | [
"Biology"
] | 138 | [
"Virus stubs",
"Viruses"
] |
14,459,043 | https://en.wikipedia.org/wiki/Influence%20line | In engineering, an influence line graphs the variation of a function (such as the shear, moment etc. felt in a structural member) at a specific point on a beam or truss caused by a unit load placed at any point along the structure. Common functions studied with influence lines include reactions (forces that the structure's supports must apply for the structure to remain static), shear, moment, and deflection (Deformation). Influence lines are important in designing beams and trusses used in bridges, crane rails, conveyor belts, floor girders, and other structures where loads will move along their span. The influence lines show where a load will create the maximum effect for any of the functions studied.
Influence lines are both scalar and additive. This means that they can be used even when the load that will be applied is not a unit load or if there are multiple loads applied. To find the effect of any non-unit load on a structure, the ordinate results obtained by the influence line are multiplied by the magnitude of the actual load to be applied. The entire influence line can be scaled, or just the maximum and minimum effects experienced along the line. The scaled maximum and minimum are the critical magnitudes that must be designed for in the beam or truss.
In cases where multiple loads may be in effect, influence lines for the individual loads may be added together to obtain the total effect the structure bears at a given point. When adding the influence lines together, it is necessary to include the appropriate offsets due to the spacing of loads across the structure. For example, suppose a truck load is applied to the structure and its rear axle, B, is three feet behind its front axle, A. Then the effect of A at x feet along the structure must be added to the effect of B at (x – 3) feet along the structure—not the effect of B at x feet along the structure.
Many loads are distributed rather than concentrated. Influence lines can be used with either concentrated or distributed loadings. For a concentrated (or point) load, a unit point load is moved along the structure. For a distributed load of a given width, a unit-distributed load of the same width is moved along the structure, noting that as the load nears the ends and moves off the structure only part of the total load is carried by the structure. The effect of the distributed unit load can also be obtained by integrating the point load's influence line over the corresponding length of the structures.
When the corresponding restraint is released to construct an influence line, a determinate structure becomes a mechanism, whereas an indeterminate structure merely loses one degree of static indeterminacy.
Demonstration from Betti's theorem
Influence lines are based on Betti's theorem. Consider two external force systems, $F^{(1)}_i$ and $F^{(2)}_j$, each one associated with a displacement field; the displacements measured at the forces' points of application are represented by $d^{(1)}_i$ and $d^{(2)}_j$. Betti's theorem states that the work done by the first system of forces acting through the displacements of the second equals the work done by the second system acting through the displacements of the first:

$\sum_i F^{(1)}_i \, d^{(2)}_i = \sum_j F^{(2)}_j \, d^{(1)}_j$

Consider that the system $F^{(1)}_i$ represents the actual forces applied to the structure, which are in equilibrium with the restraint force $R$ (a reaction, shear or moment) at the point under study. Consider that the second system is formed by this single force $R$. The displacement field $d^{(2)}$ associated with this force is defined by releasing the structural restraint acting on the point where $R$ is applied and imposing a relative unit displacement, kinematically admissible, in the negative direction, represented as $d^{(2)}_R = -1$. From Betti's theorem, we obtain the following result:

$R \cdot (-1) + \sum_i F^{(1)}_i \, d^{(2)}_i = 0 \quad\Longrightarrow\quad R = \sum_i F^{(1)}_i \, d^{(2)}_i$

In other words, the deflected shape $d^{(2)}(x)$ of the released structure gives, ordinate by ordinate, the influence line for $R$; this is the basis of the Müller-Breslau principle described below.
Concept
When designing a beam or truss, it is necessary to design for the scenarios causing the maximum expected reactions, shears, and moments within the structure members to ensure that no member fails during the life of the structure. When dealing with dead loads (loads that never move, such as the weight of the structure itself), this is relatively easy because the loads are easy to predict and plan for. For live loads (any load that moves during the life of the structure, such as furniture and people), it becomes much harder to predict where the loads will be or how concentrated or distributed they will be throughout the life of the structure.
Influence lines graph the response of a beam or truss as a unit load travels across it. The influence line helps designers find where to place a live load in order to calculate the maximum resulting response for each of the following functions: reaction, shear, or moment. The designer can then scale the influence line by the greatest expected load to calculate the maximum response of each function for which the beam or truss must be designed.
Influence lines can also be used to find the responses of other functions (such as deflection or axial force) to the applied unit load, but these uses of influence lines are less common.
Methods for constructing influence lines
There are three methods used for constructing the influence line. The first is to tabulate the influence values for multiple points along the structure, then use those points to create the influence line. The second is to determine the influence-line equations that apply to the structure, thereby solving for all points along the influence line in terms of x, where x is the number of feet from the start of the structure to the point where the unit load is applied. The third method is called Müller-Breslau's principle. It creates a qualitative influence line. This influence line will still provide the designer with an accurate idea of where the unit load will produce the largest response of a function at the point being studied, but it cannot be used directly to calculate what the magnitude of that response will be, whereas the influence lines produced by the first two methods can.
Tabulate values
To tabulate the influence values with respect to some point A on the structure, a unit load must be placed at various points along the structure. Statics is used to calculate what the value of the function (reaction, shear, or moment) is at point A. Typically an upwards reaction is seen as positive. Shear and moments are given positive or negative values according to the same conventions used for shear and moment diagrams.
R. C. Hibbeler states, in his book Structural Analysis, “All statically determinate beams will have influence lines that consist of straight line segments.” Therefore, it is possible to minimize the number of computations by recognizing the points that will cause a change in the slope of the influence line and only calculating the values at those points. The slope of the influence line can change at supports, mid-spans, and joints.
An influence line for a given function, such as a reaction, axial force, shear force, or bending moment, is a graph that shows the variation of that function at any given point on a structure due to the application of a unit load at any point on the structure.
An influence line for a function differs from a shear, axial, or bending moment diagram. Influence lines can be generated by independently applying a unit load at several points on a structure and determining the value of the function due to this load, i.e. shear, axial, and moment at the desired location. The calculated values for each function are then plotted where the load was applied and then connected together to generate the influence line for the function.
Once the influence values have been tabulated, the influence line for the function at point A can be drawn in terms of x. First, the tabulated values must be located. For the sections in between the tabulated points, interpolation is required. Therefore, straight lines may be drawn to connect the points. Once this is done, the influence line is complete.
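A minimal sketch of this tabulation, assuming a simply supported beam; the span, station spacing and section location are illustrative values rather than anything taken from the text. A unit load is placed at successive stations and the statics are solved for the left reaction and for the shear at a chosen section.

def reaction_a(load_pos, span):
    # Left reaction of a simply supported beam with a unit load at load_pos
    # (sum of moments about the right support).
    return (span - load_pos) / span

def shear_at(section, load_pos, span):
    # Shear at the section from the left free body: the left reaction minus
    # the unit load when the load lies to the left of the section.
    v = reaction_a(load_pos, span)
    if load_pos < section:
        v -= 1.0
    return v

span, section = 10.0, 4.0                          # ft, illustrative values
stations = [0.5 * i for i in range(int(span / 0.5) + 1)]
for x in stations:
    print(f"x = {x:4.1f} ft   R_A = {reaction_a(x, span):5.2f}   V = {shear_at(section, x, span):5.2f}")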
Influence-line equations
It is possible to create equations defining the influence line across the entire span of a structure. This is done by solving for the reaction, shear, or moment at the point A caused by a unit load placed at x feet along the structure instead of a specific distance. This method is similar to the tabulated values method, but rather than obtaining a numeric solution, the outcome is an equation in terms of x.
It is important to understand where the slope of the influence line changes for this method because the influence-line equation will change for each linear section of the influence line. Therefore, the complete equation is a piecewise linear function with a separate influence-line equation for each linear section of the influence line.
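As a concrete illustration of such equations, consider a simply supported beam with supports A at $x = 0$ and B at $x = L$ and a section of interest at $x = a$; the span $L$ and section position $a$ are assumptions made for this sketch rather than values taken from the text. Solving the statics for a unit load placed at position $x$ gives the piecewise influence-line equations

$R_A(x) = 1 - \dfrac{x}{L}, \qquad 0 \le x \le L$

$V_a(x) = \begin{cases} -\dfrac{x}{L}, & 0 \le x < a \\ 1 - \dfrac{x}{L}, & a < x \le L \end{cases} \qquad M_a(x) = \begin{cases} \dfrac{x\,(L-a)}{L}, & 0 \le x \le a \\ \dfrac{a\,(L-x)}{L}, & a \le x \le L \end{cases}$

Each expression is linear in $x$, and the slope changes only at the supports and at the section itself, consistent with the tabulated-values discussion above.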
Müller-Breslau's Principle
According to www.public.iastate.edu, “The Müller-Breslau Principle can be utilized to draw qualitative influence lines, which are directly proportional to the actual influence line.” Instead of moving a unit load along a beam, the Müller-Breslau Principle finds the deflected shape of the beam caused by first releasing the beam at the point being studied, and then applying the function (reaction, shear, or moment) being studied to that point. The principle states that the influence line of a function will have a scaled shape that is the same as the deflected shape of the beam when the beam is acted upon by the function.
To understand how the beam deflects under the function, it is necessary to remove the beam's capacity to resist the function. Below are explanations of how to find the influence lines of a simply supported, rigid beam (such as the one displayed in Figure 1).
When determining the reaction caused at a support, the support is replaced with a roller, which cannot resist a vertical reaction. Then an upward (positive) reaction is applied to the point where the support was. Since the support has been removed, the beam will rotate upwards, and since the beam is rigid, it will create a triangle with the point at the second support. If the beam extends beyond the second support as a cantilever, a similar triangle will be formed below the cantilever’s position. This means that the reaction’s influence line will be a straight, sloping line with a value of zero at the location of the second support.
When determining the shear caused at some point B along the beam, the beam must be cut and a roller-guide (which is able to resist moments but not shear) must be inserted at point B. Then, by applying a positive shear to that point, it can be seen that the left side will rotate down, but the right side will rotate up. This creates a discontinuous influence line that reaches zero at the supports and whose slope is equal on either side of the discontinuity. If point B is at a support, then the deflection between point B and any other supports will still create a triangle, but if the beam is cantilevered, then the entire cantilevered side will move up or down creating a rectangle.
When determining the moment caused at some point B along the beam, a hinge will be placed at point B, releasing it to moments but resisting shear. Then when a positive moment is placed at point B, both sides of the beam will rotate up. This will create a continuous influence line, but the slopes will be equal and opposite on either side of the hinge at point B. Since the beam is simply supported, its end supports (pins) cannot resist moment; therefore, it can be observed that the supports will never experience moments in a static situation regardless of where the load is placed.
The Müller-Breslau Principle can only produce qualitative influence lines. This means that engineers can use it to determine where to place a load to incur the maximum of a function, but the magnitude of that maximum cannot be calculated from the influence line. Instead, the engineer must use statics to solve for the function's value in that loading case.
Alternate loading cases
Multiple loads
The simplest loading case is a single point load, but influence lines can also be used to determine responses due to multiple loads and distributed loads. Sometimes it is known that multiple loads will occur at some fixed distance apart. For example, on a bridge the wheels of cars or trucks create point loads that act at relatively standard distances.
To calculate the response of a function to all these point loads using an influence line, the results found with the influence line can be scaled for each load, and then the scaled magnitudes can be summed to find the total response that the structure must withstand. The point loads can have different magnitudes themselves, but even if they apply the same force to the structure, it will be necessary to scale them separately because they act at different distances along the structure. For example, if a car's wheels are 10 feet apart, then when the first set is 13 feet onto the bridge, the second set will be only 3 feet onto the bridge. If the first set of wheels is 7 feet onto the bridge, the second set has not yet reached the bridge, and therefore only the first set is placing a load on the bridge.
Also, if, between two loads, one of the loads is heavier, the loads must be examined in both loading orders (the larger load on the right and the larger load on the left) to ensure that the maximum load is found. If there are three or more loads, then the number of cases to be examined increases.
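A short sketch of this scaling-and-summing step for two axle loads; the span, axle positions and axle weights are illustrative assumptions only.

def reaction_a(load_pos, span):
    # Influence ordinate for the left reaction; zero if the load is off the span.
    return (span - load_pos) / span if 0.0 <= load_pos <= span else 0.0

span = 40.0                                        # ft, illustrative
axle_positions_and_loads = [(13.0, 8.0), (3.0, 32.0)]   # (position in ft, load in kips)

# Scale each influence ordinate by its load and sum the contributions.
total = sum(load * reaction_a(pos, span) for pos, load in axle_positions_and_loads)
print(f"R_A = {total:.2f} kips")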
Distributed loads
Many loads do not act as point loads, but instead act over an extended length or area as distributed loads. For example, a tractor with continuous tracks will apply a load distributed over the length of each track.
To find the effect of a distributed load, the designer can integrate an influence line, found using a point load, over the affected distance of the structure. For example, if a three-foot-long track acts between 5 feet and 8 feet along a beam, the influence line of that beam must be integrated between 5 and 8 feet. The integration of the influence line gives the effect that would be felt if the distributed load had a unit magnitude. Therefore, after integrating, the designer must still scale the results to get the actual effect of the distributed load.
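A sketch of the integration step, using the same kind of closed-form influence function for the left reaction; the span, load intensity and the 5 ft to 8 ft loaded length (echoing the example above) are assumptions made for illustration.

def reaction_a(load_pos, span):
    # Influence ordinate for the left reaction; zero if the load is off the span.
    return (span - load_pos) / span if 0.0 <= load_pos <= span else 0.0

span = 20.0                                        # ft, illustrative
start, end = 5.0, 8.0                              # loaded length from the example above
intensity = 2.0                                    # kip/ft, illustrative

# Integrate the unit influence line over the loaded length (trapezoidal rule),
# then scale by the actual load intensity.
n = 1000
dx = (end - start) / n
unit_effect = sum(
    0.5 * (reaction_a(start + i * dx, span) + reaction_a(start + (i + 1) * dx, span)) * dx
    for i in range(n)
)
print(f"R_A from the distributed load = {intensity * unit_effect:.3f} kips")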
Indeterminate structures
While the influence lines of statically determinate structures (as mentioned above) are made up of straight line segments, the same is not true for indeterminate structures. Indeterminate structures are not considered rigid; therefore, the influence lines drawn for them will not be straight lines but rather curves. The methods above can still be used to determine the influence lines for the structure, but the work becomes much more complex as the properties of the beam itself must be taken into consideration.
See also
Beam
Shear and Moment Diagram
Dead and Live Loads
Müller-Breslau's principle
References
Beam theory
Structural analysis
Structural engineering | Influence line | [
"Engineering"
] | 2,903 | [
"Structural engineering",
"Structural analysis",
"Construction",
"Civil engineering",
"Mechanical engineering",
"Aerospace engineering"
] |
14,460,347 | https://en.wikipedia.org/wiki/Martyn%20Thomas | Martyn Thomas (born 1948) is a British independent consultant and software engineer.
Biography
Martyn Thomas founded the software engineering company Praxis in 1983, based in Bath, southern England. He has a special interest in safety-critical systems and other high integrity applications. He has acted as an expert witness involving complex software engineering issues.
Thomas was born in Salisbury, southern England. He studied biochemistry at University College London, graduating in 1969, when he started working in the field of computing. Between 1969 and 1983, he was employed at universities in London and the Netherlands, at STC working on telecommunications software, and at the South West Universities Regional Computer Centre in Bath.
In 1983, Thomas founded Praxis with David Bean, where he encouraged the use of formal methods within the company for software development. In 1986, Praxis became the first independent systems house to achieve BS 5750 (later ISO 9001) certification for all its activities. Praxis became internationally recognised as a leader in the use of rigorous software engineering, including formal methods, and grew to around 200 staff.
In December 1992, Praxis was sold to Deloitte and Touche, an international firm of accountants and management consultants, and Martyn became a Deloitte Consulting international partner whilst remaining chairman and, later, managing director of Praxis. He left Deloitte Consulting in 1997.
He is currently director of Martyn Thomas Associates Limited and a visiting professor at the University of Manchester, and a Fellow and Emeritus Professor at Gresham College. He lives in London.
Current career
Fellow, Emeritus Professor and member of Council. Gresham College,
Visiting Professor of Software Engineering at Aberystwyth University, UK,
Fellow at The Royal Academy of Engineering,
Member at UK Computing Research Committee,
Owner, Principal Consultant and Expert Witness at MTAL.,
Past career
Non-executive director of the Health and Safety Executive (HSE),
IT Livery Company Professor of Information Technology at Gresham College,
Member of Advisory Council at Foundation for Information Policy Research,
Non-executive Director of the Serious Organised Crime Agency,
Non-executive Director of the Office of the Independent Adjudicator for Higher Education,
Fellow at British Computer Society,
Chair, Executive Board at DEPLOY Project,
Member, "Sufficient Evidence" study at National Academies / CSTB,
Chair, Steering Committee at DIRC,
Member of Council at EPSRC,
Member of Advisory Group at OST Foresight programmes,
Partner at Deloitte Consulting,
Founder/Chairman/managing director at Praxis,
Chairman at Praxis Critical Systems,
Deputy Director at SWURCC,
Software Engineer at STC.
Honors and awards
Commander of the Order of the British Empire, CBE,
Fellow of the Royal Academy of Engineering,
Honorary DSc (Hull),
Honorary DSc (Edinburgh),
Honorary DSc (City),
Honorary DSc (Bath), Dr of Engineering,
IEE Achievement Medal, Computing and Control,
Who's Who.
References
External links
Martyn Thomas Associates Limited website
IT Livery Professor, Gresham College
Martyn Thomas biography from the IET
Oxford University Computing Laboratory home page
Dr Thomas Oration, University of Bath
1948 births
Living people
People from Salisbury
Alumni of University College London
British software engineers
Formal methods people
People in information technology
British corporate directors
Commanders of the Order of the British Empire
Fellows of the British Computer Society
Fellows of the Institution of Engineering and Technology
Members of the Department of Computer Science, University of Oxford
People from Bath, Somerset
Academics of Aberystwyth University | Martyn Thomas | [
"Technology",
"Engineering"
] | 717 | [
"People in information technology",
"Information technology",
"Institution of Engineering and Technology",
"Fellows of the Institution of Engineering and Technology"
] |
14,460,441 | https://en.wikipedia.org/wiki/Dynatext | DynaText is an SGML publishing tool. It was introduced in 1990, and was the first system to handle arbitrarily large SGML documents, and to render them according to multiple style-sheets that could be switched at will.
DynaText and its Web sibling DynaWeb won multiple Seybold and other awards, and there are eleven US Patents related to the DynaText technology: 5,557,722; 5,644,776; 5,708,806; 5,893,109; 5,983,248; 6,055,544; 6,101,511; 6,101,512; 6,105,044; 6,167,409; and 6,546,406.
History
DynaText was developed by Electronic Book Technologies (EBT), Incorporated, of Providence, Rhode Island. EBT was founded by Louis Reynolds, Steven DeRose, Jeffrey Vogel, and Andries van Dam, and was sold to Inso corporation in 1996, when it had about 150 employees.
DynaText stands in the long tradition of hypermedia at Brown University, and adopted many features pioneered by FRESS, such as unlimited document sizes, dynamically-controllable styles and views, and reader-created links and trails.
DynaText heavily influenced stylesheet technologies such as DSSSL and CSS. XML chairman Jon Bosak cites EBT chief architect Steven DeRose as one of the originators of the notion of well-formedness formalized in XML, as well as DynaText for influencing the design of Web browsers in general; Jon Bosak produced SGML versions of the complete works of Shakespeare, the KJV Old Testament and New Testament, Book of Mormon, and Quran, and released them in 1994 bundled with Dynatext.
Inso corporation went out of business in 2002.
DynaText was demonstrated live by DeRose and David Sklar at "A Half-Century of Hypertext at Brown: A Symposium", held at Brown University on May 23, 2019, using a variorum edition The Wife of Bath's Tale, published in DynaText by Cambridge University Press.
Technology
DynaText accepted SGML as input, and built a binary representation of the structure (similar to DOM for XML, but persistent), as well as a full-text inverted index of the text, elements, and attributes. Customers typically distributed such compiled e-books on CD-ROM or via network servers. Later versions of DynaText could also read SGML and XML on the fly, providing exactly the same interface.
Unlike many prior systems, DynaText was not limited to any particular DTD (or schema). Rather, customers could build style sheets in a simple language (also SGML-based), using properties very much like the later DSSSL, CSS, and XSL-FO. However, every property could have an expression as its value, which would be evaluated (if necessary) for each element the style applied to. Graphics, tables, formulae, and plug-ins could be included in documents.
Unlike nearly all prior SGML systems, DynaText was not limited to documents that could fit in RAM on the viewing or serving computer system. Users commonly created documents in the tens to hundreds of MB. DynaText customers included aerospace, workstation and other computer industry firms, government, literary and technical publishers, and others.
Full-text searches were based on an inverted index of words and other tokens (except for Japanese text, which was handled specially). Dynatext could report the number of "hits" for a given search, that occur within each section in the table of contents (by default, the table of contents appeared in a separate pane as an expandable outline, and clicking on any entry scrolled the full-text pane to the start of the corresponding section). Searches could also restrict hits to particular SGML element types, or sequences of types; refer to attributes; and use Boolean operators and parentheses. The "and" operator restricted its operands to occurring near each other, by default in the same paragraph or comparable element.
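As a toy illustration of that idea (not EBT's actual data structures; all names below are invented), an inverted index whose postings record the containing element allows searches to be restricted to particular element types.

from collections import defaultdict

class TinyElementIndex:
    """Toy inverted index whose postings remember the containing element type."""

    def __init__(self):
        self.postings = defaultdict(list)          # word -> [(element, position), ...]

    def add(self, element, text, start=0):
        # Record each whitespace-separated token with its element and position.
        for offset, word in enumerate(text.lower().split()):
            self.postings[word].append((element, start + offset))

    def search(self, word, element=None):
        # Return all postings for the word, optionally restricted to one element type.
        hits = self.postings.get(word.lower(), [])
        return [h for h in hits if element is None or h[0] == element]

index = TinyElementIndex()
index.add("title", "Engine architecture", 0)
index.add("para", "The engine keeps an inverted index of words and elements", 2)
index.add("para", "Searches can be restricted to particular element types", 13)

print(index.search("engine"))                      # hits in any element
print(index.search("engine", element="title"))     # hits restricted to titles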
References
External links
DynaText Notes by Tim Berners-Lee (this note refers to a pre-release or very early release of DynaText).
Document Number: 007-3229-001
Information retrieval systems | Dynatext | [
"Technology"
] | 914 | [
"Information technology",
"Information retrieval systems"
] |
14,460,507 | https://en.wikipedia.org/wiki/Clapotis | In hydrodynamics, a clapotis (from French for "lapping of water") is a non-breaking standing wave pattern, caused for example, by the reflection of a traveling surface wave train from a near vertical shoreline like a breakwater, seawall or steep cliff.
The resulting clapotic wave does not travel horizontally, but has a fixed pattern of nodes and antinodes.
These waves promote erosion at the toe of the wall, and can cause severe damage to shore structures. The term was coined in 1877 by French mathematician and physicist Joseph Valentin Boussinesq who called these waves 'le clapotis' meaning "the lapping".
In the idealized case of "full clapotis" where a purely monotonic incoming wave is completely reflected normal to a solid vertical wall,
the standing wave height is twice the height of the incoming waves at a distance of one half wavelength from the wall.
In this case, the circular orbits of the water particles in the deep-water wave are converted to purely linear motion, with vertical velocities at the antinodes, and horizontal velocities at the nodes.
The standing waves alternately rise and fall in a mirror image pattern, as kinetic energy is converted to potential energy, and vice versa.
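The doubling of wave height in full clapotis follows directly from linear superposition of the incident and reflected wave trains. A minimal sketch, assuming small-amplitude (linear) wave theory, a rigid vertical wall at $x = 0$, incident amplitude $a$, wavenumber $k$ and angular frequency $\omega$:

$\eta(x,t) = a\cos(kx - \omega t) + a\cos(kx + \omega t) = 2a\cos(kx)\cos(\omega t)$

The surface elevation therefore oscillates in place, with antinodes where $\cos(kx) = \pm 1$, i.e. at $x = 0, \lambda/2, \lambda, \ldots$ from the wall, and nodes where $\cos(kx) = 0$, i.e. at $x = \lambda/4, 3\lambda/4, \ldots$; the crest-to-trough height at an antinode is $4a$, twice the incident wave height of $2a$.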
In his 1907 text, Naval Architecture, Cecil Peabody described this phenomenon:
Related phenomena
True clapotis is very rare, because the depth of the water or the precipitousness of the shore are unlikely to completely satisfy the idealized requirements. In the more realistic case of partial clapotis, where some of the incoming wave energy is dissipated at the shore, the incident wave is less than 100% reflected, and only a partial standing wave is formed where the water particle motions are elliptical.
This may also occur at sea between two different wave trains of near equal wavelength moving in opposite directions, but with unequal amplitudes. In partial clapotis the wave envelope contains some vertical motion at the nodes.
When a wave train strikes a wall at an oblique angle, the reflected wave train departs at the supplementary angle causing a cross-hatched wave interference pattern known as the clapotis gaufré ("waffled clapotis"). In this situation, the individual crests formed at the intersection of the incident and reflected wave train crests move parallel to the structure. This wave motion, when combined with the resultant vortices, can erode material from the seabed and transport it along the wall, undermining the structure until it fails.
Clapotic waves on the sea surface also radiate infrasonic microbaroms into the atmosphere, and seismic signals called microseisms coupled through the ocean floor to the solid Earth.
Clapotis has been called the bane and the pleasure of sea kayaking.
See also
Rogue wave
Seiche
References
Further reading
External links
Wave mechanics
Coastal engineering
Water waves | Clapotis | [
"Physics",
"Chemistry",
"Engineering"
] | 589 | [
"Physical phenomena",
"Water waves",
"Coastal engineering",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Civil engineering",
"Fluid dynamics"
] |
14,461,309 | https://en.wikipedia.org/wiki/Nitrogen-vacancy%20center | The nitrogen-vacancy center (N-V center or NV center) is one of numerous photoluminescent point defects in diamond. Its most explored and useful properties include its spin-dependent photoluminescence (which enables measurement of the electronic spin state using optically detected magnetic resonance), and its relatively long (millisecond) spin coherence at room temperature, lasting up to milliseconds. The NV center energy levels are modified by magnetic fields, electric fields, temperature, and strain, which allow it to serve as a sensor of a variety of physical phenomena. Its atomic size and spin properties can form the basis for useful quantum sensors.
NV centers enable nanoscale measurements of magnetic and electric fields, temperature, and mechanical strain with improved precision. External perturbation sensitivity makes NV centers ideal for applications in biomedicine—such as single-molecule imaging and cellular process modeling. NV centers can also be initialized as qubits and enable the implementation of quantum algorithms and networks. It has also been explored for applications in quantum computing (e.g. for entanglement generation), quantum simulation, and spintronics.
Structure
The nitrogen-vacancy center is a point defect in the diamond lattice. It consists of a nearest-neighbor pair of a nitrogen atom, which substitutes for a carbon atom, and a lattice vacancy.
Two charge states of this defect, neutral NV0 and negative NV−, are known from spectroscopic studies using optical absorption, photoluminescence (PL), electron paramagnetic resonance (EPR) and optically detected magnetic resonance (ODMR), which can be viewed as a hybrid of PL and EPR; most details of the structure originate from EPR. The nitrogen atom on one hand has five valence electrons. Three of them are covalently bonded to the carbon atoms, while the other two remain non-bonded and are called a lone pair. The vacancy on the other hand has three unpaired electrons. Two of them form a quasi covalent bond and one remains unpaired. The overall symmetry, however, is axial (trigonal C3V); one can visualize this by imagining the three unpaired vacancy electrons continuously exchanging their roles.
The NV0 thus has one unpaired electron and is paramagnetic. However, despite extensive efforts, electron paramagnetic resonance signals from NV0 avoided detection for decades until 2008. Optical excitation is required to bring the NV0 defect into the EPR-detectable excited state; the signals from the ground state are presumably too broad for EPR detection.
The NV0 centers can be converted into NV− by changing the Fermi level position. This can be achieved by applying external voltage to a p-n junction made from doped diamond, e.g., in a Schottky diode.
In the negative charge state NV−, an extra electron is located at the vacancy site forming a spin S=1 pair with one of the vacancy electrons. This extra electron gives rise to a spin-triplet ground state |3A⟩ and a spin-triplet excited state |3E⟩. Additional metastable singlet states lie between these spin triplets; these states play a crucial role in enabling ground state depletion (GSD) microscopy. As in NV0, the vacancy electrons are "exchanging roles" preserving the overall trigonal symmetry. This NV− state is what is commonly, and somewhat incorrectly, called "the nitrogen-vacancy center". The neutral state is not generally used for quantum technology.
The NV centers are randomly oriented within a diamond crystal. Ion implantation techniques can enable their artificial creation in predetermined positions.
Production
Nitrogen-vacancy centers are typically produced from single substitutional nitrogen centers (called C or P1 centers in diamond literature) by irradiation followed by annealing at temperatures above 700 °C. A wide range of high-energy particles is suitable for such irradiation, including electrons, protons, neutrons, ions, and gamma photons. Irradiation produces lattice vacancies, which are a part of NV centers. Those vacancies are immobile at room temperature, and annealing is required to move them. Single substitutional nitrogen produces strain in the diamond lattice; it therefore efficiently captures moving vacancies, producing the NV centers.
During chemical vapor deposition of diamond, a small fraction of single substitutional nitrogen impurity (typically <0.5%) traps vacancies generated as a result of the plasma synthesis. Such nitrogen-vacancy centers are preferentially aligned to the growth direction. Delta doping of nitrogen during CVD growth can be used to create two-dimensional ensembles of NV centers near the diamond surface for enhanced sensing or simulation.
Diamond is notorious for having a relatively large lattice strain. Strain splits and shifts optical transitions from individual centers resulting in broad lines in the ensembles of centers. Special care is taken to produce extremely sharp NV lines (line width ~10 MHz) required for most experiments: high-quality, pure natural or better synthetic diamonds (type IIa) are selected. Many of them already have sufficient concentrations of grown-in NV centers and are suitable for applications. If not, they are irradiated by high-energy particles and annealed. Selection of a certain irradiation dose allows tuning the concentration of produced NV centers such that individual NV centers are separated by micrometre-large distances. Then, individual NV centers can be studied with standard optical microscopes or, better, near-field scanning optical microscopes having sub-micrometre resolution.
Energy level structure
The NV center has a ground-state triplet (3A), an excited-state triplet (3E) and two intermediate-state singlets (1A and 1E). Both 3A and 3E contain ms = ±1 spin states, in which the two electron spins are aligned (either up, such that ms = +1 or down, such that ms = -1), and an ms = 0 spin state where the electron spins are antiparallel. Due to the magnetic interaction, the energy of the ms = ±1 states is higher than that of the ms = 0 state. 1A and 1E only contain a spin state singlet each with ms = 0.
If an external magnetic field is applied along the defect axis (the axis which aligns with the nitrogen atom and the vacancy) of the NV center, it does not affect the ms = 0 states, but it splits the ms = ±1 levels (Zeeman effect). Similarly the following other properties of the environment influence the energy level diagram :
Amplitude and orientation of a static magnetic field split the ms = ±1 levels in the ground and excited states (a numerical sketch follows this list).
Amplitude and orientation of elastic (strain) or electric fields have much smaller but more complex effects on the different levels.
Continuous-wave microwave radiation (applied in resonance with the transition between ms = 0 and (one of the) ms = ±1 states) changes the population of the sublevels within the ground and excited state.
A tunable laser can selectively excite certain sublevels of the ground and excited states.
Surrounding spins and spin–orbit interaction will modulate the magnetic field experienced by the NV center.
Temperature and pressure affect different parts of the spectrum including the shift between ground and excited states.
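As a rough numerical illustration of the first item above, the ground-state resonance frequencies under an axial magnetic field follow f± ≈ D ± γeB in the simplest picture. The zero-field splitting D ≈ 2.87 GHz and the electron gyromagnetic ratio γe ≈ 28 GHz/T are standard literature values, while the field values in the sketch below are illustrative assumptions; strain and electric-field terms are neglected.

```python
# Ground-state ODMR resonance frequencies of an NV center versus an axial magnetic
# field, in the simplest picture f± ≈ D ± γe·B (strain/electric-field terms neglected).
D = 2.87e9         # zero-field splitting, Hz
GAMMA_E = 28.0e9   # electron gyromagnetic ratio, Hz per tesla (approximate)

for B_mT in (0.0, 1.0, 5.0, 10.0):            # illustrative axial fields, millitesla
    B = B_mT * 1e-3
    f_minus = D - GAMMA_E * B                  # ms = 0 -> ms = -1 transition
    f_plus = D + GAMMA_E * B                   # ms = 0 -> ms = +1 transition
    print(f"B = {B_mT:5.1f} mT  ->  f- = {f_minus/1e9:.3f} GHz, f+ = {f_plus/1e9:.3f} GHz")
```

Measuring the two resonance frequencies therefore gives the axial field component directly, which underlies the magnetometry applications discussed later in the article.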
The above-described energy structure is by no means exceptional for a defect in diamond or other semiconductor. It was not this structure alone, but a combination of several favorable factors (previous knowledge, easy production, biocompatibility, simple initialisation, use at room temperature etc.) which suggested the use of the NV center as a qubit and quantum sensor.
Optical properties
NV centers emit bright red light (3E→3A transitions), if excited off-resonantly by visible green light (3A →3E transitions). This can be done with convenient light sources such as argon or krypton lasers, frequency doubled Nd:YAG lasers, dye lasers, or He-Ne lasers. Excitation can also be achieved at energies below that of zero phonon emission.
As the relaxation time from the excited state is small (~10 ns), the emission happens almost instantly after the excitation. At room temperature the NV center's optical spectrum exhibits no sharp peaks due to thermal broadening. However, cooling the NV centers with liquid nitrogen or liquid helium dramatically narrows the lines down to a width of a few MHz. At low temperature it also becomes possible to specifically address the zero-phonon line (ZPL).
An important property of the luminescence from individual NV centers is its high temporal stability. Whereas many single-molecular emitters bleach (i.e. change their charge state and become dark) after emission of 106–108 photons, bleaching is unlikely for NV centers at room temperature. Strong laser illumination, however, may also convert some NV− into NV0 centers.
Because of these properties, the ideal technique to address the NV centers is confocal microscopy, both at room temperature and at low temperature.
State manipulation
Optical spin manipulation
Optical transitions must preserve the total spin, and therefore occur only between levels of the same total spin. Specifically, transitions between the ground and excited states (which have the same spin) can be induced using a green laser with a wavelength of 546 nm. The 3E→1A and 1E→3A transitions are non-radiative, while 1A→1E has both a non-radiative and an infrared decay path.
The multi-electronic states of the NV center are labeled according to their symmetry (E or A) and their spin multiplicity (3 for a triplet, S = 1, and 1 for a singlet, S = 0). There are two triplet states and two intermediate singlet states.
Spin-state initialisation
An important property of the non-radiative transition between 3E and 1A is that it is stronger for ms = ±1 and weaker for ms = 0. This provides the basis for a very useful manipulation strategy, called spin-state initialisation (or optical spin polarization). To understand the process, first consider an off-resonance excitation whose photon energy (typically 2.32 eV, i.e. 532 nm) is higher than that of all the transitions and thus lies in their vibronic bands. By using a pulse at this wavelength, one can excite all spin states from 3A to 3E. An NV center in the ground state with ms = 0 will be excited to the corresponding excited state with ms = 0 due to the conservation of spin, and afterwards it decays back to its original state. For a ground state with ms = ±1, the situation is different: after excitation, it has a relatively high probability of decaying into the intermediate state 1A by the non-radiative transition, and from there into the ground state with ms = 0. After many cycles, the state of the NV center (independently of whether it started in ms = 0 or ms = ±1) will end up in the ms = 0 ground state. This process can be used to initialize the quantum state of a qubit for quantum information processing or quantum sensing.
Sometimes the polarisability of the NV center is explained by the claim that the transition from 1E to the ground state with ms = ±1 is small, compared to the transition to ms = 0. However, it has been shown that the comparatively low decay probability for ms = 0 states w.r.t. ms = ±1 states into 1A is enough to explain the polarization.
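A toy rate-equation sketch of this optical pumping cycle is given below. The five-level structure follows the description above (triplet ground and excited states with ms = 0 and ms = ±1 manifolds, plus one effective metastable singlet), but all rate constants are illustrative assumptions rather than measured values.

```python
# Toy rate-equation model of NV spin-state initialisation under continuous green pumping.
# Populations: g0, g1 (ground ms=0 / ms=±1), e0, e1 (excited ms=0 / ms=±1), s (singlet).
# All rates are illustrative, in units of 1/µs.
import numpy as np

pump = 10.0        # optical pumping rate, spin-conserving g -> e
k_rad = 65.0       # radiative decay e -> g
k_isc_e1 = 80.0    # intersystem crossing e(ms=±1) -> singlet (strong)
k_isc_e0 = 8.0     # intersystem crossing e(ms=0) -> singlet (weak)
k_s_g0 = 3.0       # singlet -> ground ms=0
k_s_g1 = 1.0       # singlet -> ground ms=±1

def derivs(p):
    g0, g1, e0, e1, s = p
    return np.array([
        -pump * g0 + k_rad * e0 + k_s_g0 * s,                    # d g0/dt
        -pump * g1 + k_rad * e1 + k_s_g1 * s,                    # d g1/dt
         pump * g0 - (k_rad + k_isc_e0) * e0,                    # d e0/dt
         pump * g1 - (k_rad + k_isc_e1) * e1,                    # d e1/dt
         k_isc_e0 * e0 + k_isc_e1 * e1 - (k_s_g0 + k_s_g1) * s,  # d s/dt
    ])

p = np.array([0.5, 0.5, 0.0, 0.0, 0.0])   # start unpolarised in the ground state
dt = 1e-3                                  # time step, µs
for _ in range(20000):                     # ~20 µs of continuous pumping
    p = p + dt * derivs(p)                 # simple Euler integration

g0, g1, e0, e1, s = p
print(f"ms=0: {g0 + e0:.2f}   ms=±1: {g1 + e1:.2f}   singlet: {s:.2f}")
# The ms = 0 manifold ends up holding most of the population, i.e. the spin is initialised.
```

The qualitative outcome (pumping into ms = 0) does not depend on the exact rate values, only on the intersystem crossing being much stronger from the ms = ±1 excited states than from ms = 0.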
Effects of external fields
Microwave spin manipulation
The energy difference between the ms = 0 and ms = ±1 states corresponds to the microwave regime. Population can be transferred between the states by applying a resonant magnetic field perpendicular to the defect axis. Numerous dynamic effects (spin echo, Rabi oscillations, etc.) can be exploited by applying a carefully designed sequence of microwave pulses. Such protocols are rather important for the practical realization of quantum computers. By manipulating the population, it is possible to shift the NV center into a more sensitive or stable state. Its own resulting fluctuating fields may also be used to influence the surrounding nuclei or protect the NV center itself from noise. This is typically done using a wire loop (microwave antenna) which creates an oscillating magnetic field.
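As a minimal sketch of one such dynamic effect, resonant microwave driving makes the population oscillate between ms = 0 and one of the ms = ±1 sublevels as sin²(πfRt) (a Rabi oscillation). The Rabi frequency below is an illustrative assumption; in practice it is set by the microwave power.

```python
# Ideal two-level Rabi oscillation between ms = 0 and ms = -1 under resonant driving,
# neglecting decoherence. The Rabi frequency is an illustrative value.
import math

RABI_FREQ = 5e6   # Hz, set by the microwave field strength (assumed)

def p_transferred(t):
    """Probability of finding the spin in ms = -1 after driving for time t (seconds),
    starting from ms = 0."""
    return math.sin(math.pi * RABI_FREQ * t) ** 2

for t_ns in (0, 50, 100, 150, 200):
    print(f"t = {t_ns:3d} ns  ->  P(ms = -1) = {p_transferred(t_ns * 1e-9):.2f}")
# A pulse of duration 1/(2*RABI_FREQ), here 100 ns, inverts the population (a "pi pulse").
```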
Optical manipulation
There are inherent difficulties in achieving miniaturization and effective error reduction in microwave- and radio-frequency-driven spin manipulation techniques. This poses a particular challenge for applying spin-based quantum sensors to the sensing of electric and magnetic fields, or other physical phenomena, at the nanoscale. Recent developments in microwave-free, optically driven methods pave the way towards energy-efficient and coherent quantum sensing. This technique is based on coherently mapping the spin states of the nitrogen nucleus onto those of the NV center under an external magnetic field applied transverse to the NV symmetry axis. Optical pumping then prepares the system in a coherent superposition state, which is a key element of a quantum network.
Influence of external factors
If a magnetic field is oriented along the defect axis, it leads to Zeeman splitting that separates the ms = +1 and ms = -1 states. This technique is used to lift the degeneracy and use only two of the spin states (usually the ground states with ms = -1 and ms = 0) as a qubit. Population can then be transferred between them using a microwave field. In the specific instance that the magnetic field reaches 1027 G (or 508 G), the ms = -1 and ms = 0 states in the ground (or excited) state become equal in energy (the ground/excited-state level anticrossing). The resulting strong interaction leads to so-called spin polarization, which strongly affects the intensity of the optical absorption and luminescence transitions involving those states.
Importantly, this splitting can be modulated by applying an external electric field, in a similar fashion to the magnetic field mechanism outlined above, though the physics of the splitting is somewhat more complex. Nevertheless, an important practical outcome is that the intensity and position of the luminescence lines is modulated. Strain has a similar effect on the NV center as electric fields.
There is an additional splitting of the ms = ±1 energy levels, which originates from the hyperfine interaction between surrounding nuclear spins and the NV center. These nuclear spins create magnetic and electric fields of their own leading to further distortions of the NV spectrum (see nuclear Zeeman and quadrupole interaction). Also the NV center's own spin–orbit interaction and orbital degeneracy leads to additional level splitting in the excited 3E state.
Temperature and pressure directly influence the zero-field term of the NV center leading to a shift between the ground and excited state levels.
The Hamiltonian, the quantum-mechanical expression that describes the dynamics of the NV center's spin and captures the influence of these different factors, is sketched in simplified form below.
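This is a commonly quoted, simplified form that keeps only the zero-field splitting, the strain/electric-field term, and the electron Zeeman term; the hyperfine and nuclear quadrupole terms of a complete treatment are omitted.

```latex
% Simplified NV- ground-state spin Hamiltonian (S = 1), in a common literature form.
% D is the zero-field splitting (D/h ~ 2.87 GHz), E the strain/electric-field induced
% splitting, g_e ~ 2.003 the electron g-factor and \mu_B the Bohr magneton.
H \approx D\,S_z^{2} + E\left(S_x^{2} - S_y^{2}\right) + g_e \mu_B\,\mathbf{B}\cdot\mathbf{S}
```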
Although it can be challenging, all of these effects are measurable, making the NV center a perfect candidate for a quantum sensor.
Charge state manipulation
It is also possible to switch the charge state of the NV center (i.e. between NV−, NV+ and NV0) by applying a gate voltage. The gate voltage electrically shifts the Fermi level at the diamond surface and changes its surface band bending. Upon varying the gate voltage, individual centers can be switched from an unknown non-fluorescent state to the neutral charge state NV0, and ensembles of centers can be transitioned from NV0 to the qubit state NV−. The diamond surface termination additionally influences the charge state of near-surface NV centers. Oxygen termination is known to stabilize the NV− state by reducing surface conductivity and mitigating band bending; this improves charge-state stability and coherence. In a similar capacity, nitrogen termination also affects surface properties and can optimize NV centers for specific sensing applications.
Optical excitation methods additionally play a role in charge state manipulation. Illumination with specific wavelengths can induce transitions between charge states. Near-infrared light at 1064 nm has been shown to convert NV0 to NV−, enhancing photoluminescence.
Applications
The spectral shape and intensity of the optical signals from the NV− centers are sensitive to external perturbations, such as temperature, strain, and electric and magnetic fields. However, the use of the spectral shape for sensing those perturbations is impractical, as the diamond would have to be cooled to cryogenic temperatures to sharpen the NV− signals. A more realistic approach is to use the luminescence intensity (rather than the lineshape), which exhibits a sharp resonance when a microwave frequency that matches the splitting of the ground-state levels is applied to the diamond. The resulting optically detected magnetic resonance signals are sharp even at room temperature, and can be used in miniature sensors. Such sensors can detect magnetic fields of a few nanotesla or electric fields of about 10 V/cm at kilohertz frequencies after 100 seconds of averaging. This sensitivity allows detecting a magnetic or electric field produced by a single electron located tens of nanometers away from an NV− center.
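The quoted field sensitivities can be related to the ODMR signal through the standard shot-noise-limited estimate for continuous-wave ODMR magnetometry, sketched below. The linewidth, contrast, and photon count rate are illustrative assumptions, and lineshape prefactors of order one are ignored, so the result is only indicative.

```python
# Shot-noise-limited DC magnetic sensitivity estimate for CW ODMR,
# eta_B ~ h * linewidth / (g * mu_B * contrast * sqrt(count_rate)),
# with illustrative parameters (order-one lineshape prefactors ignored).
import math

h = 6.626e-34        # Planck constant, J*s
mu_B = 9.274e-24     # Bohr magneton, J/T
g = 2.003            # NV electron g-factor

linewidth = 1e6      # ODMR linewidth, Hz (assumed)
contrast = 0.20      # ODMR contrast (assumed)
count_rate = 1e7     # detected photon rate, counts/s (assumed, e.g. a small ensemble)

eta = h * linewidth / (g * mu_B * contrast * math.sqrt(count_rate))
print(f"sensitivity ~ {eta * 1e9:.0f} nT per sqrt(Hz)")
# Averaging for a time T improves the minimum detectable field as eta / sqrt(T):
print(f"after 100 s of averaging: ~ {eta * 1e9 / math.sqrt(100):.1f} nT")
```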
Using the same mechanism, the NV− centers were employed in scanning thermal microscopy to measure high-resolution spatial maps of temperature and thermal conductivity (see image).
Because the NV center is sensitive to magnetic fields, it is being actively used in scanning probe measurements to study myriad condensed matter phenomena both through measuring a spatially varying magnetic field or inferring local currents in a device.
Another possible use of the NV− centers is as a detector to measure the full mechanical stress tensor in the bulk of the crystal. For this application, the stress-induced splitting of the zero-phonon-line is exploited, and its polarization properties. A robust frequency-modulated radio receiver using the electron-spin-dependent photoluminescence that operated up to 350 °C demonstrates the possibility for use in extreme conditions.
In addition to the quantum optical applications, luminescence from the NV− centers can be applied for imaging biological processes, such as fluid flow in living cells. This application relies on the good compatibility of diamond nanoparticles with living cells and on the favorable properties of photoluminescence from the NV− centers (strong intensity, easy excitation and detection, temporal stability, etc.). Compared with large single-crystal diamonds, nanodiamonds are cheap (about US$1 per gram) and available from various suppliers. NV− centers are produced in diamond powders with sub-micrometre particle size using the standard process of irradiation and annealing described above. Due to the relatively small size of nanodiamonds, NV centers can also be produced by irradiating nanodiamonds of 100 nm or less with a medium-energy H+ beam. This method reduces the required ion dose and reaction, making it possible to mass-produce fluorescent nanodiamonds in an ordinary laboratory. Fluorescent nanodiamonds produced with this method are bright and photostable, making them excellent for long-term, three-dimensional tracking of single particles in living cells. The nanodiamonds are introduced into a cell, and their luminescence is monitored using a standard fluorescence microscope.
Stimulated emission from the NV− center has been demonstrated, though it could be achieved only from the phonon side-band (i.e. broadband light) and not from the ZPL. For this purpose, the center has to be excited at a wavelength longer than ~650 nm, as higher-energy excitation ionizes the center.
The first continuous-wave room-temperature maser has been demonstrated. It used 532-nm pumped NV− centers held within a high Purcell factor microwave cavity and an external magnetic field of 4300 G. Continuous maser oscillation generated a coherent signal at ~9.2 GHz.
The NV center can have a very long spin coherence time approaching the second regime. This is advantageous for applications in quantum sensing and quantum communication. Disadvantageous for these applications is the long radiative lifetime (~12 ns) of the NV center and the strong phonon sideband in its emission spectrum. Both issues can be addressed by putting the NV center in an optical cavity.
Historical remarks
The microscopic model and most optical properties of ensembles of NV− centers were firmly established in the 1970s, based on optical measurements combined with uniaxial stress and on electron paramagnetic resonance. However, a minor error in the interpretation of the EPR results (it was assumed that illumination is required to observe NV− EPR signals) resulted in incorrect multiplicity assignments in the energy level structure. In 1991 it was shown that EPR can be observed without illumination, which established the energy level scheme shown above. The magnetic splitting in the excited state has been measured only recently.
The characterization of single NV− centers has become a very competitive field nowadays, with many dozens of papers published in the most prestigious scientific journals. One of the first results was reported back in 1997. In that paper, it was demonstrated that the fluorescence of single NV− centers can be detected by room-temperature fluorescence microscopy and that the defect shows perfect photostability. Also one of the outstanding properties of the NV center was demonstrated, namely room-temperature optically detected magnetic resonance.
See also
Crystallographic defects in diamond
Crystallographic defect
Material properties of diamond
Notes
References
Diamond
Spintronics
Spectroscopy
Crystallographic defects
Quantum computing | Nitrogen-vacancy center | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,496 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Spintronics",
"Crystallographic defects",
"Materials science",
"Crystallography",
"Condensed matter physics",
"Materials degradation",
"Spectroscopy"
] |
14,463,033 | https://en.wikipedia.org/wiki/Free%20surface | In physics, a free surface is the surface of a fluid that is subject to zero parallel shear stress,
such as the interface between two homogeneous fluids.
An example of two such homogeneous fluids would be a body of water (liquid) and the air in the Earth's atmosphere (gas mixture). Unlike liquids, gases cannot form a free surface on their own.
Fluidized/liquified solids, including slurries, granular materials, and powders may form a free surface.
A liquid in a gravitational field will form a free surface if unconfined from above.
Under mechanical equilibrium this free surface must be perpendicular to the forces acting on the liquid; if not there would be a force along the surface, and the liquid would flow in that direction. Thus, on the surface of the Earth, all free surfaces of liquids are horizontal unless disturbed (except near solids dipping into them, where surface tension distorts the surface in a region called the meniscus).
In a free liquid that is not affected by outside forces such as a gravitational field, internal attractive forces only play a role (e.g. Van der Waals forces, hydrogen bonds). Its free surface will assume the shape with the least surface area for its volume: a perfect sphere. Such behaviour can be expressed in terms of surface tension. It can be demonstrated experimentally by observing a large globule of oil placed below the surface of a mixture of water and alcohol having the same density so the oil has neutral buoyancy.
Flatness
Flatness refers to the shape of a liquid's free surface. On Earth, the flatness of a liquid is a function of the curvature of the planet, and from trigonometry, can be found to deviate from true flatness by approximately 19.6 nanometers over an area of 1 square meter, a deviation which is dominated by the effects of surface tension. This calculation uses Earth's mean radius at sea level, however a liquid will be slightly flatter at the poles.
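The quoted figure can be checked with a short calculation, assuming only spherical geometry (no surface tension): the deviation is the sagitta of a spherical cap whose chord half-width is half the span considered.

```python
# Rough check of the ~19.6 nm figure: sagitta of a spherical cap of chord half-width r
# on a sphere of radius R_earth, h ≈ r**2 / (2 * R_earth). Surface tension is ignored.
R_earth = 6.371e6   # mean Earth radius, m
r = 0.5             # half-width of a 1 m span, m

deviation = r**2 / (2 * R_earth)
print(f"{deviation * 1e9:.1f} nm")   # ~19.6 nm over a 1 m wide patch
```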
Over large distances or planetary scale, the surface of an undisturbed liquid tends to conform to equigeopotential surfaces; for example, mean sea level follows approximately the geoid.
Waves
If the free surface of a liquid is disturbed, waves are produced on the surface. These waves are not elastic waves due to any elastic force; they are gravity waves caused by the force of gravity tending to bring the surface of the disturbed liquid back to its horizontal level. Momentum causes the wave to overshoot, thus oscillating and spreading the disturbance to the neighboring portions of the surface. The velocity of the surface waves varies as the square root of the wavelength if the liquid is deep; therefore long waves on the sea go faster than short ones. Very minute waves or ripples are not due to gravity but to capillary action, and have properties different from those of the longer ocean surface waves,
because the surface is increased in area by the ripples and the capillary forces are in this case large compared with the gravitational forces.
Capillary ripples are damped both by sub-surface viscosity and by surface rheology.
Rotation
If a liquid is contained in a cylindrical vessel and is rotating around a vertical axis coinciding with the axis of the cylinder, the free surface will assume a parabolic surface of revolution known as a paraboloid. The free surface at each point is at a right angle to the force acting at it, which is the resultant of the force of gravity and the centrifugal force from the motion of each point in a circle. Since the main mirror in a telescope must be parabolic, this principle is used to create liquid-mirror telescopes.
Consider a cylindrical container filled with liquid rotating about its vertical (z) axis. In cylindrical coordinates the equations of motion for the pressure field are
\[
\frac{\partial p}{\partial r} = \rho\,\omega^{2} r, \qquad \frac{\partial p}{\partial z} = -\rho g,
\]
where p is the pressure, ρ is the density of the fluid, r is the radial distance from the axis of rotation, ω is the angular frequency, and g is the gravitational acceleration. Taking a surface of constant pressure, the total differential becomes
\[
dp = \frac{\partial p}{\partial r}\,dr + \frac{\partial p}{\partial z}\,dz = \rho\,\omega^{2} r\,dr - \rho g\,dz = 0.
\]
Integrating, the equation for the free surface becomes
\[
z_{s}(r) = \frac{\omega^{2}}{2g}\,r^{2} + h_{c},
\]
where h_c is the distance of the free surface from the bottom of the container along the axis of rotation. If one integrates the volume of the paraboloid formed by the free surface and then solves for the original height, one can find the height of the fluid along the centerline of the cylindrical container:
\[
h_{c} = h_{0} - \frac{\omega^{2} R^{2}}{4g},
\]
where R is the radius of the container and h_0 is the height of the fluid at rest. The equation of the free surface at any distance r from the center then becomes
\[
z_{s}(r) = h_{0} - \frac{\omega^{2}}{4g}\left(R^{2} - 2r^{2}\right).
\]
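A quick numerical check of these relations, using illustrative values for the container and rotation rate, confirms that volume is conserved: the surface rises at the wall by the same amount that it drops at the center.

```python
# Numerical check of the rotating free-surface relations above, with illustrative values
# (a 10 cm radius container, 10 cm fill height at rest, 2 revolutions per second).
import math

g = 9.81                    # m/s^2
R = 0.10                    # container radius, m
h0 = 0.10                   # fill height at rest, m
omega = 2 * 2 * math.pi     # angular frequency, rad/s

h_c = h0 - omega**2 * R**2 / (4 * g)        # surface height at the centerline

def z_s(r):
    """Free-surface height at radial distance r from the axis."""
    return h_c + omega**2 * r**2 / (2 * g)

print(f"centerline height: {h_c * 100:.1f} cm")       # ~6.0 cm
print(f"height at the wall: {z_s(R) * 100:.1f} cm")   # ~14.0 cm

# Area-weighted average of the surface height equals the original fill height h0.
n = 100_000
mean_height = sum(z_s(R * math.sqrt(k / n)) for k in range(n)) / n
print(f"mean surface height: {mean_height * 100:.1f} cm")   # ~10.0 cm
```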
If a free liquid is rotating about an axis, the free surface will take the shape of an oblate spheroid: the approximate shape of the Earth due to its equatorial bulge.
Related terms
In hydrodynamics, the free surface is defined mathematically by the free-surface condition, that is, the material derivative of the pressure is zero:
\[
\frac{\mathrm{D}p}{\mathrm{D}t} = 0.
\]
In fluid dynamics, a free-surface vortex, also known as a potential vortex or whirlpool, forms in an irrotational flow, for example when a bathtub is drained.
In naval architecture and marine safety, the free surface effect occurs when liquids or granular materials under a free surface in partially filled tanks or holds shift when the vessel heels.
In hydraulic engineering a free-surface jet is one where the entrainment of the fluid outside the jet is minimal, as opposed to submerged jet where the entrainment effect is significant. A liquid jet in air approximates a free surface jet.
In fluid mechanics a free surface flow, also called open-channel flow, is the gravity driven flow of a fluid under a free surface, typically water flowing under air in the atmosphere.
See also
Free surface effect
Surface tension
Laser-heated pedestal growth
Liquid level
Splash (fluid mechanics)
Slosh dynamics
Riabouchinsky solid
Computational methods for free surface flow
Water current
References
Fluid mechanics | Free surface | [
"Engineering"
] | 1,177 | [
"Civil engineering",
"Fluid mechanics"
] |
6,989,597 | https://en.wikipedia.org/wiki/Premovement%20neuronal%20activity | Premovement neuronal activity in neurophysiological literature refers to neuronal modulations that alter the rate at which neurons fire before a subject produces movement. Through experimentation with multiple animals, predominantly monkeys, it has been shown that several regions of the brain are particularly active and involved in initiation and preparation of movement. Two specific membrane potentials, the bereitschaftspotential, or the BP, and contingent negative variation, or the CNV, play a pivotal role in premovement neuronal activity. Both have been shown to be directly involved in planning and initiating movement. Multiple factors are involved with premovement neuronal activity including motor preparation, inhibition of motor response, programming of the target of movement, closed-looped and open-looped tasks, instructed delay periods, short-lead and long-lead changes, and mirror motor neurons.
Two types of movement
Research of pre-movement neuronal activity generally involves studying two different kinds of movement, movement in natural settings versus movement triggered by a sensory stimulus. These two types of movements are referred to with different nomenclature throughout different studies and literature on the topic of premovement neuronal activity. Voluntary movements are also known as self-timed, self-initiated, self-paced, and non-triggered movements. This type of movement is what generally occurs in natural settings, carried out independently of a sensory cue or external signal which would trigger or cause the movement to be performed. In contrast, movements that are carried out as a result of a sensory cue or stimulus, or reflex-reactions to external conditions or changes are called reactive movements, but also known as cued movements, stimulated movements, and externally triggered movements depending on the choice of a particular study. In one such study by Lee and Assad (2003), rhesus monkeys were trained to execute arm movement in response to a visual cue versus the same arm movement performed without any correlation to this external (visual) cue. This is one example of reactive movements in contrast to self-initiated movements. Subsequent studies of rates of neuronal firing in the respective types of movements are recorded in different areas of the brain in order to develop a more thorough understanding of premovement neuronal activity.
Regions of the brain involved in pre-movement
Pre-frontal area
Functions in:
Decision making
Response selection with move
Timing of movement
Initiation/suppression of action
Pre-supplementary motor area (Pre-SMA) and the lateral pre-motor cortex
Functions in:
Preparatory processes
Supplementary motor area (SMA) proper and the primary motor cortex (M1)
Functions in:
Initiation of movement
Execution of movement
Bereitschaftspotential
In 1964, two movement related cortical potentials were discovered by Kornhuber and Deecke. Using both the electroencephalography (EEG) and the electromyogram (EMG) recordings, Kornhuber and Deecke were able to identify two components prior to movement onset. These components are the Bereitschaftspotential (abbreviated BP, and also known as readiness potential, abbreviated RP) and the Contingent Negative Variation (CNV). The difference between these two potentials is that the BP is involved in self-paced, or voluntary movements, whereas the CNV is involved with cued movements, movements performed as reactions to an environmental signal.
The Bereitschaftspotential is a movement related potential. The initiation of the BP occurs approximately 2 seconds prior to movement onset. The BP is an index of motor preparation and is therefore also referred to as the "readiness potential", as it is the potential for movement to occur. The initial stage of the BP, or readiness potential, is an unconscious intention of, and preparation for movement. After this initial stage, the preparation of movement becomes a conscious thought.
The BP, more specifically, is composed of movement related cortical potentials (MRCPs) the peak being the MP or Motor potential. MRCPs tend to resemble a "set of plans" used by the cortex for the generation and control of movement. The BP is activated by voluntary movements involving the SMA and the somatosensory cortex in movement preparation and initiation. Initially only the late BP was considered to be specific for the site of movement and the early BP was thought to be characterized by more general preparation for upcoming movements. However, over the past couple of decades the early BP is considered to perhaps also be site specific within the supplementary motor area (SMA) and the lateral premotor cortex. Using principal component analysis and functional magnetic resonance imaging (fMRI) the main source of early BP was determined to be Area 6 of the precentral gyrus bilaterally, and the main sources of late BP were determined to be Area 4 (also known as the Primary Motor Cortex) and Area 6. The current consensus is that the early BP starts first in the SMA, including pre-SMA and SMA proper, and then approximately 400ms later in the lateral premotor cortices bilaterally prior to the movement onset, and the late BP starts in the M1 and premotor cortex contralaterally.
The two factors that most greatly influence the BP are the effect of discreteness and complexity of movement. A study conducted in 1993 compared isolated extensions of the middle finger with simultaneous extensions of the middle and index fingers. The results showed that the isolated movement of the middle finger produced a larger amplitude in the late BP, but not the early BP. The amplitude difference in the late BP was seen over the central region contralateral to the movement, which suggests an important role of M1. Complex movements cause greater amplitudes of the BP, which reflects the fact that there is greater activation of the SMA. Further experiments also suggest that the bilateral sensorimotor cortices play a role in the preparation of complex movements, along with the SMA.
Organization of primary motor cortex
Some of the first relevant experiments on, and findings about, the organization of the primary motor cortex came from Wilder Penfield. Penfield, a neurosurgeon in Montreal, began his experiments in the 1950s to better serve his epileptic patients. Penfield knew that his epileptic patients experienced a warning sign before their seizures occurred, and this observation motivated his stimulation experiments, in which he tried to induce the warning sign in order to pinpoint the source of the epilepsy. Penfield confirmed the presence in the brain of a spatial map of the contralateral body. He matched the location of muscle contractions with the site of electrical stimulation on the surface of the motor cortex and thereby mapped the motor representation of the precentral gyrus. This map follows the same trends and disproportions as the somatic sensory maps in the postcentral gyrus.
Experimentation with intracortical microstimulation brought about a more detailed understanding of motor maps. By injecting current into the cortex through the sharpened tip of a microelectrode, the upper motor neurons in layer 5, which project to lower motor neurons, can be stimulated. Because these neurons connect to neurons in the spinal cord, such stimulation evokes specific movements involving particular muscle groups rather than contractions of individual muscles. The connections in the motor map are therefore organized to generate specific movements, not to drive individual muscles or contractions.
Spike-triggered averaging is a way to measure the activity of one cortical motor neuron, on a group of lower motor neurons in the spinal cord. Experimentation confirmed that single upper motor neurons are connected to multiple lower motor neurons. This supports the general conclusion that movements and not individual muscles are controlled by the activity of upper motor neurons.
Rates of upper neuron firing change prior to movement
The activity of individual motor neurons was recorded with implanted microelectrodes in awake, behaving monkeys. These experiments provided a way to determine the correlation between neuronal activity and voluntary movement. It was found that the force generated by contracting muscles changed as a function of the firing rate of upper motor neurons. The firing rates of active neurons often change before movements involving very small forces, suggesting that the primary motor cortex contributes to the initial phase of recruitment of the lower motor neurons involved in generating finely controlled movements.
"Closed-loop" motor tasks vs. "open-loop" motor tasks
Approximately 65% of the neurons in the pre-motor cortex are responsible for conditional "closed-loop" motor tasks. In experiments in which monkeys were trained to reach in different directions depending on a specified visual cue, the corresponding lateral pre-motor neurons began to fire at the appearance of that cue, before the actual signal to perform the movement. As learning takes place and a new visual cue becomes associated with a particular movement, these neurons increase their rate of firing during the interval between the initial cue and the actual signal to initiate the movement. These neurons appear to encode not the command to initiate the movements but the intention to perform them. Thus these pre-motor neurons are especially involved in the selection of movements based on external events.
More evidence that the lateral pre-motor area is involved in movement selection comes from observations of the effects of cortical damage on motor behavior. Lesions to this area severely impair the ability of monkeys to perform visually cued conditional tasks: on command, it becomes extremely difficult for the monkey to perform the trained movement. Yet, when placed in another setting, the monkey is perfectly capable of performing that movement in a spontaneous, self-initiated manner in response to the same visual stimulus.
The medial pre-motor cortex seems to be specialized for initiating movements specified by internal rather than external cues. These movements based on internal events are called "open-loop" conditions. In contrast to lesions in the lateral pre-motor area, removal of this medial pre-motor area reduces the number of self initiated or spontaneous movements that the animal makes. Conversely, the ability to move in response to an external cue is largely intact.
Parietal area 5
The parietal cortex plays a role in the internal command of actions. More specifically, parietal area 5 is responsible for the activity that precedes movement. Area 5 neurons exhibit pre-movement activity in relation to self-initiated movements. The neurons in area 5 play a role in the initiation and execution of movement and respond very quickly: they fire at least 100 ms before muscle activity becomes detectable by electromyogram (EMG), a test of electrical activity in muscles. The cerebral cortex forms a series of loops with the basal ganglia and the cerebellum, and these positive feedback loops drive the initiation of movements. The neurons of the parietal associative cortex are most strongly involved in the programming and execution of voluntary movements.
A learned act is the movement which is produced when the starting sensory signal launches the programmed execution. This action requires the neurons of the parietal associative cortex. There are two phases of the readiness potential, the early phase and the late phase. The early phase is responsible for the "planning of programmed movements". The late phase is responsible for the "stimulation of the movement’s direct implementation." The early phase of the readiness potential occurs in the supplementary motor region and is involved in the generation of voluntary movement. The late phase of premovement occurs in the cortical regions and is involved in definite voluntary movements. The two formal stages of premovement are planning and initiation.
Mirror motor neurons
Mirror motor neurons are found in the ventrolateral portion of the pre-motor cortex. These mirror motor neurons respond not only to the preparation for movement execution, but also to observation of the same movements by others. But, these mirror motor neurons do not respond as well when an action is being pantomimed without the presence of a motor goal. Additionally, in observations of goal oriented movements, these neurons fire even when the result is blocked from view. The mirror motor neuron system is responsible for encoding intention and relevant behaviors of others. Additionally, these neurons may play a role with the frontal and parietal lobes in imitation learning.
A study by Daniel Glaser involved dancers trained in ballet and dancers trained in capoeira (a Brazilian martial art). The dancers were shown short videos of both ballet and capoeira moves. The research indicated that the mirror motor neurons showed increased activity when the dancers watched the video of the style in which they had been trained. In addition, non-dancer controls showed significantly less activation of the mirror motor neuron system when watching either type of dance.
This research provides insight into how the brain responds to movements you have personally learned to do. This may provide a way to allow professionals to maintain a skill without actually performing the movement. Or it may provide an outlet for mental rehabilitation to those with impaired motor skills. Simple observation of the movement allows the same type of brain stimulation as the actual physical movement.
Preparatory changes in neuronal activity
Execution of certain motor tasks requires an instructed delay. This delay period occurs in between the instructed cue and the subsequently triggered movement. During these delay periods preparatory changes occur in neuronal activity. The primary motor cortex, the pre-motor cortex, the supplementary motor area, parietal cortex, and the basal ganglia all may experience these preparatory delay periods. These activities coordinate during the delay periods and reflect movement planning in accordance with the instructional cue and the subsequent movement but occur prior to muscle activity. The movement planning may be anything from the direction of the movement to the extent of the movement.
Short lead changes vs. long lead changes
Premovement neuronal activity has been widely experimented upon in three major motor fields of the frontal cortex. The goal of this experimentation is to compare the neuronal activity which comes from visual signals, versus neuronal activity which comes from non-triggered or self-paced movements. From this comparison, two changes were identified, occurring at different time scales in relation to the onset of movement. These changes are the short lead and long lead changes. The short lead changes are observed about 480ms before the movement, whereas the long lead changes occur about 1–2 seconds earlier. The short lead changes are exhibited in the SMA (supplementary motor area) and the PM (pre-motor area) during both the visual signal trials and the non-triggered/self-paced trials. The pre-central motor cortex was also identified in this study as having similar neuronal activities as in the PM and SMA. Experimentation found that approximately 61% of the neurons in the PM were preferentially related to the triggered (visual) movements. The long lead neuronal changes were more frequently active during the self paced stimuli than before the triggered movements. These long lead changes are particularly abundant among the SMA neurons. In summation, these experiments challenged the idea that the SMA primarily takes part in self-paced movements and the PM is only involved in visually triggered movements. Although the PM neurons showed more preference for the visual trigger signals and the SMA neurons are intimately related to initiation of self paced movements, both are involved with premovement for both types of stimuli.
Link between the cortex and basal ganglia
A subcortical loop exists within the brain linking upper motor neurons originating in the primary motor and pre-motor cortices and the brainstem, with the basal ganglia. These upper motor neurons eventually initiate movement by controlling the activity of lower motor neurons, located in the brainstem and spinal cord, and project out to innervate the muscles in the body. Upper motor neurons also modulate activity of local circuit neurons, whose synapses are a large input to these lower motor neurons, in turn affecting subsequent movement. Thus, the basal ganglia indirectly influence movement via regulation of the activity of the upper motor neurons, which ultimately determine activity of the lower motor neurons.
Areas of basal ganglia affect movement
The basal ganglia include groups of motor nuclei located deep within the cerebral hemispheres, including the corpus striatum, which contains two nuclei named the caudate and putamen, and also the pallidum, which contains the globus pallidus and substantia nigra pars reticulata. The corpus striatum is the main input center of the basal ganglia: upper motor neurons of the frontal-lobe motor areas that control eye movement project to neurons in the caudate, while upper motor neurons from the pre-motor and motor cortices of the frontal lobe connect to neurons in the putamen. The main neurons found within these structures are called medium spiny neurons.
Activation of medium spiny neurons
The activation of medium spiny neurons is generally associated with the occurrence of a movement. Extracellular recordings have shown that these neurons increase their rate of discharge before an impending movement. Such anticipatory discharges seem to be involved in a movement-selection process; they can precede a movement by several seconds and vary according to the spatial location of the movement's target.
Processing movement signals by the basal ganglia
The neurons present in the globus pallidus and substantia nigra are the main output areas of the basal ganglia. These efferent neurons influence the activity of the upper motor neurons. Neurons in these areas are GABAergic, and thus the main output of the basal ganglia is inhibitory, and spontaneous activation of these neurons consistently prevents unwanted movement. The input of medium spiny neurons to these output areas of the basal ganglia is also GABAergic and therefore inhibitory. The net effect of excitatory inputs to the basal ganglia from the cortex is inhibition (via the medium spiny neurons) of the persistently active inhibitory cells in the output center of the basal ganglia. This double inhibitory effect leads to activation of upper motor neurons, which causes subsequent signaling of local-circuit and lower motor neurons to initiate movement. This pathway is defined as the direct pathway through the basal ganglia. There is another, indirect pathway between the corpus striatum and part of the globus pallidus. This indirect pathway also involves the subthalamic nucleus (a nucleus lying just below the thalamus), which receives signals from the cerebral cortex. Excitatory signals from the cortex will activate subthalamic neurons, which are themselves excitatory. Thus, this indirect pathway serves to reinforce inhibition by sending excitatory signals to the GABAergic cells present in the globus pallidus. In effect, this pathway regulates the direct pathway by feeding back onto the output centers of the basal ganglia. The balance between these two pathways processes movement signals and influences the initiation of an impending movement.
Movement disorders and future research
The bereitschaftspotential has also been found to be influenced by movement disorders such as Parkinson's disease (PD), cerebellar lesions, and degenerative diseases of the dentate nucleus. Since at least some part of the BP originates from the SMA, "which receives main dopaminergic input from the basal ganglia via thalamus", many are conducting studies of BP in patients with Parkinson's disease (PD). It has also been found that people with cerebellar lesions have clear abnormalities in their BP. In patients with degenerative diseases of dentate nucleus there is virtually a complete lack of BP. In patients with cerebellar hemispheric lesions the BP is much smaller again or even absent.
Of all the movement disorders studied, Parkinson's is by far the most investigated. Most experiments have studied the effects of lesions on the BP in patients with Parkinson's disease. It was observed that the amplitude of the BP in many Parkinson's patients was significantly reduced, and a BP with a more positive polarity was observed compared to the BP in control subjects without the disease. Other research shows the same reduction in the amplitude of movement-related cortical potentials, consistent with studies showing a decrease in the activation of the SMA in Parkinson's patients. A decreased ability to terminate premovement preparatory activity has been observed in Parkinson's patients, reflected by prolonged activity in the SMA. This prolonged activity may be related to impaired function of the basal ganglia, which are thought to send a termination signal to the SMA. While changes in preparatory movement activity are present in PD patients, executed-movement processes in the brain seem to be unaffected, suggesting that abnormal premovement activity may underlie the difficulty in movement prevalent in Parkinson's disease. More research studying the effects of the disease on MRCPs (such as the BP) and on premovement preparatory activity in PD patients is ongoing.
References
Cognitive neuroscience
Neurophysiology
Motor control | Premovement neuronal activity | [
"Biology"
] | 4,266 | [
"Behavior",
"Motor control"
] |
6,989,858 | https://en.wikipedia.org/wiki/Coordinated%20vulnerability%20disclosure | In computer security, coordinated vulnerability disclosure (CVD, formerly known as responsible disclosure) is a vulnerability disclosure model in which a vulnerability or an issue is disclosed to the public only after the responsible parties have been allowed sufficient time to patch or remedy the vulnerability or issue. This coordination distinguishes the CVD model from the "full disclosure" model.
Developers of hardware and software often require time and resources to repair their mistakes. Often, it is ethical hackers who find these vulnerabilities. Hackers and computer security scientists have the opinion that it is their social responsibility to make the public aware of vulnerabilities. Hiding problems could cause a feeling of false security. To avoid this, the involved parties coordinate and negotiate a reasonable period of time for repairing the vulnerability. Depending on the potential impact of the vulnerability, the expected time needed for an emergency fix or workaround to be developed and applied and other factors, this period may vary between a few days and several months.
Coordinated vulnerability disclosure may fail to satisfy security researchers who expect to be financially compensated. At the same time, reporting vulnerabilities with the expectation of compensation is viewed by some as extortion. While a market for vulnerabilities has developed, vulnerability commercialization (or "bug bounties") remains a hotly debated topic. Today, the two primary players in the commercial vulnerability market are iDefense, which started their vulnerability contributor program (VCP) in 2003, and TippingPoint, with their zero-day initiative (ZDI) started in 2005. These organizations follow the coordinated vulnerability disclosure process with the material bought. Between March 2003 and December 2007 an average 7.5% of the vulnerabilities affecting Microsoft and Apple were processed by either VCP or ZDI. Independent firms financially supporting coordinated vulnerability disclosure by paying bug bounties include Facebook, Google, and Barracuda Networks.
Disclosure policies
Google Project Zero has a 90-day disclosure deadline which starts after notifying vendors of vulnerability, with details shared in public with the defensive community after 90 days, or sooner if the vendor releases a fix.
ZDI has a 120-day disclosure deadline which starts after receiving a response from the vendor.
Examples
Selected security vulnerabilities resolved by applying coordinated disclosure:
MD5 collision attack that shows how to create false CA certificates, 1 week
Starbucks gift card double-spending/race condition to create free extra credits, 10 days (Egor Homakov)
Dan Kaminsky discovery of DNS cache poisoning, 5 months
MBTA vs. Anderson, MIT students find vulnerability in the Massachusetts subway security, 5 months
Radboud University Nijmegen breaks the security of the MIFARE Classic cards, 6 months
The Meltdown vulnerability, hardware vulnerability affecting Intel x86 microprocessors and some ARM-based microprocessors, 7 months.
The Spectre vulnerability, hardware vulnerability with implementations of branch prediction affecting modern microprocessors with speculative execution, allowing malicious processes access to the mapped memory contents of other programs, 7 months.
The ROCA vulnerability, affecting RSA keys generated by an Infineon library and Yubikeys, 8 months.
See also
Information sensitivity
Computer emergency response team
Critical infrastructure protection
References
External links
The CERT Guide to Coordinated Vulnerability Disclosure
CISA Coordinated Vulnerability Disclosure (CVD) Process
Microsoft's Approach to Coordinated Vulnerability Disclosure
Dutch National Cyber Security Centre Coordinated Vulnerability Disclosure Guideline (archive on archive.org)
Hewlett-Packard Coordinated Vulnerability Disclosure policy
Linksys Coordinated Vulnerability Disclosure Program
Global Forum on Cyber Expertise Coordinated Vulnerability Disclosure policy
Philips coordinated vulnerability disclosure statement
ETSI Coordinated Vulnerability Disclosure policy
Computer security procedures
Disclosure | Coordinated vulnerability disclosure | [
"Technology",
"Engineering"
] | 739 | [
"Cybersecurity engineering",
"Computer security procedures",
"Computer security exploits"
] |
6,989,876 | https://en.wikipedia.org/wiki/Vision%20for%20perception%20and%20vision%20for%20action | Vision for perception and vision for action in neuroscience literature refers to two types of visual processing in the brain: visual processing to obtain information about the features of objects such as color, size, shape (vision for perception) versus processing needed to guide movements such as catching a baseball (vision for action). An idea is currently debated that these types of processing are done by anatomically different brain networks. Ventral visual stream subserves vision for perception, whereas dorsal visual stream subserves vision for action. This idea finds support in clinical research and animal experiments.
Visual Processing in the Brain
Visual stimuli have been known to process through the brain via two streams: the dorsal stream and the ventral stream. The dorsal pathway is commonly referred to as the ‘where’ system; this allows the processing of location, distance, position, and motion. This pathway spreads from the primary visual cortex dorsally to the parietal lobe. Information then feeds into the motor cortex of the frontal lobe. The second pathway, the ventral stream, processes information relating to shape, size, objects, orientation, and text. This is commonly known as the ‘what’ system. Visual stimuli in this system process ventrally from the primary visual cortex to the medial temporal lobe. In childhood development, vision for action and vision for perception develop at different rates, supporting the hypothesis of two distinct, linear streams for visual processing.
The above hypothesis has recently been challenged by a newer and, in evolutionary terms, more parsimonious hypothesis, in which the two streams must work hand-in-hand while processing visual information. Neuroanatomical and functional neuroimaging studies have revealed multiple visual maps in the posterior brain, involving at least 40 distinct regions. A single external scene drives processing across these maps, and particular areas have been identified in which single cells react to specific stimuli, such as faces. This hypothesis, which points to a more network-like model, is becoming increasingly accepted among researchers. The pathway model described above now faces many conflicts with experimental findings. It has been discovered experimentally that there is more than one way to process actions; for example, three distinct processing routes could exist dorsally, one for grasping, another for reaching, and a third for awareness of one's own actions. A single dorsal stream can therefore no longer account for the processing of vision for action. The earlier hypothesis also posits a clear hierarchy in which the processing of visual stimuli proceeds linearly from least complex to most complex; if that were so, lesions at one end of the hierarchy should have corresponding effects at the opposite end, which is not observed experimentally. This further supports the integration of the two streams, with many visual processes operating in parallel and involving multiple ventral and dorsal streams in a patchwork-like model.
However, although there are two different hypotheses regarding the processing of vision in the human brain, it is still possible to accept both. Recent experiments show that it is difficult to draw a clear distinction between vision for action and vision for perception. Studies show that visual illusions, which primarily involve perception, can have considerable effects on action. This argues against the strong form of the first hypothesis noted above, in which visually directed actions always bypass perception. A weaker form of the first hypothesis can still be maintained: the content of conscious perception sometimes influences action, but its impact on action is less pronounced. Both the presumed ventral and dorsal streams can guide action, although the contribution of ventrally processed information to action appears weaker, and that information appears more important for perceptual tasks. It has been noted that one can still accept the two-stream hypothesis, but in doing so one must also recognize that the hypothesis acknowledges the sharing of visual information across pathways and functions, heavily shaped by behavioral tasks.
See also
Two-streams hypothesis
References
Visual system
Neurophysiology
Motor control | Vision for perception and vision for action | [
"Biology"
] | 794 | [
"Behavior",
"Motor control"
] |
6,990,113 | https://en.wikipedia.org/wiki/Van%20der%20Grinten%20projection | The van der Grinten projection is a compromise map projection, which means that it is neither equal-area nor conformal. Unlike perspective projections, the van der Grinten projection is an arbitrary geometric construction on the plane. Van der Grinten projects the entire Earth into a circle. It largely preserves the familiar shapes of the Mercator projection while modestly reducing Mercator's distortion. Polar regions are subject to extreme distortion. Lines of longitude converge to points at the poles.
History
Alphons J. van der Grinten invented the projection in 1898 and received US patent #751,226 for it and three others in 1904. The National Geographic Society adopted the projection for their reference maps of the world in 1922, raising its visibility and stimulating its adoption elsewhere. In 1988, National Geographic replaced the van der Grinten projection with the Robinson projection.
Geometric construction
The geometric construction given by van der Grinten can be written algebraically:

x = \pm\pi \, \frac{A(G - P^2) + \sqrt{A^2(G - P^2)^2 - (P^2 + A^2)(G^2 - P^2)}}{P^2 + A^2}

y = \pm\pi \, \frac{PQ - A\sqrt{(A^2 + 1)(P^2 + A^2) - Q^2}}{P^2 + A^2}

where x takes the sign of λ − λ₀, y takes the sign of φ, and

A = \frac{1}{2}\left|\frac{\pi}{\lambda - \lambda_0} - \frac{\lambda - \lambda_0}{\pi}\right|, \quad G = \frac{\cos\theta}{\sin\theta + \cos\theta - 1}, \quad P = G\left(\frac{2}{\sin\theta} - 1\right), \quad \theta = \arcsin\left|\frac{2\varphi}{\pi}\right|, \quad Q = A^2 + G.

If φ = 0, then

x = \lambda - \lambda_0, \quad y = 0.

Similarly, if λ = λ₀ or φ = ±π/2, then

x = 0, \quad y = \pm\pi\tan(\theta/2).

In all cases, φ is the latitude, λ is the longitude, and λ₀ is the central meridian of the projection.
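For readers who want to experiment with the construction, here is a minimal Python sketch. It assumes the standard closed-form statement of the van der Grinten (I) projection as given in Snyder's Map Projections: A Working Manual; the function name, the use of radians, and the sphere radius parameter R are illustrative choices, not part of the original construction.

```python
import math

def van_der_grinten(lon, lat, lon0=0.0, R=1.0):
    """Forward van der Grinten (I) projection; angles in radians.

    A sketch following Snyder's closed-form equations (assumed here);
    returns planar (x, y) in the same units as R.
    """
    dlon = lon - lon0
    if abs(lat) < 1e-12:                      # points on the equator
        return R * dlon, 0.0
    theta = math.asin(abs(2.0 * lat / math.pi))
    if abs(dlon) < 1e-12 or abs(abs(lat) - math.pi / 2) < 1e-12:
        # central meridian or the poles
        return 0.0, math.copysign(math.pi * R * math.tan(theta / 2.0), lat)
    A = 0.5 * abs(math.pi / dlon - dlon / math.pi)
    G = math.cos(theta) / (math.sin(theta) + math.cos(theta) - 1.0)
    P = G * (2.0 / math.sin(theta) - 1.0)
    Q = A * A + G
    denom = P * P + A * A
    x = math.pi * R * (A * (G - P * P)
        + math.sqrt(A * A * (G - P * P) ** 2 - denom * (G * G - P * P))) / denom
    y = math.pi * R * (P * Q
        - A * math.sqrt((A * A + 1.0) * denom - Q * Q)) / denom
    return math.copysign(x, dlon), math.copysign(y, lat)

# Example: project 45°N, 90°E with the central meridian at Greenwich.
print(van_der_grinten(math.radians(90), math.radians(45)))
```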
Van der Grinten IV projection
The van der Grinten IV projection is a later polyconic map projection developed by Alphons J. van der Grinten.
The central meridian and equator are straight lines. All other meridians and parallels are arcs of circles.
See also
List of map projections
Robinson projection (successor)
References
Bibliography
Map projections | Van der Grinten projection | [
"Mathematics"
] | 318 | [
"Map projections",
"Coordinate systems"
] |
6,990,718 | https://en.wikipedia.org/wiki/Audio-visual%20speech%20recognition | Audio visual speech recognition (AVSR) is a technique that uses image processing capabilities in lip reading to aid speech recognition systems in recognizing undeterministic phones or giving preponderance among near probability decisions.
Each system of lip reading and speech recognition works separately, and their results are mixed at the stage of feature fusion. As the name suggests, it has two parts: an audio part and a visual part. In the audio part, features such as the log-mel spectrogram or MFCCs are extracted from the raw audio samples and a model turns them into a feature vector. In the visual part, some variant of a convolutional neural network is generally used to compress the image into a feature vector. These two vectors (audio and visual) are then concatenated and used to predict the target output.
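As a rough illustration of this feature-level fusion, here is a minimal PyTorch-style sketch; the layer sizes, module names, and the simple linear classifier are illustrative assumptions rather than a reference implementation.

```python
import torch
import torch.nn as nn

class AVFusionModel(nn.Module):
    """Toy audio-visual fusion: encode each modality, concatenate, classify."""

    def __init__(self, n_audio_feats=40, n_classes=10):
        super().__init__()
        # Audio branch: e.g. log-mel / MFCC features -> fixed-size vector.
        self.audio_encoder = nn.Sequential(
            nn.Linear(n_audio_feats, 128), nn.ReLU(), nn.Linear(128, 64))
        # Visual branch: a small CNN compressing a lip-region image to a vector.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 4 * 4, 64))
        # Classifier operating on the concatenated (audio + visual) vector.
        self.classifier = nn.Linear(64 + 64, n_classes)

    def forward(self, audio, frames):
        a = self.audio_encoder(audio)      # (batch, 64)
        v = self.visual_encoder(frames)    # (batch, 64)
        fused = torch.cat([a, v], dim=1)   # feature-level fusion
        return self.classifier(fused)

# Example: a batch of 2 utterances, 40 audio features, 32x32 mouth crops.
model = AVFusionModel()
logits = model(torch.randn(2, 40), torch.randn(2, 1, 32, 32))
```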
External links
IBM Research - Audio Visual Speech Technologies
Looking to listen at cocktail party
Google AI blog
Computational linguistics
Speech recognition
Applications of computer vision
Multimodal interaction | Audio-visual speech recognition | [
"Technology"
] | 204 | [
"Natural language and computing",
"Computational linguistics"
] |
6,990,865 | https://en.wikipedia.org/wiki/No-observed-adverse-effect%20level | The no-observed-adverse-effect level (NOAEL) denotes the level of exposure of an organism, found by experiment or observation, at which there is no biologically or statistically significant increase in the frequency or severity of any adverse effects of the tested protocol. In drug development, the NOAEL of a new drug is assessed in laboratory animals, such as mice, prior to initiation of human trials in order to establish a safe clinical starting dose in humans. The OECD publishes guidelines for Preclinical Safety Assessments, in order to help scientists discover the NOAEL.
Synopsis
Some adverse effects in the exposed population when compared to its appropriate control might include alteration of morphology, functional capacity, growth, development or life span. The NOAEL is determined or proposed by qualified personnel, often a pharmacologist or a toxicologist.
The NOAEL could be defined as "the highest experimental point that is without adverse effect," meaning that under laboratory conditions, it is the level at which there are no side effects. However, this definition does not capture the effects of the drug with respect to duration and dose, nor does it address the interpretation of risk based on toxicologically relevant effects.
In toxicology it is specifically the highest tested dose or concentration of a substance (i.e. a drug or chemical) or agent (e.g. radiation), at which no such adverse effect is found in exposed test organisms where higher doses or concentrations resulted in an adverse effect.
The NOAEL level may be used in the process of establishing a dose-response relationship, a fundamental step in most risk assessment methodologies.
Synonyms
The NOAEL is also known as NOEL (no-observed-effect level) as well as NEC (no-effect concentration) and NOEC (no-observed-effect concentration).
US EPA definition
The United States Environmental Protection Agency defines NOAEL as 'an exposure level at which there are no statistically or biologically significant increases in the frequency or severity of adverse effects between the exposed population and its appropriate control; some effects may be produced at this level, but they are not considered as adverse, or as precursors to adverse effects. In an experiment with several NOAELs, the regulatory focus is primarily on the highest one, leading to the common usage of the term NOAEL as the highest exposure without adverse effects.'
See also
Lowest-observed-adverse-effect level
Measures of pollutant concentration
References
Concentration indicators
Environmental policy in the United States
Food law
Food safety
Food and Drug Administration
Toxicology
United States Environmental Protection Agency | No-observed-adverse-effect level | [
"Environmental_science"
] | 517 | [
"Toxicology",
"Toxicology stubs"
] |
6,991,065 | https://en.wikipedia.org/wiki/Ammonium%20sulfamate | Ammonium sulfamate (or ammonium sulphamate) is a white crystalline solid, readily soluble in water. It is commonly used as a broad spectrum herbicide, with additional uses as a compost accelerator, flame retardant and in industrial processes.
Manufacture and distribution
It is a salt formed from ammonia and sulfamic acid.
Ammonium sulfamate is distributed under the following tradenames, which are principally herbicidal product names: Amicide, Amidosulfate, Ammate, Amcide, Ammate X-NI, AMS, Fyran 206k, Ikurin, Sulfamate, AMS and Root-Out.
Uses
Herbicide
Ammonium sulfamate is considered to be particularly useful in controlling tough woody weeds, tree stumps and brambles.
Ammonium sulfamate has been successfully used in several major UK projects by organisations like the British Trust for Conservation Volunteers, English Heritage, the National Trust, and various railway, canal and waterways authorities.
Several years ago the Henry Doubleday Research Association (HDRA), now known as Garden Organic, published an article on ammonium sulfamate after a successful set of herbicide trials. Though not approved for use by organic growers, it does provide an option when alternatives have failed.
The following problem weeds / plants can be controlled:
Japanese Knotweed (Reynoutria japonica, syn. Fallopia japonica),
Marestail / Horsetail (Equisetum),
Ground-elder (Aegopodium podagraria),
Rhododendron ponticum,
Brambles,
Brushwood,
Ivy (Hedera species),
Senecio/Ragwort,
Honey fungus (Armillaria), and
felled tree stumps and most other tough woody specimens.
Compost accelerator
Ammonium sulfamate is used as a composting accelerator in horticultural settings. It is especially effective in breaking down the tougher and woodier weeds put onto the compost heap.
Flame retardant
Ammonium sulfamate (like other ammonium salts, e.g. Ammonium dihydrogen phosphate, Ammonium sulfate) is a useful flame retardant. These salt based flame retardants offer advantages over other metal/mineral-based flame retardants in that they are water processable. Their relatively low decomposition temperature makes them suitable for flame retarding cellulose based materials (paper/wood). Ammonium sulfamate (like Ammonium dihydrogen phosphate) is sometimes used in conjunction with Magnesium sulfate or Ammonium sulfate (in ratios of approximately 2:1) for enhanced flame retardant properties.
Other uses
Within industry ammonium sulfamate is used as a flame retardant, a plasticiser and in electro-plating. Within the laboratory it is used as a reagent.
Safety
Ammonium sulfamate is considered to be only slightly toxic to humans and other animals, making it appropriate for amateur home garden, professional and forestry uses. It is generally accepted to be safe for use on plots of land that will be used for growing fruit and vegetables intended for consumption.
It corrodes brass, copper, and iron. Its contact with eyes or skin can be harmful unless it is quickly washed off.
In the United States, the Occupational Safety and Health Administration has set a permissible exposure limit at 15 mg/m3 over an eight-hour time-weighted average, while the National Institute for Occupational Safety and Health recommends exposures no greater than 10 mg/m3 over an eight-hour time-weighted average. These occupational exposure limits are protective values, given the IDLH concentration is set at 1500 mg/m3.
It is also considered to be environmentally friendly due to its degradation to non-harmful residues.
European Union licensing
The pesticides review by the European Union led to herbicides containing ammonium sulfamate becoming unlicensed, and therefore effectively banned, from 2008.
Its availability and use as a compost accelerator is unaffected by the EU's pesticide legislation.
See also
Sulfamide
References
Herbicides
Ammonium compounds
Sulfamates | Ammonium sulfamate | [
"Chemistry",
"Biology"
] | 846 | [
"Sulfamates",
"Herbicides",
"Functional groups",
"Salts",
"Ammonium compounds",
"Biocides"
] |
6,991,147 | https://en.wikipedia.org/wiki/Superman%20logo | The Superman shield, also known as the Superman logo, Superman symbol, or Superman S, is the iconic emblem for the fictional DC Comics superhero Superman. As a representation of one of the first superheroes, it served as a template for character design decades after Superman's first appearance. The tradition of wearing a representative symbol on the chest was followed by many subsequent superheroes, including Batman, Spider-Man, Green Lantern, the Flash, Wonder Woman, Hawkman, and many others.
In its current form, the logo is a red capital "S" inside a pentagonal yellow stylized shield with a red border. In earlier Superman stories, "S" was simply an initial for "Superman", but in the 1978 film, it was portrayed as the family crest of the House of El, the family of Superman.
Evolution of the symbol
In its original inception in Action Comics #1, Superman's symbol was a letter S with red and blue on a yellow police badge symbol that resembled a shield. The symbol was first changed a few issues later in Action Comics #7. The shield varied over the first few years of the comics, and many times was nothing more than an inverted triangle with an S inside of it.
The shield first became a diamond in the Fleischer cartoon serial Superman. It was black with a red S outlined with white (or occasionally with yellow). The S has varied in size and shape and the diamond shape containing it has also changed size and shape. Extreme examples of this would be the very large logo on the Dean Cain costume from the television series Lois & Clark: The New Adventures of Superman and the comparatively small version of the shield as depicted in the 2006 film Superman Returns. It has, in most incarnations, retained its original color, while changing shape here and there. The classic logo is the basis for virtually all other interpretations of the logo.
In the mid-1990s, when Superman's costume and powers changed briefly, during the "Superman Red/Superman Blue" comic book storylines, the shield changed colors and slightly changed shape, in accordance with the changes in the costume. In 1992's 75th issue of Vol 2, the logo is shown soaked in blood, depicting Superman's death. In 1997's Superman Vol 2 #123, Superman's new powers forced him to find a suit that was capable of containing his new abilities. The effective material found to create Superman's suit, along with being blue and silver, was also courtesy of Lex Luthor. Not only did the colors and powers change, the logo also evolved into a re-imagined "electric" emblem. In the miniseries Kingdom Come, an aged Superman sported an S-like symbol against a black background in a red pentagon.
In the 2000s television series Smallville, the S symbol originally appeared as a diamond with a vertical infinity symbol (to be similar to the S), in an attempt to make it seem more alien. However, beginning with the Season Six opening episode "Zod", Clark is given a crystal bearing the S logo in the style of the Superman Returns logo and is seen several times throughout the series and specifically in the season finale when a beam shaped like the logo hits a Phantom Zone escapee. The logo also appeared in the first episode of the series, when the students captured Clark by giving him a kryptonite necklace (which they didn't know was a weakness) and tied him to a cross-like beam in a cornfield and painted a red 'S' on Clark's chest (standing for Scarecrow, which was what he was meant to be as part of a high school hazing tradition) as a reference.
Symbol evolution
Some of the symbols used by Superman through the years are:
Notes
Representations
Initially, the S-shield had one meaning: S for Superman. One of the first alternative meanings was presented in Superman: The Movie, in which it was not an S, but rather the S-shaped Coat of arms of the House of El as it was Brando's idea to have Jor-El wear the "S" as a family crest, from which folklore spun off. After the Superman reboot story The Man of Steel, the symbol's story was that it was designed by Jonathan Kent and was derived from an ancient Native American symbol. The symbol was featured on a medicine blanket given to an ancestor of the Kent family by a Native American tribe after he helped to cure them of a plague and was supposed to represent a snake, an animal held to possess healing powers by the tribe (implying that, by wearing this symbol, Superman was a metaphorical healer). This was also included in the 1997 Superman encyclopedia. In 2004, Mark Waid's Superman: Birthright series says the S-Shield is the Kryptonian symbol for "hope" and Superman believes it may have begun as a coat of arms for the House of El. Later, writer Geoff Johns confirmed it was indeed a coat of arms, as well as a symbol for hope. In the 2013 film Man of Steel, Jor-El mentions that the symbol represents the House of El and means "hope", and Superman later also specifies that it is not an "S" when Lois Lane asks what the "S" was short for. Superman further explains that the design is based on a river in the 2017 film Justice League.
In Supergirl (TV series, 2015) Season 1, episode 2 Kara states that the Kryptonian symbol stands for her family's motto, "Stronger together."
House of El
Marv Wolfman's novelization of the film Superman Returns depicts the symbol as belonging to one of the three primary houses of Krypton that brought peace to the planet after a civil uprising, a serpent coiled inside a shield, a warning not to return to the ways of violence and deception. The shield has since been taken as a coat of arms, resulting in Jor-El's continued presence in the council despite his "mad" findings. The other two are merely described, including Pol-Us' eye of vigilance and Kol-Ar's open hand of truth and justice.
Cape
In almost all versions, Superman's red cape has an all-yellow version of the logo, with a thin black line separating the areas. A notable exception is in Superman: The Animated Series, where the logo was absent from Superman's cape. This was due to the difficulty in animating the S on the flowing cape. In the 2006 film Superman Returns, the logo appeared on the belt buckle in reverse colors (yellow diamond and S with red background pieces) instead of on Superman's cape. The logo is also absent from Superman's cape in the 2013 film Man of Steel due to the difficulty of animating a cape with the logo in CGI shots. In the 2011 reboot of the comic book series, on his cape is a black pentagon and S with a red background piece.
Variants
In the 1960s the tiny bottle city of Kandor, the miniaturized capital city of the planet Krypton, resided in Superman's Fortress of Solitude. Originally, certain inhabitants resembling Kal-El formed the Superman Emergency Squad and would, on occasion, leave the bottle, enter the Earth's atmosphere and gain super powers to aid Superman. As designed by artist Curt Swan, their uniforms were similar to that of Superman save for the 'S' on their chests which resembled the early version, the 'S' and border in red on a yellow field, but in an elongated triangle.
During the Reign of the Supermen story arc, each of the four different Supermen was represented by a variant of the symbol, which each wore on their person.
The Last Son of Krypton (The Eradicator) wore a normal shield when he attempted to continue Superman's career. Later on, he wore a slightly altered, more curved version with an opening in the border and which was red and black instead of yellow and red.
The Man of Tomorrow (The Cyborg)'s shield was half-normal on the left side, but the red darkened to an almost black color on the right half.
Superboy wore an all-yellow symbol stitched into the back of his leather jacket, in addition to a normal one on his chest.
The Man of Steel wore an all-metallic symbol. The classic 'S' was redesigned into metallic burnt red color with a grey undertone in the background.
Other variations include:
Black and red
The modern Superboy wears a black and red variant of the symbol on his third costume. Lex Luthor hypothesized it is because that version of the symbol was everywhere following the death of Superman and his consequent first appearance.
In Kingdom Come, Superman wears a black and red, simplified version following his return.
After the Imperiex War, Superman wore the black and red variant to signify his mourning of the losses during the war.
The Eradicator, for a time, wore a red and blacked, curvier version of the S-Shield.
Bizarro's symbol is a reversed purple and yellow version.
The inverted symbol, first seen in 52, means "resurrection" in Kryptonian.
Jor-El sports a white symbol on his black clothing, as well as a black symbol on his white clothing in the 1978 Superman movie.
In his modern appearances, Superman of Earth-2 wears a slightly different version of the symbol. Its most notable difference is an exaggerated serif on the 'S'.
A variation of the symbol, designed as a stubbed red lightning bolt against a black shield has been used in several media, including an Elseworlds where Darkseid raised Kal-El, and was the basis of one of the final season of Superman: The Animated Series, and an episode named Brave New Metropolis from an alternate reality where Superman and Luthor took over Metropolis.
A further variation, this time a white symbol against a red background, was used in episodes of Justice League and Justice League Unlimited, including an over the shoulder style cape and black costume. This version was a fascistic Superman and his Justice Lords who assassinated President Lex Luthor and then ruled the world with an iron fist after Luthor arranged the murder of The Flash.
Black and white
Superman briefly wore an all-black costume with a white shield following his resurrection at the conclusion of The Death of Superman.
In Batman Beyond, an aged Superman wears a heavily stylised symbol, consisting of a white slash within an inverted black triangle, and a rhombus in the upper left corner. A similar symbol is worn by an adult Jonathan Samuel Kent in the alternate future depicted in Futures End.
After being transported to the New 52 universe, the pre-Flashpoint Superman wore a black costume with an emblem resembling a white version of the Kingdom Come shield.
The symbol has been adapted to various flags in alternate realities, including the Nazi swastika (in JL-Axis) and Soviet Union hammer and sickle (Superman: Red Son).
In the movie Justice League: The New Frontier, and the Elseworlds story JSA: The Liberty Files, the shield has black background pieces and a yellow outline, resembling the Fleischer Studios version.
In Superman: Earth One, the shield has an additional yellow outline.
In Supergirl, the emblem worn by the eponymous Supergirl has a blue background, with yellow outlines around the red S and border.
Wearers of the shield
Kryptonian family members of Superman wear the symbol, but sometimes it is worn to honor Superman. After The Death of Superman, many DC Comics superheroes wore a black armband with the Superman logo.
Superman
Kal-El
Kal-L (variant)
Kal Kent (variant)
Supergirl
Kara Zor-El
Linda Danvers
Matrix
Superboy
Kal-El (Superman as a teenager; pre-1986)
Conner Kent (red and black variant)
Superboy-Prime (carves an S-symbol in blood on his chest).
Jonathan Samuel Kent (the new Superboy in DC Rebirth, son of Superman and Lois Lane)
Superwoman
Lucy Lane (white and red variant)
House of El
Jor-El (Superman: Birthright, Superman, Action Comics #850, Man of Steel)
Lar Gand (Mon-El)
Zor-El
Others
Krypto (shown as tag on his collar)
Bizarro (reversed 'S', sometimes purple and yellow)
The Superman Emergency Squad from Kandor, a red 'S' in an elongated triangle
Hank Henshaw (red and black variant, like Kon-El. It is sometimes shown as a white and black variant)
Steel (John Henry Irons)
Natasha Irons
Eradicator (variant)
Strange Visitor (variant)
Preus
Alura (red and white variant)
Diamond and eight shield
In the television series Smallville, an octagonal silver metallic key is used in the Kawatche caves, as well as in a transport device used to travel to the Fortress of Solitude. Along the edges are imprinted Kryptonian characters, one of which is the diamond border of the Superman shield with a figure 8 inside of the diamond instead of the S. The 8-shield is later described as being from an ancient form of the Kryptonian language (in the Kryptonian alphabet employed in the 2004 storyline The Supergirl from Krypton, it corresponds to the letter S), while the modern symbol is the familiar S-shield. The familiar S-shield is seen in season 9. It is used as Clark Kent's calling card in his vigilante actions in Metropolis. Clark, known as the Blur, burns the sign using his heat vision in various places throughout the city. In the season 9 episode "Rabid", Clark mentions while talking to Oliver Queen that the mark "gives people hope."
References
Fictional elements introduced in 1938
Fictional symbols
Logos
Logo of Superman
Symbols introduced in 1938 | Superman logo | [
"Mathematics"
] | 2,845 | [
"Symbols",
"Fictional symbols"
] |
6,991,303 | https://en.wikipedia.org/wiki/Switched%20communication%20network | In computer networking and telecommunications, a switched communication network is a communication network which uses switching for connection of two non-adjacent nodes.
Switched communication networks are divided into circuit switched networks, message switched networks, and packet switched networks.
See also
Broadcast communication network
Fully connected network
Network architecture | Switched communication network | [
"Technology",
"Engineering"
] | 57 | [
"Network architecture",
"Computing stubs",
"Computer networks engineering",
"Computer network stubs"
] |
6,991,605 | https://en.wikipedia.org/wiki/Broadcast%20communication%20network | In computer networking and telecommunications, a broadcast communication network is a communication network which uses broadcasting for communication between its nodes. They take messages from a single sender and transmit to all endpoints on the network. For example, radio, television, etc.
See also
Fully connected network
Multicast
Switched communication network
Telecommunications engineering | Broadcast communication network | [
"Technology",
"Engineering"
] | 64 | [
"Computing stubs",
"Electrical engineering",
"Telecommunications engineering",
"Computer network stubs"
] |
6,992,066 | https://en.wikipedia.org/wiki/IOK-1 | IOK-1 is a distant galaxy in the constellation Coma Berenices. When discovered in 2006, it was the oldest and most distant galaxy ever found, at redshift 6.96.
It was discovered in April 2006 by Masanori Iye at National Astronomical Observatory of Japan using the Subaru Telescope in Hawaii and is seen as it was 12.88 billion years ago. Its emission of Lyman alpha radiation has a redshift of 6.96, corresponding to just 750 million years after the Big Bang. While some scientists have claimed other objects (such as Abell 1835 IR1916) to be even older, the IOK-1's age and composition have been more reliably established.
"IOK" stands for the observers' names Iye, Ota, and Kashikawa.
See also
Abell 2218
Abell 370
A1689-zD1
UDFy-38135539
List of the most distant astronomical objects
References
Galaxies
Coma Berenices | IOK-1 | [
"Astronomy"
] | 205 | [
"Coma Berenices",
"Galaxies",
"Astronomical objects",
"Constellations"
] |
6,992,164 | https://en.wikipedia.org/wiki/Ordered%20Bell%20number | In number theory and enumerative combinatorics, the ordered Bell numbers or Fubini numbers count the weak orderings on a set of elements. Weak orderings arrange their elements into a sequence allowing ties, such as might arise as the outcome of a horse race.
The ordered Bell numbers were studied in the 19th century by Arthur Cayley and William Allen Whitworth. They are named after Eric Temple Bell, who wrote about the Bell numbers, which count the partitions of a set; the ordered Bell numbers count partitions that have been equipped with a total order. Their alternative name, the Fubini numbers, comes from a connection to Guido Fubini and Fubini's theorem on equivalent forms of multiple integrals. Because weak orderings have many names, ordered Bell numbers may also be called by those names, for instance as the numbers of preferential arrangements or the numbers of asymmetric generalized weak orders.
These numbers may be computed via a summation formula involving binomial coefficients, or by using a recurrence relation. They also count combinatorial objects that have a bijective correspondence to the weak orderings, such as the ordered multiplicative partitions of a squarefree number or the faces of all dimensions of a permutohedron.
Definitions and examples
Weak orderings arrange their elements into a sequence allowing ties. This possibility describes various real-world scenarios, including certain sporting contests such as horse races. A weak ordering can be formalized axiomatically by a partially ordered set for which incomparability is an equivalence relation. The equivalence classes of this relation partition the elements of the ordering into subsets of mutually tied elements, and these equivalence classes can then be linearly ordered by the weak ordering. Thus, a weak ordering can be described as an ordered partition, a partition of its elements and a total order on the sets of the partition. For instance, the ordered partition {a,b},{c},{d,e,f} describes an ordered partition on six elements in which a and b are tied and both less than the other four elements, and c is less than d, e, and f, which are all tied with each other.
The nth ordered Bell number, denoted here a(n), gives the number of distinct weak orderings on n elements. For instance, there are three weak orderings on the two elements a and b: they can be ordered with a before b, with b before a, or with both tied. The figure shows the 13 weak orderings on three elements.
Starting from a(0) = 1, the ordered Bell numbers are
1, 1, 3, 13, 75, 541, 4683, 47293, 541835, ...
When the elements to be ordered are unlabeled (only the number of elements in each tied set matters, not their identities) what remains is a composition or ordered integer partition, a representation of n as an ordered sum of positive integers. For instance, the ordered partition {a,b},{c},{d,e,f} discussed above corresponds in this way to the composition 2 + 1 + 3. The number of compositions of n is exactly 2^{n−1}. This is because a composition is determined by its set of partial sums, which may be any subset of the integers from 1 to n − 1.
History
The ordered Bell numbers appear in the work of Cayley, who used them to count certain plane trees with totally ordered leaves. In the trees considered by Cayley, each root-to-leaf path has the same length, and the number of nodes at distance i from the root must be strictly smaller than the number of nodes at distance i + 1, until reaching the leaves. In such a tree, the pairs of adjacent leaves may be weakly ordered by the height of their lowest common ancestor; this weak ordering determines the tree. Later authors call the trees of this type "Cayley trees", and they call the sequences that may be used to label their gaps (sequences of positive integers that include at least one copy of each positive integer between one and the maximum value in the sequence) "Cayley permutations".
The problem of counting weak orderings, which has the same sequence as its solution, has been traced to the work of Whitworth. These numbers were called Fubini numbers by Louis Comtet, because they count the different ways to rearrange the ordering of sums or integrals in Fubini's theorem, which in turn is named after Guido Fubini. The Bell numbers, named after Eric Temple Bell, count the partitions of a set, and the weak orderings that are counted by the ordered Bell numbers may be interpreted as a partition together with a total order on the sets in the partition.
The equivalence between counting Cayley trees and counting weak orderings was observed in 1970 by Donald Knuth, using an early form of the On-Line Encyclopedia of Integer Sequences (OEIS). This became one of the first successful uses of the OEIS to discover equivalences between different counting problems.
Formulas
Summation
Because weak orderings can be described as total orderings on the subsets of a partition, one can count weak orderings by counting total orderings and partitions, and combining the results appropriately. The Stirling numbers of the second kind, denoted S(n,k), count the partitions of an n-element set into k nonempty subsets. A weak ordering may be obtained from such a partition by choosing one of the k! total orderings of its subsets. Therefore, the ordered Bell numbers can be counted by summing over the possible numbers of subsets in a partition (the parameter k) and, for each value of k, multiplying the number of partitions S(n,k) by the number of total orderings k!. That is, as a summation formula:
a(n) = \sum_{k=0}^{n} k! \, S(n,k)
By general results on summations involving Stirling numbers, it follows that the ordered Bell numbers are log-convex, meaning that they obey the inequality a(n−1)·a(n+1) ≥ a(n)² for all n.
An alternative interpretation of the terms of this sum is that they count the features of each dimension in a permutohedron of dimension n − 1, with the kth term counting the features of dimension n − k. A permutohedron is a convex polytope, the convex hull of points whose coordinate vectors are the permutations of the numbers from 1 to n. These vectors are defined in a space of dimension n, but they and their convex hull all lie in an (n − 1)-dimensional affine subspace. For instance, the three-dimensional permutohedron is the truncated octahedron, the convex hull of points whose coordinates are permutations of (1,2,3,4), in the three-dimensional subspace of points whose coordinate sum is 10. This polyhedron has one volume (the k = 1 term), 14 two-dimensional faces (k = 2), 36 edges (k = 3), and 24 vertices (k = 4). The total number of these faces is 1 + 14 + 36 + 24 = 75, an ordered Bell number, corresponding to the summation formula above for n = 4.
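A short Python sketch of this summation formula (the helper names are arbitrary) reproduces the permutohedron face counts just described:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind S(n, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def ordered_bell(n):
    """a(n) = sum over k of k! * S(n, k)."""
    return sum(factorial(k) * stirling2(n, k) for k in range(n + 1))

# Faces of the three-dimensional permutohedron (n = 4): 1 + 14 + 36 + 24 = 75.
print([factorial(k) * stirling2(4, k) for k in range(1, 5)])  # [1, 14, 36, 24]
print(ordered_bell(4))                                        # 75
```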
By expanding each Stirling number in this formula into a sum of binomial coefficients, the formula for the ordered Bell numbers may be expanded out into a double summation. The ordered Bell numbers may also be given by an infinite series:
a(n) = \frac{1}{2} \sum_{j=0}^{\infty} \frac{j^n}{2^j}
Another summation formula expresses the ordered Bell numbers in terms of the Eulerian numbers \left\langle{n \atop k}\right\rangle, which count the permutations of n items in which k pairs of consecutive items are in increasing order:
a(n) = \sum_{k=0}^{n-1} \left\langle{n \atop k}\right\rangle 2^k = A_n(2),
where A_n is the nth Eulerian polynomial. One way to explain this summation formula involves a mapping from weak orderings on the numbers from 1 to n to permutations, obtained by sorting each tied set into numerical order. Under this mapping, each permutation with k consecutive increasing pairs comes from 2^k weak orderings, distinguished from each other by the subset of the consecutive increasing pairs that are tied in the weak ordering.
Generating function and approximation
As with many other integer sequences, reinterpreting the sequence as the coefficients of a power series and working with the function that results from summing this series can provide useful information about the sequence.
The fast growth of the ordered Bell numbers causes their ordinary generating function to diverge; instead the exponential generating function is used. For the ordered Bell numbers, it is:
\sum_{n=0}^{\infty} a(n) \frac{x^n}{n!} = \frac{1}{2 - e^x}
Here, the left hand side is just the definition of the exponential generating function and the right hand side is the function obtained from this summation.
The form of this function corresponds to the fact that the ordered Bell numbers are the numbers in the first column of the infinite matrix (2I − P)^{−1}. Here I is the identity matrix and P is an infinite matrix form of Pascal's triangle. Each row of P starts with the numbers in the same row of Pascal's triangle, and then continues with an infinite repeating sequence of zeros.
Based on a contour integration of this generating function, the ordered Bell numbers can be expressed by the infinite sum
a(n) = \frac{n!}{2} \sum_{k=-\infty}^{\infty} (\ln 2 + 2\pi i k)^{-(n+1)}.
Here, ln stands for the natural logarithm. This leads to an approximation for the ordered Bell numbers, obtained by using only the term for k = 0 in this sum and discarding the remaining terms:
a(n) \sim \frac{n!}{2(\ln 2)^{n+1}}
Thus, the ordered Bell numbers are larger than the factorials by an exponential factor. Here, as in Stirling's approximation to the factorial, the \sim indicates asymptotic equivalence. That is, the ratio between the ordered Bell numbers and their approximation tends to one in the limit as n grows arbitrarily large. Expressed in little o notation, the relative error is o(1), and the error term in fact decays exponentially as n grows.
Comparing the approximations for and shows that
For example, taking gives the approximation to .
This sequence of approximations, and this example from it, were calculated by Ramanujan, using a general method for solving equations numerically (here, the equation ).
Recurrence and modular periodicity
As well as the formulae above, the ordered Bell numbers may be calculated by the recurrence relation
a(n) = \sum_{i=1}^{n} \binom{n}{i} a(n-i).
The intuitive meaning of this formula is that a weak ordering on n items may be broken down into a choice of some nonempty set of i items that go into the first equivalence class of the ordering, together with a smaller weak ordering on the remaining n − i items. There are \binom{n}{i} choices of the first set, and a(n − i) choices of the weak ordering on the rest of the elements. Multiplying these two factors, and then summing over the choices of how many elements to include in the first set, gives the number of weak orderings, a(n). As a base case for the recurrence, a(0) = 1 (there is one weak ordering on zero items). Based on this recurrence, these numbers can be shown to obey certain periodic patterns in modular arithmetic: for sufficiently large n,
a(n + 4) \equiv a(n) \pmod{10}.
Many more modular identities are known, including identities modulo any prime power. Peter Bala has conjectured that this sequence is eventually periodic (after a finite number of terms) modulo each positive integer k, with a period that divides Euler's totient function of k, the number of residues mod k that are relatively prime to k.
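A minimal sketch of the recurrence (function name arbitrary) also makes the mod-10 pattern easy to inspect:

```python
from math import comb

def ordered_bell_list(n_max):
    """Compute a(0..n_max) from the recurrence a(n) = sum_i C(n, i) * a(n - i)."""
    a = [1]  # base case: a(0) = 1
    for n in range(1, n_max + 1):
        a.append(sum(comb(n, i) * a[n - i] for i in range(1, n + 1)))
    return a

a = ordered_bell_list(12)
print(a[:7])                    # [1, 1, 3, 13, 75, 541, 4683]
print([x % 10 for x in a[1:]])  # repeats 1, 3, 3, 5 with period 4
```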
Applications
Combinatorial enumeration
As has already been mentioned, the ordered Bell numbers count weak orderings, permutohedron faces, Cayley trees, Cayley permutations, and equivalent formulae in Fubini's theorem. Weak orderings in turn have many other applications. For instance, in horse racing, photo finishes have eliminated most but not all ties, called in this context dead heats, and the outcome of a race that may contain ties (including all the horses, not just the first three finishers) may be described using a weak ordering. For this reason, the ordered Bell numbers count the possible number of outcomes of a horse race. In contrast, when items are ordered or ranked in a way that does not allow ties (such as occurs with the ordering of cards in a deck of cards, or batting orders among baseball players), the number of orderings for n items is a factorial number n!, which is significantly smaller than the corresponding ordered Bell number.
Problems in many areas can be formulated using weak orderings, with solutions counted using ordered Bell numbers. One study considered combination locks with a numeric keypad, in which several keys may be pressed simultaneously and a combination consists of a sequence of keypresses that includes each key exactly once. As its authors show, the number of different combinations in such a system is given by the ordered Bell numbers. In seru, a Japanese technique for balancing assembly lines, cross-trained workers are allocated to groups of workers at different stages of a production line. The number of alternative assignments for a given number of workers, taking into account the choices of how many stages to use and how to assign workers to each stage, is an ordered Bell number. As another example, in the computer simulation of origami, the ordered Bell numbers give the number of orderings in which the creases of a crease pattern can be folded, allowing sets of creases to be folded simultaneously.
In number theory, an ordered multiplicative partition of a positive integer is a representation of the number as a product of one or more of its divisors. For instance, 30 has 13 multiplicative partitions, as a product of one divisor (30 itself), two divisors (for instance 6 × 5), or three divisors (2 × 3 × 5, etc.). An integer is squarefree when it is a product of distinct prime numbers; 30 is squarefree, but 20 is not, because its prime factorization repeats the prime 2. For squarefree numbers with k prime factors, an ordered multiplicative partition can be described by a weak ordering on its prime factors, describing which prime appears in which term of the partition. Thus, the number of ordered multiplicative partitions is given by a(k). On the other hand, for a prime power with exponent m, an ordered multiplicative partition is a product of powers of the same prime number, with exponents summing to m, and this ordered sum of exponents is a composition of m. Thus, in this case, there are 2^{m−1} ordered multiplicative partitions. Numbers that are neither squarefree nor prime powers have a number of ordered multiplicative partitions that (as a function of the number of prime factors) is between these two extreme cases.
A parking function, in mathematics, is a finite sequence of positive integers with the property that, for every k up to the sequence length, the sequence contains at least k values that are at most k. A sequence of this type, of length n, describes the following process: a sequence of n cars arrives on a street with n parking spots. Each car has a preferred parking spot, given by its value in the sequence. When a car arrives on the street, it parks in its preferred spot, or, if that is full, in the next available spot. A sequence of preferences forms a parking function if and only if each car can find a parking spot on or after its preferred spot. The number of parking functions of length n is exactly (n + 1)^{n−1}. For a restricted class of parking functions, in which each car parks either on its preferred spot or on the next spot, the number of parking functions is given by the ordered Bell numbers. Each restricted parking function corresponds to a weak ordering in which the cars that get their preferred spot are ordered by these spots, and each remaining car is tied with the car in its preferred spot. The permutations, counted by the factorials, are parking functions for which each car parks on its preferred spot. This application also provides a combinatorial proof for upper and lower bounds on the ordered Bell numbers of a simple form, n! \le a(n) \le (n+1)^{n-1}.
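A brute-force check of these counts for small n (exponential-time, purely illustrative; the restricted class is encoded as "preferred spot or the next one") looks like this:

```python
from itertools import product

def is_parking_function(seq):
    """Every k has at least k entries that are <= k."""
    n = len(seq)
    return all(sum(1 for v in seq if v <= k) >= k for k in range(1, n + 1))

def is_restricted(seq):
    """Each car parks on its preferred spot or the next one over."""
    n, taken = len(seq), set()
    for pref in seq:
        spot = pref if pref not in taken else pref + 1
        if spot in taken or spot > n:
            return False
        taken.add(spot)
    return True

n = 3
seqs = list(product(range(1, n + 1), repeat=n))
print(sum(map(is_parking_function, seqs)))                              # (n+1)^(n-1) = 16
print(sum(is_parking_function(s) and is_restricted(s) for s in seqs))   # a(3) = 13
```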
The ordered Bell number a(n) counts the number of faces in the Coxeter complex associated with a Coxeter group of type A_{n−1}. Here, a Coxeter group can be thought of as a finite system of reflection symmetries, closed under repeated reflections, whose mirrors partition a Euclidean space into the cells of the Coxeter complex. For instance, a(3) = 13 corresponds to A_2, the system of reflections of the Euclidean plane across three lines that meet at the origin at 60° angles. The complex formed by these three lines has 13 faces: the origin, six rays from the origin, and six regions between pairs of rays.
Kemeny uses the ordered Bell numbers to analyze n-ary relations, mathematical statements that might be true of some choices of the arguments to the relation and false for others. He defines the "complexity" of a relation to mean the number of other relations one can derive from the given one by permuting and repeating its arguments. For instance, for n = 2, a relation on two arguments x and y might take the form R(x, y). By Kemeny's analysis, it has three derived relations. These are the given relation R(x, y), the converse relation R(y, x) obtained by swapping the arguments, and the unary relation R(x, x) obtained by repeating an argument. (Repeating the other argument produces the same relation.)
Ellison and Klein apply these numbers to optimality theory in linguistics. In this theory, grammars for natural languages are constructed by ranking certain constraints, and (in a phenomenon called factorial typology) the number of different grammars that can be formed in this way is limited to the number of permutations of the constraints. A paper reviewed by Ellison and Klein suggested an extension of this linguistic model in which ties between constraints are allowed, so that the ranking of constraints becomes a weak order rather than a total order. As they point out, the much larger magnitude of the ordered Bell numbers, relative to the corresponding factorials, allows this theory to generate a much richer set of grammars.
Other
If a fair coin (with equal probability of heads or tails) is flipped repeatedly until the first time the result is heads, the number of tails follows a geometric distribution. The moments of this distribution are the ordered Bell numbers.
Although the ordinary generating function of the ordered Bell numbers fails to converge, it describes a power series that, after a suitable substitution and scaling, provides an asymptotic expansion for the resistance distance of opposite vertices of an n-dimensional hypercube graph. Truncating this series to a bounded number of terms and then applying the result for unbounded values of n approximates the resistance to arbitrarily high order.
In the algebra of noncommutative rings, an analogous construction to the (commutative) quasisymmetric functions produces a graded algebra WQSym whose dimensions in each grade are given by the ordered Bell numbers.
In spam filtering, the problem of assigning weights to sequences of words with the property that the weight of any sequence exceeds the sum of weights of all its subsequences can be solved by using weight f(n) for a sequence of n words, where f(n) is obtained from the recurrence equation
f(n) = 1 + \sum_{i=1}^{n-1} \binom{n}{i} f(n-i)
with base case f(1) = 1. This recurrence differs from the one given earlier for the ordered Bell numbers, in two respects: omitting the term for i = n from the sum (because only nonempty sequences are considered), and adding one separately from the sum (to make the result exceed, rather than equalling, the sum). These differences have offsetting effects, and the resulting weights are the ordered Bell numbers.
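A few lines of Python (helper name arbitrary) confirm that the weights produced by this recurrence coincide with the ordered Bell numbers:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def subsequence_weight(n):
    """f(n) = 1 + sum_{i=1}^{n-1} C(n, i) * f(n - i), so f(1) = 1."""
    return 1 + sum(comb(n, i) * subsequence_weight(n - i) for i in range(1, n))

print([subsequence_weight(n) for n in range(1, 7)])  # [1, 3, 13, 75, 541, 4683]
```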
References
Integer sequences
Enumerative combinatorics | Ordered Bell number | [
"Mathematics"
] | 3,830 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Enumerative combinatorics",
"Combinatorics",
"Numbers",
"Number theory"
] |
6,992,805 | https://en.wikipedia.org/wiki/NTEN | NTEN is an international nonprofit organization based in the United States. Founded in 2000, NTEN offers training and certificate programs for nonprofit staff learning about the equitable use of technology. Their CEO Amy Sample Ward was on the NonProfit Times Top 50 Influencers list every year from 2015 through 2020. The organization was named "the best small non-profit to work for" in Oregon by the magazine Oregon Business in 2019.
Overview
NTEN was originally incorporated as The Nonprofit Technology Enterprise Network and is now known solely as NTEN. Described in the Nonprofit Quarterly as "One of the more recent and potentially powerful networks evolving in the nonprofit technology-assistance scene", the association provides forums for people involved in nonprofit technology, acts as a conduit for connecting journalists and researchers with nonprofit technology practitioners, makes technology information and resources available to organizations, and undertakes research for the sector. It runs the annual Nonprofit Technology Conference (NTC) in the USA as well as co-hosting other events. In 2012, the company created Tech Accelerate, a tool for making assessment about technology use and policies designed for nonprofit staff to assist them with decisions, planning, and investments.
Research and publications
NTEN publishes research reports about issues affecting nonprofits. In 2017 they published their tenth annual Nonprofit Technology Staffing and Investments Report looking at where nonprofits spend their IT money, determining that they were spending more money on software than hardware and that over half provided funding for technology-specific training. In 2018 NTEN surveyed 250 nonprofits and published a report, State of Nonprofit Cybersecurity, concluding that while many organizations maintain adequate data backups, many have no policies on cyberattacks, and "only 40 percent said they provide regular cybersecurity training for staff." Other publications include:
Equity Guide for Nonprofit Technology
2022 Data Empowerment Report
Nonprofits and Artificial Intelligence
Cybersecurity for Nonprofits
Digital Inclusion
In 2015 NTEN partnered with Google Fiber to create a Digital Inclusion Fellowship program sponsoring 68 fellows across the US working with local organizations to assist in launching or expanding digital literacy programs. In 2019 the company began requiring salary information in postings on its job board citing equity reasons.
In 2020 the organization pulled their advertising from Facebook in response to their content moderation and data privacy policies. In January 2021, NTEN supported the impeachment and removal of President Trump. In October 2022, citing an increase in hate speech, the company stopped buying ads and sharply reduced its use of the Twitter platform.
See also
Nonprofit technology
Circuit rider (Technology)
References
External links
Non-profit technology
Non-profit organizations based in Oregon
Organizations established in 2000
2000 establishments in Oregon | NTEN | [
"Technology"
] | 532 | [
"Information technology",
"Non-profit technology"
] |
6,993,166 | https://en.wikipedia.org/wiki/Quentin%20Meillassoux | Quentin Meillassoux (; ; born
26 October 1967) is a French philosopher. He teaches at the Université Paris 1 Panthéon-Sorbonne.
Biography
Quentin Meillassoux is the son of the anthropologist Claude Meillassoux. He is a former student of the philosophers Bernard Bourgeois and Alain Badiou. He is married to the novelist and philosopher Gwenaëlle Aubry.
Philosophical work
Meillassoux's first book is After Finitude (Après la finitude, 2006). Alain Badiou, Meillassoux's former teacher, wrote the foreword. Badiou describes the work as introducing a new possibility for philosophy which is different from Immanuel Kant's three alternatives of criticism, skepticism, and dogmatism. The book was translated into English by Ray Brassier. Meillassoux is associated with the speculative realism movement.
In this book, Meillassoux argues that post-Kantian philosophy is dominated by what he calls "correlationism", the theory that humans cannot exist without the world nor the world without humans. In Meillassoux's view, this theory allows philosophy to avoid the problem of how to describe the world as it really is independent of human knowledge. He terms this reality independent of human knowledge as the "ancestral" realm. Following the commitment to mathematics of his mentor Alain Badiou, Meillassoux claims that mathematics describes the primary qualities of things as opposed to their secondary qualities shown by perception.
Meillassoux argues that in place of the agnostic scepticism about the reality of cause and effect, there should be a radical certainty that there is no causality at all. Following the rejection of causality, Meillassoux says that it is absolutely necessary that the laws of nature be contingent. The world is a kind of hyper-chaos in which the principle of sufficient reason is not necessary although Meillassoux says that the principle of non-contradiction is necessary.
For these reasons, Meillassoux rejects Kant's Copernican Revolution in philosophy. Since Kant makes the world dependent on the conditions by which humans observe it, Meillassoux accuses Kant of a "Ptolemaic Counter-Revolution." Meillassoux clarified and revised some of the views published in After Finitude during his lectures at the Free University of Berlin in 2012.
Several of Meillassoux's articles have appeared in English via the British philosophical journal Collapse, helping to spark interest in his work in the Anglophone world.
His unpublished dissertation L'inexistence divine (1997) is noted in After Finitude to be "forthcoming" in book form; as of 2021, it had not yet been published. In Parrhesia, in 2016, an excerpt from Meillassoux's dissertation was translated by Nathan Brown, who noted in his introduction that "what is striking about the document... is the marked difference of its rhetorical strategies, its order of reasons, and its philosophical style" from After Finitude, counter to the general view that the latter merely constituted "a partial précis" of L'inexistence divine; he notes further that the dissertation presents a "very different articulation of the Principle of Factiality" from that in After Finitude.
While Nathan Brown's translation uses the French text of the 1997 dissertation, in 2011 Graham Harman used a 2003 revision to offer a partial translation of Meillassoux's ongoing work of expanding the dissertation into a book.
In September 2011, Meillassoux's book on Stéphane Mallarmé was published in France under the title Le nombre et la sirène. Un déchiffrage du coup de dés de Mallarmé. In this second book, he offers a detailed reading of Mallarmé's famous poem "Un coup de dés jamais n'abolira le hasard" ("A Throw of the Dice Will Never Abolish Chance"), in which he finds a numerical code at work in the text.
Bibliography
Books
After Finitude: An Essay on the Necessity of Contingency, trans. Ray Brassier (Continuum, 2008). ISBN 978-2-02109-215-8
The Number and the Siren: A Decipherment of Mallarme's Coup De Des (Urbanomic, 2012). ISBN 978-0-98321-692-6
Time Without Becoming, edited by Anna Longo (Mimesis International, 2014). ISBN 978-8-85752-386-6
Science Fiction and Extro-Science Fiction, trans. Alyosha Edlebi (Univocal, 2015). ISBN 978-1-937561-48-2
Articles
"Potentiality and Virtuality," in Collapse: Philosophical Research and Investigations, Volume II (Speculative Realism), ed. Robin Mackay (Urbanomic, 2007): 55–81.
"Subtraction and Contraction: Deleuze, Immanence and Matter and Memory," in Collapse: Philosophical Research and Investigations, Volume III (Unknown Deleuze [+Speculative Realism]), ed. Robin Mackay (Urbanomic, 2007): 63–107.
"Presentation by Quentin Meillassoux," in Collapse: Philosophical Research and Investigations, Volume III (Unknown Deleuze [+Speculative Realism]), ed. Robin Mackay (Urbanomic, 2007): 408–449.
"Spectral Dilemma," in Collapse: Philosophical Research and Investigations, Volume IV (Concept Horror), ed. Robin Mackay (Urbanomic, 2008): 261–275.
"The Immanence of the World Beyond," in Grandeur of Reason: Religion, Tradition and Universalism, ed. Peter M. Chandler Jr. and Connor Cunningham, trans. Peter M. Chandler Jr., Adrian Pabst, and Aaron Riches (SCM Press, 2010): 444–478.
(with Florian Hecker and Robin Mackay) "Speculative Solution: Quentin Meillassoux and Florian Hecker Talk Hyperchaos," on Urbanomic, published 2010.
(with Florian Hecker, Robin Mackay, and Elie Ayache) "Metaphysics and Extro-Science Fiction," in Speculative Solution, ed. and trans. Robin Mackay (Urbanomic, 2010).
"Metaphysics, Speculation, Correlation," trans. Taylor Adkins, Pli: The Warwick Journal of Philosophy Vol. 22 (2011): 3–25.
"History and Event in Alain Badiou," trans. Thomas Nail, Parrhesia Vol. 12 (2011): 1–11.
"The Contingency of the Laws of Nature," trans. Robin Mackay, Environment and Planning D: Society and Space Vol. 30, No. 2 (2012): 322–334.
"Badiou and Mallarmé: The Event and the Perhaps," trans. Alyosha Edlebi, Parrhesia Vol. 16 (2013): 35–47.
"The Materialist Divinization of the Hypothesis," in Collapse: Philosophical Research and Investigations, Volume VIII (Casino Real), ed. Robin Mackay (Urbanomic, 2014): 813–846.
"Decision and Undecidability of the Event in Being and Event I and II," trans. Alyosha Edlebi, Parrhesia Vol. 19 (2014): 22–35.
"Excerpts from L'inexistence divine," in Graham Harman, Quentin Meillassoux: Philosophy in the Making (2nd Edition), trans, Graham Harman (Edinburgh University Press, 2015): 224–287.
"Iteration, Reiteration, Repetition: A Speculative Analysis of the Sign Devoid of Meaning," in Genealogies of Speculation: Materialism and Subjectivity Since Structuralism, ed. Armen Avanessian and Suhail Malik, trans, Robin Mackay and Moritz Gansen (Bloomsbury, 2016): 117–197.
"From L'inexistence divine," trans. Nathan Brown, Parrhesia Vol. 25 (2016): 20–40.
Interviews
(with Rick Dolphijn and Iris van der Tuin) "Interview with Quentin Meillassoux," in New Materialism: Interviews & Cartographies, ed. Rick Dolphijn and Iris van der Tuin, trans. Marie-Pier Boucher (Open Humanities Press, 2012): 71–81.
(with Sinziana Ravini) "'Archeology of the Future': Interview with Quentin Meillassoux," Palatten Vol. 1/2 (2013): 86–97.
(with Graham Harman) "Interview with Quentin Meillassoux (August 2010)," Graham Harman, Quentin Meillassoux: Philosophy in the Making (2nd Edition), trans, Graham Harman (Edinburgh University Press, 2015): 208–223.
(with Kağan Kahveci and Sercan Çalci) "Founded on Nothing: An Interview with Quentin Meillassoux," trans. Robin Mackay on Urbanomic, published 2021.
See also
New materialisms
Notes
Further reading
Pierre-Alexandre Fradet, « Sortir du cercle corrélationnel : un examen critique de la tentative de Quentin Meillassoux », Cahiers Critiques de philosophie, num. 19, dec. 2017, p. 103-119, online : https://www.academia.edu/34706673/_Sortir_du_cercle_corr%C3%A9lationnel_un_examen_critique_de_la_tentative_de_Quentin_Meillassoux_publi%C3%A9_dans_le_dossier_Le_r%C3%A9alisme_sp%C3%A9culation_probl%C3%A8mes_et_enjeux_coordonn%C3%A9_par_A._Longo_Cahiers_Critiques_de_philosophie_no_19_d%C3%A9cembre_2017_p._103-119
Pierre-Alexandre Fradet and Tristan Garcia (eds.), issue "Réalisme spéculatif", in Spirale, no 255, winter 2016—introduction here : "https://www.academia.edu/20381265/With_Tristan_Garcia_Petit_panorama_du_réalisme_spéculatif_in_Spirale_num._255_winter_2016_p._27-30_online_http_magazine-spirale.com_dossier-magazine_petit-panorama-du-realisme-speculatif
Olivier Ducharme et Pierre-Alexandre Fradet, Une vie sans bon sens. Regard philosophique sur Pierre Perrault (dialogue between Perrault, Nietzsche, Henry, Bourdieu, Meillassoux), foreword by Jean-Daniel Lafond, Montréal, Éditions Nota bene, coll. "Philosophie continentale", 2016, 210 p.
Harman, Graham. Quentin Meillassoux: Philosophy in the Making. Edinburgh: Edinburgh University Press, 2011.
Watkin, Christopher. Difficult Atheism: Post-Theological Thinking in Alain Badiou, Jean-Luc Nancy and Quentin Meillassoux. Edinburgh: Edinburgh University Press, paperback: March 2013; hardback: 2011.
Ennis, Paul. Continental Realism. Winchester: Zero Books, 2011.
Edouard Simca, "Recension: Q. Meillassoux, Après la finitude: Essai sur la nécessité de la contingence, Paris, Seuil, 2006"
Michel Bitbol. Maintenant la finitude: Peut-on penser l'absolu?. Paris, Flammarion, 2019.
External links
« Deuil à venir, dieu à venir », Critique, janvier-février 2006, no 704-705 (revised edition, Éditions Ionas, 2016).
« Potentialité et virtualité », Failles 2, Printemps 2006 (revised edition, Éditions Ionas, 2016).
Recording of Meillassoux's 2007 lecture in English at the Speculative Realism Conference at Goldsmiths, University of London
Conferences by Meillassoux (in French)
Speculative Heresy blog resources page, which contains articles by Meillassoux
1967 births
21st-century French essayists
21st-century French male writers
21st-century French philosophers
Action theorists
Deleuze scholars
École Normale Supérieure alumni
Academic staff of the École Normale Supérieure
French epistemologists
Existentialists
Academic staff of the Free University of Berlin
French logicians
French male essayists
French male non-fiction writers
Hermeneutists
Kant scholars
Living people
Lycée Louis-le-Grand alumni
Materialists
Metaphysical realism
Ontologists
Phenomenologists
Philosophers of logic
Philosophers of mathematics
Philosophers of time
Philosophical realism
French philosophy academics
Philosophy writers
Syntheism | Quentin Meillassoux | [
"Physics",
"Mathematics"
] | 2,735 | [
"Philosophers of mathematics",
"Materialism",
"Matter",
"Materialists"
] |
6,993,227 | https://en.wikipedia.org/wiki/Analog%20delay%20line | An analog delay line is a network of electrical components connected in cascade, where each individual element creates a time difference between its input and output. It operates on analog signals whose amplitude varies continuously. In the case of a periodic signal, the time difference can be described in terms of a change in the phase of the signal. One example of an analog delay line is a bucket-brigade device.
Other types of delay line include acoustic (usually ultrasonic), magnetostrictive, and surface acoustic wave devices. A series of resistor–capacitor circuits (RC circuits) can be cascaded to form a delay. A long transmission line can also provide a delay element. The delay time of an analog delay line may be only a few nanoseconds or several milliseconds, limited by the practical size of the physical medium used to delay the signal and the propagation speed of impulses in the medium.
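The delay obtainable from a lumped inductor–capacitor ladder (the kind discussed in the History section below) can be estimated with the standard low-pass ladder approximations: per-section delay of roughly √(LC) and a cutoff frequency of roughly 1/(π√(LC)). The sketch below is illustrative only; the component values and section count are assumptions, not taken from any historical design.

```python
import math

def lc_ladder_delay(n_sections, L_henry, C_farad):
    """Estimate total delay and cutoff frequency of an n-section LC ladder.

    Standard low-pass ladder approximations:
      per-section delay ~ sqrt(L*C)
      cutoff frequency  ~ 1 / (pi * sqrt(L*C))
    """
    per_section = math.sqrt(L_henry * C_farad)
    total_delay = n_sections * per_section
    cutoff_hz = 1.0 / (math.pi * per_section)
    return total_delay, cutoff_hz

# Assumed example values: 50 sections of 10 uH and 100 pF.
delay, fc = lc_ladder_delay(50, 10e-6, 100e-12)
print(f"total delay ~ {delay * 1e6:.2f} us, cutoff ~ {fc / 1e6:.1f} MHz")
```

With these assumed values the line delays the signal by about 1.6 microseconds while passing frequencies up to roughly 10 MHz, which illustrates the trade-off between delay length and bandwidth in lumped designs.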
Analog delay lines are applied in many types of signal processing circuits; for example, the PAL television standard uses an analog delay line to store an entire video scanline. Acoustic and electromechanical delay lines are used to provide a "reverberation" effect in musical instrument amplifiers, or to simulate an echo. High-speed oscilloscopes used an analog delay line to allow observation of waveforms just before some triggering event. Radar systems used liquid delay lines to compare one radio pulse to another, and after World War II these were used as computer memory systems.
With the growing use of digital signal processing techniques, digital forms of delay are practical and eliminate some of the problems with dissipation and noise in analog systems.
History
Inductor–capacitor ladder networks were used as analog delay lines in the 1920s; for example, Francis Hubbard's sonar direction finder patent, filed in 1921, used such a network, which Hubbard referred to as an artificial transmission line. In 1941, Gerald Tawney of Sperry Gyroscope Company filed for a patent on a compact packaging of an inductor–capacitor ladder network that he explicitly referred to as a time delay line.
In 1924, Robert Mathes of Bell Telephone Laboratories filed a broad patent covering essentially all electromechanical delay lines, but focusing on acoustic delay lines where an air column confined to a pipe served as the mechanical medium, and a telephone receiver at one end and a telephone transmitter at the other end served as the electromechanical transducers. Mathes was motivated by the problem of echo suppression on long-distance telephone lines, and his patent clearly explained the fundamental relationship between inductor–capacitor ladder networks and mechanical elastic delay lines such as his acoustic line.
In 1938, William Spencer Percival of Electrical & Musical Industries (later EMI) applied for a patent on an acoustical delay line using piezoelectric transducers and a liquid medium. He used water or kerosene, with a 10 MHz carrier frequency, with multiple baffles and reflectors in the delay tank to create a long acoustic path in a relatively small tank.
In 1939, Laurens Hammond applied electromechanical delay lines to the problem of creating artificial reverberation for his Hammond organ. Hammond used coil springs to transmit mechanical waves between voice-coil transducers.
The problem of suppressing multipath interference in television reception motivated Clarence Hansell of RCA to use delay lines in his 1939 patent application. He used "delay cables" for this, relatively short pieces of coaxial cable used as delay lines, but he recognized the possibility of using magnetostrictive or piezoelectric delay lines.
By 1943, compact delay lines with distributed capacitance and inductance were devised. Typical early designs involved winding an enamel insulated wire on an insulating core and then surrounding that with a grounded conductive jacket. Richard Nelson of General Electric filed a patent for such a line that year. Other GE employees, John Rubel and Roy Troell, concluded that the insulated wire could be wound around a conducting core to achieve the same effect. Much of the development of delay lines during World War II was motivated by the problems encountered in radar systems.
In 1944, Madison G. Nicholson applied for a general patent on magnetostrictive delay lines. He recommended their use for applications requiring delays or measurement of intervals in the 10 to 1000 microseconds time range.
In 1945, Gordon D. Forbes and Herbert Shapiro filed a patent for the mercury delay line with piezoelectric transducers. This delay line technology would play an important role, serving as the basis of the delay-line memory used in several first-generation computers.
In 1946, David Arenberg filed patents covering the use of piezoelectric transducers attached to single crystal solid delay lines. He tried using quartz as a delay medium and reported that anisotropy in the quartz crystals caused problems. He reported success with single crystals of lithium bromide, sodium chloride and aluminum. Arenberg developed the idea of complex 2- and 3-dimensional folding of the acoustic path in the solid medium in order to package long delays into a compact crystal. The delay lines used to decode PAL television signals follow the outline of this patent, using quartz glass as a medium instead of a single crystal.
See also
Digital delay line
Delay-line memory
Propagation delay
Charge-coupled device
References
Telecommunications engineering
Analog circuits | Analog delay line | [
"Engineering"
] | 1,097 | [
"Electrical engineering",
"Electronic engineering",
"Telecommunications engineering",
"Analog circuits"
] |
6,993,427 | https://en.wikipedia.org/wiki/Surficial%20aquifer | Surficial aquifers are shallow aquifers typically less than thick, but larger surficial aquifers of about have been mapped. They mostly consist of unconsolidated sand enclosed by layers of limestone, sandstone or clay, and the water is commonly extracted for urban use. The aquifers are replenished by streams and from precipitation and can vary in volume considerably as the water table fluctuates. Being shallow, they are susceptible to contamination by fuel spills, industrial discharge, landfills, and saltwater. Parts of the southeastern United States are dependent on surficial aquifers for their water supplies.
Composition
The surficial aquifer system consists mostly of beds of unconsolidated sand, cavity-riddled limestone and shells, sandstone, sand, and clayey sand, with minor clay or silt, dating from the Pliocene to the Holocene.
In most cases the flow system is undivided, though in places clay beds are sufficiently thick and continuous to divide the system into two or three aquifers. Complex interbedding of fine- and coarse-textured rocks is typical of the system. These rocks range in age from the late Miocene to the Holocene.
In Georgia and South Carolina, unnamed, sandy, marine terrace deposits of Pleistocene age and sand of Holocene age comprise the system. These sandy beds commonly contain clay and silt.
In Florida, these aquifers are shallow beds of sea shells and sand that lie less than underground. They are separated from the Floridan Aquifer by a confining bed of soil. Some have been contaminated by saltwater, yet they provide most of the public freshwater supply southwest of Lake Okeechobee and along the Atlantic coast north of Palm Beach.
In surficial aquifers, the groundwater continuously moves along the hydraulic gradient from areas of recharge to streams and other places of discharge. Surficial aquifers are recharged locally as the water table fluctuates in response to drought or rainfall. Therefore, the temperature and flow from water-table springs varies.
Important surficial aquifers
The Biscayne Aquifer is a surficial aquifer located in southeast Florida. It covers over , and is the most intensely used water source in Florida, supplying water to Dade County, Broward County, Palm Beach County and Monroe County. The aquifer lies close to the surface and is extremely vulnerable to pollutants that leach through the shallow limestone bedrock. In some areas, it has been contaminated by fuel spills, industrial discharge, landfills, and saltwater.
The Sand and Gravel Aquifer stretches across the panhandle of Florida and is replenished with rainfall. Over the years water levels have dropped due to water-well use by pumping, and it has been contaminated by industrial waste and saltwater intrusion.
The Chokoloskee Aquifer is a surficial aquifer in southwest Florida. It is recharged by rainfall. It is believed that artificial drainage canals have lowered water levels and increased saltwater intrusion.
Sources
University of Florida
Aquifers | Surficial aquifer | [
"Environmental_science"
] | 639 | [
"Hydrology",
"Aquifers"
] |
6,993,878 | https://en.wikipedia.org/wiki/Transporter%20associated%20with%20antigen%20processing | Transporter associated with antigen processing (TAP) protein complex belongs to the ATP-binding-cassette transporter family. It delivers cytosolic peptides into the endoplasmic reticulum (ER), where they bind to nascent MHC class I molecules.
The TAP structure is formed of two proteins: TAP-1 and TAP-2, which have one hydrophobic region and one ATP-binding region each. They assemble into a heterodimer, which results in a four-domain transporter.
Function
The TAP transporter is found in the ER lumen associated with the peptide-loading complex (PLC). This complex of β2 microglobulin, calreticulin, ERp57, TAP, tapasin, and MHC class I acts to keep hold of MHC molecules until they have been fully loaded with peptides.
Peptide transport
TAP-mediated peptide transport is a multistep process. The peptide-binding pocket is formed by TAP-1 and TAP-2. Association with TAP is an ATP-independent event, ‘in a fast bimolecular association step, peptide binds to TAP, followed by a slow isomerisation of the TAP complex’. It is suggested that the conformational change in structure triggers ATP hydrolysis and so initiates peptide transport.
Both nucleotide-binding domains (NBDs) are required for peptide translocation, as each NBD cannot hydrolyse ATP alone. The exact mechanism of transport is not known; however, findings indicate that ATP binding to TAP-1 is the initial step in the transport process, and that ATP bound to TAP-1 induces ATP binding in TAP-2. It has also been shown that undocking of the loaded MHC class I is linked to the transport cycle of TAP caused by signals from the TAP-1 subunit.
Transport of mRNA out of the nucleus
Yeast protein Mex67p and human NXF1, also called TAP, are the two best-characterized NXFs (nuclear transport factors). TAPs mediate the interaction of the messenger ribonucleoprotein particle (mRNP) and the nuclear pore complex (NPC). NXFs bear no resemblance to prototypical nuclear transport receptors of the importin–exportin (karyopherin) family and lack the characteristic Ran-binding domain found in all karyopherins.
Specificity
The ATPase activity of TAP is highly dependent on the presence of the correct substrate, and peptide binding is prerequisite for ATP hydrolysis. This prevents waste of ATP via peptide-independent hydrolysis.
The specificity of TAP proteins was first investigated by trapping peptides in the ER using glycosylation. TAP binds to 8- to 16-residue peptides with equal affinity, while translocation is most efficient for peptides that are 8 to 12 residues long. Efficiency reduces for peptides longer than 12 residues. However, peptides with more than 40 residues were translocated, albeit with low efficiency. Peptides with low affinity for the MHC class I molecule are transported out of the ER by an efficient ATP-dependent export protein. Together, these mechanisms may ensure that only high-affinity peptides are bound to MHC class I.
See also
Endoplasmic Reticulum
MHC Class I
Immune System
References
External links
ATP-binding cassette transporters
Immune system | Transporter associated with antigen processing | [
"Biology"
] | 701 | [
"Immune system",
"Organ systems"
] |
6,993,953 | https://en.wikipedia.org/wiki/Immersion%20%28mathematics%29 | In mathematics, an immersion is a differentiable function between differentiable manifolds whose differential pushforward is everywhere injective. Explicitly, f : M → N is an immersion if
D_p f : T_p M → T_{f(p)} N is an injective function at every point p of M (where T_p X denotes the tangent space of a manifold X at a point p in X, and D_p f is the derivative (pushforward) of the map f at the point p). Equivalently, f is an immersion if its derivative has constant rank equal to the dimension of M: rank(D_p f) = dim M.
The function f itself need not be injective, only its derivative must be.
Vs. embedding
A related concept is that of an embedding. A smooth embedding is an injective immersion f : M → N that is also a topological embedding, so that M is diffeomorphic to its image in N. An immersion is precisely a local embedding – that is, for any point x in M there is a neighbourhood U of x in M such that f restricted to U is an embedding, and conversely a local embedding is an immersion. For infinite dimensional manifolds, this is sometimes taken to be the definition of an immersion.
If M is compact, an injective immersion is an embedding, but if M is not compact then injective immersions need not be embeddings; compare to continuous bijections versus homeomorphisms.
Regular homotopy
A regular homotopy between two immersions f and g from a manifold M to a manifold N is defined to be a differentiable function H : M × [0, 1] → N such that for all t in [0, 1] the function H_t : M → N defined by H_t(x) = H(x, t) for all x in M is an immersion, with H_0 = f and H_1 = g. A regular homotopy is thus a homotopy through immersions.
Classification
Hassler Whitney initiated the systematic study of immersions and regular homotopies in the 1940s, proving that for every map of an -dimensional manifold to an -dimensional manifold is homotopic to an immersion, and in fact to an embedding for ; these are the Whitney immersion theorem and Whitney embedding theorem.
Stephen Smale expressed the regular homotopy classes of immersions as the homotopy groups of a certain Stiefel manifold. The sphere eversion was a particularly striking consequence.
Morris Hirsch generalized Smale's expression to a homotopy theory description of the regular homotopy classes of immersions of any -dimensional manifold in any -dimensional manifold .
The Hirsch-Smale classification of immersions was generalized by Mikhail Gromov.
Existence
The primary obstruction to the existence of an immersion is the stable normal bundle of , as detected by its characteristic classes, notably its Stiefel–Whitney classes. That is, since is parallelizable, the pullback of its tangent bundle to is trivial; since this pullback is the direct sum of the (intrinsically defined) tangent bundle on , , which has dimension , and of the normal bundle of the immersion , which has dimension , for there to be a codimension immersion of , there must be a vector bundle of dimension , , standing in for the normal bundle , such that is trivial. Conversely, given such a bundle, an immersion of with this normal bundle is equivalent to a codimension 0 immersion of the total space of this bundle, which is an open manifold.
The stable normal bundle is the class of normal bundles plus trivial bundles, and thus if the stable normal bundle has cohomological dimension , it cannot come from an (unstable) normal bundle of dimension less than . Thus, the cohomology dimension of the stable normal bundle, as detected by its highest non-vanishing characteristic class, is an obstruction to immersions.
Since characteristic classes multiply under direct sum of vector bundles, this obstruction can be stated intrinsically in terms of the space and its tangent bundle and cohomology algebra. This obstruction was stated (in terms of the tangent bundle, not stable normal bundle) by Whitney.
For example, the Möbius strip has non-trivial tangent bundle, so it cannot immerse in codimension 0 (in ), though it embeds in codimension 1 (in ).
showed that these characteristic classes (the Stiefel–Whitney classes of the stable normal bundle) vanish above degree , where is the number of "1" digits when is written in binary; this bound is sharp, as realized by real projective space. This gave evidence to the immersion conjecture, namely that every -manifold could be immersed in codimension , i.e., in This conjecture was proven by .
Codimension 0
Codimension 0 immersions are equivalently relative dimension 0 submersions, and are better thought of as submersions. A codimension 0 immersion of a closed manifold is precisely a covering map, i.e., a fiber bundle with 0-dimensional (discrete) fiber. By Ehresmann's theorem and Phillips' theorem on submersions, a proper submersion of manifolds is a fiber bundle, hence codimension/relative dimension 0 immersions/submersions behave like submersions.
Further, codimension 0 immersions do not behave like other immersions, which are largely determined by the stable normal bundle: in codimension 0 one has issues of fundamental class and cover spaces. For instance, there is no codimension 0 immersion despite the circle being parallelizable, which can be proven because the line has no fundamental class, so one does not get the required map on top cohomology. Alternatively, this is by invariance of domain. Similarly, although and the 3-torus are both parallelizable, there is no immersion – any such cover would have to be ramified at some points, since the sphere is simply connected.
Another way of understanding this is that a codimension immersion of a manifold corresponds to a codimension 0 immersion of a -dimensional vector bundle, which is an open manifold if the codimension is greater than 0, but to a closed manifold in codimension 0 (if the original manifold is closed).
Multiple points
A -tuple point (double, triple, etc.) of an immersion is an unordered set of distinct points with the same image . If is an -dimensional manifold and is an n-dimensional manifold then for an immersion in general position the set of -tuple points is an -dimensional manifold. Every embedding is an immersion without multiple points (where ). Note, however, that the converse is false: there are injective immersions that are not embeddings.
The nature of the multiple points classifies immersions; for example, immersions of a circle in the plane are classified up to regular homotopy by the number of double points.
At a key point in surgery theory it is necessary to decide if an immersion of an -sphere in a -dimensional manifold is regular homotopic to an embedding, in which case it can be killed by surgery. Wall associated to an invariant in a quotient of the fundamental group ring which counts the double points of in the universal cover of . For , is regular homotopic to an embedding if and only if by the Whitney trick.
One can study embeddings as "immersions without multiple points", since immersions are easier to classify. Thus, one can start from immersions and try to eliminate multiple points, seeing if one can do this without introducing other singularities – studying "multiple disjunctions". This was first done by André Haefliger, and this approach is fruitful in codimension 3 or more – from the point of view of surgery theory, this is "high (co)dimension", unlike codimension 2 which is the knotting dimension, as in knot theory. It is studied categorically via the "calculus of functors" by Thomas Goodwillie , John Klein, and Michael S. Weiss.
Examples and properties
A mathematical rose with k petals is an immersion of the circle in the plane with a single k-tuple point; k can be any odd number, but if even must be a multiple of 4, so the figure 8, with k = 2, is not a rose.
The Klein bottle, and all other non-orientable closed surfaces, can be immersed in 3-space but not embedded.
By the Whitney–Graustein theorem, the regular homotopy classes of immersions of the circle in the plane are classified by the winding number, which is also the number of double points counted algebraically (i.e. with signs).
The sphere can be turned inside out: the standard embedding is related to by a regular homotopy of immersions
Boy's surface is an immersion of the real projective plane in 3-space; thus also a 2-to-1 immersion of the sphere.
The Morin surface is an immersion of the sphere; both it and Boy's surface arise as midway models in sphere eversion.
Immersed plane curves
Immersed plane curves have a well-defined turning number, which can be defined as the total curvature divided by 2π. This is invariant under regular homotopy, by the Whitney–Graustein theorem – topologically, it is the degree of the Gauss map, or equivalently the winding number of the unit tangent (which does not vanish) about the origin. Further, this is a complete set of invariants – any two plane curves with the same turning number are regular homotopic.
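In standard notation (the symbols below are supplied for illustration; they are not taken from the article's own formulas), the turning number of an immersed closed plane curve γ with curvature κ and arc-length element ds is

```latex
\operatorname{turn}(\gamma) \;=\; \frac{1}{2\pi}\oint_{\gamma} \kappa \,\mathrm{d}s ,
```

so the Whitney–Graustein theorem can be read as saying that two immersions of the circle in the plane are regularly homotopic exactly when these integrals agree.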
Every immersed plane curve lifts to an embedded space curve via separating the intersection points, which is not true in higher dimensions. With added data (which strand is on top), immersed plane curves yield knot diagrams, which are of central interest in knot theory. While immersed plane curves, up to regular homotopy, are determined by their turning number, knots have a very rich and complex structure.
Immersed surfaces in 3-space
The study of immersed surfaces in 3-space is closely connected with the study of knotted (embedded) surfaces in 4-space, by analogy with the theory of knot diagrams (immersed plane curves (2-space) as projections of knotted curves in 3-space): given a knotted surface in 4-space, one can project it to an immersed surface in 3-space, and conversely, given an immersed surface in 3-space, one may ask if it lifts to 4-space – is it the projection of a knotted surface in 4-space? This allows one to relate questions about these objects.
A basic result, in contrast to the case of plane curves, is that not every immersed surface lifts to a knotted surface. In some cases the obstruction is 2-torsion, such as in Koschorke's example, which is an immersed surface (formed from 3 Möbius bands, with a triple point) that does not lift to a knotted surface, but it has a double cover that does lift. A detailed analysis is given in , while a more recent survey is given in .
Generalizations
A far-reaching generalization of immersion theory is the homotopy principle:
one may consider the immersion condition (the rank of the derivative is always ) as a partial differential relation (PDR), as it can be stated in terms of the partial derivatives of the function. Then Smale–Hirsch immersion theory is the result that this reduces to homotopy theory, and the homotopy principle gives general conditions and reasons for PDRs to reduce to homotopy theory.
See also
Immersed submanifold
Isometric immersion
Submersion
Notes
References
.
.
.
.
.
.
.
.
.
.
.
External links
Immersion at the Manifold Atlas
Immersion of a manifold at the Encyclopedia of Mathematics
Differential geometry
Differential topology
Maps of manifolds
Smooth functions | Immersion (mathematics) | [
"Mathematics"
] | 2,379 | [
"Topology",
"Differential topology"
] |
6,994,104 | https://en.wikipedia.org/wiki/%C3%89tang%20de%20Berre | The Étang de Berre (, "Lagoon of Berre"; in Provençal Occitan: estanh de Bèrra / mar de Bèrra according to classical orthography, estang de Berro / mar de Berro according to Mistralian orthography) is a brackish water lagoon on the Mediterranean coast of France, about north-west of Marseille.
Geography
The lagoon covers an area of . Created by the rise in water levels at the end of the Last Glacial Period (colloquially known as the last ice age), this small inland sea is composed of three parts: the principal body of water, the Étang de Vaïne to the east and the Étang de Bolmon to the south-east.
The Étang de Berre is fed with fresh water by the rivers Arc, Touloubre and Cadière and – since 1966 – by Électricité de France's . Two canals link it to the Mediterranean, the open air leading towards Port-de-Bouc and the Canal de Marseille au Rhône which leads towards L'Estaque through the Rove Tunnel; the Rove Tunnel has been closed since 1963, after a section of the tunnel collapsed.
The Marseille Provence Airport is located in the southeast portion of the Étang de Berre, with its main runway extending into the water on reclaimed land.
Administration
Ten communes border the Étang de Berre: Istres, Miramas, Saint-Chamas, Berre-l'Étang, Rognac, Vitrolles, Marignane, Châteauneuf-les-Martigues, Martigues and Saint-Mitre-les-Remparts.
History
The ancient name of the Étang de Berre was Stagnum Mastromela, according to Pliny the Elder (Book III [34]).
References
Berre
Landforms of Bouches-du-Rhône
Eutrophication | Étang de Berre | [
"Chemistry",
"Environmental_science"
] | 394 | [
"Eutrophication",
"Environmental chemistry",
"Water pollution"
] |
6,994,353 | https://en.wikipedia.org/wiki/Fractional%20factorial%20design | In statistics, fractional factorial designs are experimental designs consisting of a carefully chosen subset (fraction) of the experimental runs of a full factorial design. The subset is chosen so as to exploit the sparsity-of-effects principle to expose information about the most important features of the problem studied, while using a fraction of the effort of a full factorial design in terms of experimental runs and resources. In other words, it makes use of the fact that many experiments in full factorial design are often redundant, giving little or no new information about the system.
The design of fractional factorial experiments must be deliberate, as certain effects are confounded and cannot be separated from others.
History
Fractional factorial design was introduced by British statistician David John Finney in 1945, extending previous work by Ronald Fisher on the full factorial experiment at Rothamsted Experimental Station. Developed originally for agricultural applications, it has since been applied to other areas of engineering, science, and business.
Basic working principle
Similar to a full factorial experiment, a fractional factorial experiment investigates the effects of independent variables, known as factors, on a response variable. Each factor is investigated at different values, known as levels. The response variable is measured using a combination of factors at different levels, and each unique combination is known as a run. To reduce the number of runs in comparison to a full factorial, the experiments are designed to confound different effects and interactions, so that their impacts cannot be distinguished. Higher-order interactions between main effects are typically negligible, making this a reasonable method of studying main effects. This is the sparsity of effects principle. Confounding is controlled by a systematic selection of runs from a full-factorial table.
Notation
Fractional designs are expressed using the notation l^(k − p), where l is the number of levels of each factor, k is the number of factors, and p describes the size of the fraction of the full factorial used. Formally, p is the number of generators: relationships that determine the intentionally confounded effects that reduce the number of runs needed. Each generator halves the number of runs required. A design with p such generators is a 1/(l^p) = l^(−p) fraction of the full factorial design.
For example, a 2^(5 − 2) design is 1/4 of a two-level, five-factor factorial design. Rather than the 32 runs that would be required for the full 2^5 factorial experiment, this experiment requires only eight runs. With two generators, the number of experiments has been halved twice.
In practice, one rarely encounters l > 2 levels in fractional factorial designs as the methodology to generate such designs for more than two levels is much more cumbersome. In cases requiring 3 levels for each factor, potential fractional designs to pursue are Latin squares, mutually orthogonal Latin squares, and Taguchi methods. Response surface methodology can also be a much more experimentally efficient way to determine the relationship between the experimental response and factors at multiple levels, but it requires that the levels are continuous. In determining whether more than two levels are needed, experimenters should consider whether they expect the outcome to be nonlinear with the addition of a third level. Another consideration is the number of factors, which can significantly change the experimental labor demand.
The levels of a factor are commonly coded as +1 for the higher level, and −1 for the lower level. For a three-level factor, the intermediate value is coded as 0.
To save space, the points in a factorial experiment are often abbreviated with strings of plus and minus signs. The strings have as many symbols as factors, and their values dictate the level of each factor: conventionally, − for the first (or low) level, and + for the second (or high) level. The points in a two-level experiment with two factors can thus be represented as −−, +−, −+, and ++.
The factorial points can also be abbreviated by (1), a, b, and ab, where the presence of a letter indicates that the specified factor is at its high (or second) level and the absence of a letter indicates that the specified factor is at its low (or first) level (for example, "a" indicates that factor A is on its high setting, while all other factors are at their low (or first) setting). (1) is used to indicate that all factors are at their lowest (or first) values. Factorial points are typically arranged in a table using Yates’ standard order: 1, a, b, ab, c, ac, bc, abc, which is created when the level of the first factor alternates with each run.
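A minimal sketch of the ±1 coding and Yates' standard order just described (illustrative only; the use of factor letters a–h and the cap of eight factors are assumptions of this sketch, not part of any standard package):

```python
from itertools import product

def full_factorial_yates(n_factors):
    """Two-level full factorial in Yates' standard order with -1/+1 coding.

    The first factor alternates fastest. A run's label lists the lowercase
    letters of the factors at their high (+1) level; '(1)' means all low.
    """
    letters = "abcdefgh"[:n_factors]
    runs = []
    for levels in product([-1, +1], repeat=n_factors):
        levels = levels[::-1]  # product() varies the last element fastest, so reverse
        label = "".join(l for l, v in zip(letters, levels) if v == +1) or "(1)"
        runs.append((label, levels))
    return runs

for label, levels in full_factorial_yates(3):
    print(f"{label:>4}: {levels}")
# prints (1), a, b, ab, c, ac, bc, abc in order
```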
Generation
In practice, experimenters typically rely on statistical reference books to supply the "standard" fractional factorial designs, consisting of the principal fraction. The principal fraction is the set of treatment combinations for which the generators evaluate to + under the treatment combination algebra. However, in some situations, experimenters may take it upon themselves to generate their own fractional design.
A fractional factorial experiment is generated from a full factorial experiment by choosing an alias structure. The alias structure determines which effects are confounded with each other. For example, the five-factor 2^(5 − 2) can be generated by using a full factorial experiment involving three factors (say A, B, and C) and then choosing to confound the two remaining factors D and E with interactions generated by D = A*B and E = A*C. These two expressions are called the generators of the design. So for example, when the experiment is run and the experimenter estimates the effects for factor D, what is really being estimated is a combination of the main effect of D and the two-factor interaction involving A and B.
An important characteristic of a fractional design is the defining relation, which gives the set of interaction columns equal in the design matrix to a column of plus signs, denoted by I. For the above example, since D = AB and E = AC, then ABD and ACE are both columns of plus signs, and consequently so is BDCE:
D*D = AB*D = I
E*E = AC*E = I
I = ABD*ACE = A*A*BCDE = BCDE
In this case, the defining relation of the fractional design is I = ABD = ACE = BCDE. The defining relation allows the alias pattern of the design to be determined and includes 2^p words. Notice that in this case, the interaction effects ABD, ACE, and BCDE cannot be studied at all. As the number of generators and the degree of fractionation increases, more and more effects become confounded.
The alias pattern can then be determined through multiplying by each factor column. To determine how main effect A is confounded, multiply all terms in the defining relation by A:
A*I = A*ABD = A*ACE = A*BCDE
A = BD = CE = ABCDE
Thus main effect A is confounded with interaction effects BD, CE, and ABCDE. Other main effects can be computed following a similar method.
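Because any factor multiplied by itself gives I, the multiplication above amounts to a symmetric difference on sets of letters, so the defining relation and alias pattern can be generated mechanically. A minimal sketch for the 2^(5 − 2) design with generators D = AB and E = AC discussed above (the function names are this sketch's own):

```python
from itertools import combinations

def word(s):
    """Represent an effect such as 'ABD' as a frozenset of letters; I is the empty set."""
    return frozenset(s)

def multiply(w1, w2):
    # X*X = I, so design-algebra multiplication is the symmetric difference of letter sets
    return w1 ^ w2

def defining_relation(generator_words):
    """All products of the generator words (the defining contrast subgroup), excluding I."""
    words = set()
    for r in range(1, len(generator_words) + 1):
        for combo in combinations(generator_words, r):
            prod = frozenset()
            for w in combo:
                prod = multiply(prod, w)
            words.add(prod)
    return words - {frozenset()}

def aliases(effect, relation):
    return sorted("".join(sorted(multiply(word(effect), w))) for w in relation)

# Generators D = AB and E = AC give the words ABD and ACE.
relation = defining_relation([word("ABD"), word("ACE")])
print(sorted("".join(sorted(w)) for w in relation))   # ['ABD', 'ACE', 'BCDE']
print("A is aliased with:", aliases("A", relation))   # ['ABCDE', 'BD', 'CE']
```

Running the sketch reproduces the alias set BD, CE, ABCDE for main effect A.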
Resolution
An important property of a fractional design is its resolution or ability to separate main effects and low-order interactions from one another. Formally, if the factors are binary then the resolution of the design is the minimum word length in the defining relation excluding (I). The resolution is denoted using Roman numerals, and it increases with the number. The most important fractional designs are those of resolution III, IV, and V: Resolutions below III are not useful and resolutions above V are wasteful (with binary factors) in that the expanded experimentation has no practical benefit in most cases—the bulk of the additional effort goes into the estimation of very high-order interactions which rarely occur in practice. The 2^(5 − 2) design above is resolution III since its defining relation is I = ABD = ACE = BCDE.
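Continuing the 2^(5 − 2) example, the resolution follows directly from the defining relation (a trivial sketch; the word list is copied from the defining relation computed above):

```python
# Resolution of a regular two-level design: length of the shortest word
# in its defining relation, excluding I.
defining_words = ["ABD", "ACE", "BCDE"]          # from the 2^(5-2) example above
resolution = min(len(w) for w in defining_words)
print(f"resolution = {resolution} (i.e. resolution III)")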
The resolution classification system described is only used for regular designs. Regular designs have run sizes that equal a power of two, and only full aliasing is present. Non-regular designs, sometimes known as Plackett-Burman designs, are designs where run size is a multiple of 4; these designs introduce partial aliasing, and generalized resolution is used as design criterion instead of the resolution described previously.
Resolution III designs can be used to construct saturated designs, where N-1 factors can be investigated in only N runs. These saturated designs can be used for quick screening when many factors are involved.
Example fractional factorial experiment
Montgomery gives the following example of a fractional factorial experiment. An engineer performed an experiment to increase the filtration rate (output) of a process to produce a chemical, and to reduce the amount of formaldehyde used in the process. The full factorial experiment is described in the Wikipedia page Factorial experiment. Four factors were considered: temperature (A), pressure (B), formaldehyde concentration (C), and stirring rate (D). The results in that example were that the main effects A, C, and D and the AC and AD interactions were significant. The results of that example may be used to simulate a fractional factorial experiment using a half-fraction of the original 2^4 = 16 run design. The table shows the 2^(4 − 1) = 8 run half-fraction experiment design and the resulting filtration rate, extracted from the table for the full 16 run factorial experiment.
In this fractional design, each main effect is aliased with a 3-factor interaction (e.g., A = BCD), and every 2-factor interaction is aliased with another 2-factor interaction (e.g., AB = CD). The aliasing relationships are shown in the table. This is a resolution IV design, meaning that main effects are aliased with 3-way interactions, and 2-way interactions are aliased with 2-way interactions.
The analysis of variance estimates of the effects are shown in the table below. From inspection of the table, there appear to be large effects due to A, C, and D. The coefficient for the AB interaction is quite small. Unless the AB and CD interactions have approximately equal but opposite effects, these two interactions appear to be negligible. If A, C, and D have large effects, but B has little effect, then the AC and AD interactions are most likely significant. These conclusions are consistent with the results of the full-factorial 16-run experiment.
Because B and its interactions appear to be insignificant, B may be dropped from the model. Dropping B results in a full factorial 2^3 design for the factors A, C, and D. Performing the ANOVA using factors A, C, and D, and the interaction terms A:C and A:D, gives the results shown in the table, which are very similar to the results for the full factorial experiment, but have the advantage of requiring only a half-fraction of 8 runs rather than 16.
External links
Fractional Factorial Designs (National Institute of Standards and Technology)
See also
Robust parameter designs
References
Design of experiments
Statistical process control | Fractional factorial design | [
"Engineering"
] | 2,281 | [
"Statistical process control",
"Engineering statistics"
] |
6,994,700 | https://en.wikipedia.org/wiki/Picoplankton | Picoplankton is the fraction of plankton composed of cells between 0.2 and 2 μm, which can be either prokaryotic or eukaryotic, phototrophic or heterotrophic:
photosynthetic
heterotrophic
They are prevalent amongst microbial plankton communities of both freshwater and marine ecosystems. They have an important role in making up a significant portion of the total biomass of phytoplankton communities.
Classification
In general, plankton can be categorized on the basis of physiological, taxonomic, or dimensional characteristics. Accordingly, a generic classification of plankton includes:
Bacterioplankton
Phytoplankton
Zooplankton
However, there is a simpler scheme that categorizes plankton based on a logarithmic size scale:
Macroplankton (200–2000 μm)
Microplankton (20–200 μm)
Nanoplankton (2–20 μm)
This was even further expanded to include picoplankton (0.2–2 μm) and femtoplankton (0.02–0.2 μm), as well as net plankton and ultraplankton. Now that picoplankton have been characterized, they have their own further subdivisions, such as prokaryotic and eukaryotic phototrophs and heterotrophs, that are spread throughout the world in various types of lakes and trophic states.
Autotrophic picoplankton can be differentiated from heterotrophic picoplankton by their photosynthetic pigments and their autofluorescence, which allow them to be enumerated under epifluorescence microscopy. This is how the minute eukaryotes first became known.
Overall, picoplankton play an essential role in oligotrophic dimictic lakes because they are able to produce and then recycle dissolved organic matter (DOM) very efficiently under circumstances in which competition from other phytoplankters is suppressed by factors such as limiting nutrients and predators. Picoplankton are responsible for most of the primary productivity in oligotrophic gyres, and are distinguished from nanoplankton and microplankton. Because they are small, they have a greater surface-to-volume ratio, enabling them to obtain the scarce nutrients in these ecosystems.
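The surface-to-volume advantage can be made concrete with a back-of-the-envelope sketch using idealised spherical cells (the spherical shape is an assumption; the diameters are simply the size-class bounds quoted in this article):

```python
import math

def surface_to_volume(diameter_um):
    """Surface-area-to-volume ratio (per um) of a spherical cell; equals 6/diameter."""
    r = diameter_um / 2.0
    surface = 4.0 * math.pi * r**2
    volume = (4.0 / 3.0) * math.pi * r**3
    return surface / volume

for d in (0.2, 2.0, 20.0):   # picoplankton lower/upper bounds, microplankton lower bound
    print(f"diameter {d:>5.1f} um -> S/V = {surface_to_volume(d):6.1f} per um")
```

A 0.2 μm cell has a surface-to-volume ratio a hundred times that of a 20 μm cell, which is why nutrient uptake per unit biomass can be so much higher for picoplankton.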
Furthermore, some species can also be mixotrophic. The smallest of cells (200 nm) are on the order of nanometers, not picometers. The SI prefix pico- is used quite loosely here, as nanoplankton and microplankton are only 10 and 100 times larger, respectively, although it is somewhat more accurate when considering the volume rather than the length.
Role in ecosystems
Picoplankton contribute greatly to the biomass and primary production in both marine and freshwater lake ecosystems. In the ocean, the concentration of picoplankton is 105–107 cells per millilitre of ocean water. Algal picoplankton is responsible for up to 90 percent of the total carbon production daily and annually in oligotrophic marine ecosystems. The amount of total carbon production by picoplankton in oligotrophic freshwater systems is also high, making up 70 percent of total annual carbon production. Marine picoplankton make up a higher percentage of biomass and carbon production in zones that are oligotrophic, like the open ocean, versus regions near the shore that are more nutrient rich. Their biomass and carbon production percentage also increases as the depth into the euphotic zone increases. This is due to their use of photopigments and efficiency at using blue-green light at these depths. Picoplankton population densities do not fluctuate throughout the year except in a few cases of smaller lakes where their biomass increases as the temperature of the lake water increases.
Picoplankton also play an important role in the microbial loop of these systems by helping to provide energy to higher trophic levels. They are grazed by various organisms such as flagellates, ciliates, rotifers and copepods. Flagellates are their main predators, due to their ability to swim towards picoplankton in order to consume them.
Oceanic picoplankton
Picoplankton are important in nutrient cycling in all major oceans, where they exist in their highest abundances. They have many features that allow them to survive in these oligotrophic (low-nutrient) and low-light regions, such as the use of several nitrogen sources, including nitrate, ammonium, and urea. Their small size and large surface area allow for efficient nutrient acquisition, incident light absorption, and organism growth. A small size also allows for minimal metabolic maintenance.
Picoplankton, specifically phototrophic picoplankton, play a significant role in the carbon production of open oceanic environments, which largely contributes to the global carbon production. Their carbon production contributes to at least 10% of global aquatic net primary productivity. High primary productivity contributions are made in both oligotrophic and deep zones in oceans. Picoplankton are dominant in biomass in open ocean regions.
Picoplankton also form the base of aquatic microbial food webs and are an energy source in the microbial loop. All trophic levels in a marine food web are affected by picoplankton carbon production and the gain or loss of picoplankton in the environment, especially in oligotrophic conditions. Marine predators of picoplankton include heterotrophic flagellates and ciliates. Protozoa are a dominant predator of picoplankton. Picoplankton are often lost through processes such as grazing, parasitism, and viral lysis.
Measurement
Marine scientists have come to understand over the last 10 or 15 years the importance of even the smallest subdivisions of plankton and their role in aquatic food webs and in organic and inorganic nutrient recycling. Accurately measuring the biomass and size distribution of picoplankton communities has therefore become essential. Two of the prevalent methods used to identify and enumerate picoplankton are fluorescence microscopy and visual counting. However, both methods are dated because of their time-consuming and inaccurate nature. As a result, newer, faster, and more accurate methods have emerged, including flow cytometry and image-analyzed fluorescence microscopy. Both techniques are efficient at measuring nanoplankton and auto-fluorescing phototrophic picoplankton. However, the smallest size ranges of picoplankton have often proven difficult to measure, which is why charge-coupled devices (CCDs) and video cameras are now being used to measure small picoplankton, although a slow-scan CCD-based camera is more effective at detecting and sizing tiny fluorochrome-stained particles such as bacteria.
See also
Picocyanobacteria
Picoeukaryote
Picozoa
References
Biological oceanography
Planktology
Aquatic ecology | Picoplankton | [
"Biology"
] | 1,492 | [
"Aquatic ecology",
"Ecosystems"
] |
6,995,526 | https://en.wikipedia.org/wiki/Hydraulic%20motor | A hydraulic motor is a mechanical actuator that converts hydraulic pressure and flow into torque and angular displacement (rotation). The hydraulic motor is the rotary counterpart of the hydraulic cylinder as a linear actuator. Most broadly, the category of devices called hydraulic motors has sometimes included those that run on hydropower (namely, water engines and water motors) but in today's terminology the name usually refers more specifically to motors that use hydraulic fluid as part of closed hydraulic circuits in modern hydraulic machinery.
Conceptually, a hydraulic motor should be interchangeable with a hydraulic pump because it performs the opposite function – similar to the way a DC electric motor is theoretically interchangeable with a DC electrical generator. However, many hydraulic pumps cannot be used as hydraulic motors because they cannot be backdriven. Also, a hydraulic motor is usually designed for working pressure at both sides of the motor, whereas most hydraulic pumps rely on low pressure provided from the reservoir at the input side and would leak fluid when abused as a motor.
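The conversion from pressure and flow into torque and speed can be illustrated with the standard ideal (loss-free) relations T = Δp·D/(2π) and Q = D·n. The sketch below is illustrative only; the displacement, pressure and flow values are assumptions, not data for any particular motor, and real motors deliver less because of mechanical and volumetric losses.

```python
import math

def ideal_motor(displacement_cc_per_rev, delta_p_bar, flow_lpm):
    """Ideal torque, speed and power of a hydraulic motor (losses ignored)."""
    D = displacement_cc_per_rev * 1e-6      # m^3 per revolution
    dp = delta_p_bar * 1e5                  # Pa
    Q = flow_lpm / 1000.0 / 60.0            # m^3/s
    torque = dp * D / (2.0 * math.pi)       # N*m
    speed_rpm = Q / D * 60.0                # rev/min
    power_kw = dp * Q / 1000.0              # hydraulic power, kW
    return torque, speed_rpm, power_kw

t, n, p = ideal_motor(displacement_cc_per_rev=250, delta_p_bar=200, flow_lpm=100)
print(f"torque ~ {t:.0f} N*m, speed ~ {n:.0f} rpm, power ~ {p:.1f} kW")
```

With the assumed 250 cc/rev displacement, 200 bar pressure differential and 100 L/min flow, the ideal output is roughly 800 N·m at 400 rpm, i.e. about 33 kW, which shows why large-displacement radial piston motors can deliver high torque at low speed.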
History of hydraulic motors
One of the first rotary hydraulic motors to be developed was that constructed by William Armstrong for his Swing Bridge over the River Tyne. Two motors were provided, for reliability. Each one was a three-cylinder single-acting oscillating engine. Armstrong developed a wide range of hydraulic motors, linear and rotary, that were used for a wide range of industrial and civil engineering tasks, particularly for docks and moving bridges.
The first simple fixed-stroke hydraulic motors had the disadvantage that they used the same volume of water whatever the load and so were wasteful at part-power. Unlike steam engines, as water is incompressible, they could not be throttled or their valve cut-off controlled. To overcome this, motors with variable stroke were developed. Adjusting the stroke, rather than controlling admission valves, now controlled the engine power and water consumption. One of the first of these was Arthur Rigg's patent engine of 1886. This used a double eccentric mechanism, as used on variable stroke power presses, to control the stroke length of a three cylinder radial engine. Later, the swashplate engine with an adjustable swashplate angle would become a popular way to make variable stroke hydraulic motors.
Hydraulic motor types
Vane motors
A vane motor consists of a housing with an eccentric bore, in which runs a rotor with vanes in it that slide in and out. The force differential created by the unbalanced force of the pressurized fluid on the vanes causes the rotor to spin in one direction. A critical element in vane motor design is how the vane tips are machined at the contact point between vane tip and motor housing. Several types of "lip" designs are used, and the main objective is to provide a tight seal between the inside of the motor housing and the vane, and at the same time to minimize wear and metal-to-metal contact.
Gear motors
A gear motor (external gear) consists of two gears, the driven gear (attached to the output shaft by way of a key, etc.) and the idler gear. High pressure oil is ported into one side of the gears, where it flows around the periphery of the gears, between the gear tips and the wall housings in which it resides, to the outlet port. The gears mesh, not allowing the oil from the outlet side to flow back to the inlet side. For lubrication, the gear motor uses a small amount of oil from the pressurized side of the gears, bleeds this through the (typically) hydrodynamic bearings, and vents the same oil either to the low pressure side of the gears, or through a dedicated drain port on the motor housing, which is usually connected to a line that vents the motor's case pressure to the system's reservoir. An especially positive attribute of the gear motor is that catastrophic breakdown is less common than in most other types of hydraulic motors. This is because the gears gradually wear down the housing and/or main bushings, reducing the volumetric efficiency of the motor gradually until it is all but useless. This often happens long before wear causes the unit to seize or break down.
Gear motors can be supplied as single- or double-directional depending on their usage, and they are available with either aluminum or cast iron bodies, depending on application conditions. They offer design options that can handle radial loads. Additionally, alternative configurations include a pressure relief valve, an anti-cavitation valve, and a speed sensor to meet specific application needs.
Gerotor motors
The gerotor motor is in essence a rotor with n − 1 teeth, rotating off center in a rotor/stator with n teeth. Pressurized fluid is guided into the assembly using a (usually) axially placed plate-type distributor valve. Several different designs exist, such as the Geroller (internal or external rollers) and Nichols motors. Typically, the Gerotor motors are low-to-medium speed and medium-to-high torque.
Axial plunger motors
For high-quality rotating drive systems, plunger motors are generally used. Whereas the speed of hydraulic pumps ranges from 1200 to 1800 rpm, the machinery to be driven by the motor often requires a much lower speed. This means that when an axial plunger motor (swept volume maximum 2 litres) is used, a gearbox is usually needed. For a continuously adjustable swept volume, axial piston motors are used.
Like piston (reciprocating) type pumps, the most common design of the piston type of motor is the axial. This type of motor is the most commonly used in hydraulic systems. These motors are, like their pump counterparts, available in both variable and fixed displacement designs. Typical usable (within acceptable efficiency) rotational speeds range from below 50 rpm to above 14000 rpm. Efficiencies and minimum/maximum rotational speeds are highly dependent on the design of the rotating group, and many different types are in use.
Radial piston motors
Radial piston motors are available in two basic types: Pistons pushing inward, and pistons pushing outward.
Pistons pushing inward
The crankshaft type (e.g. Staffa or SAI hydraulic motors), with a single cam and pistons pushing inwards, is basically an old design but one with extremely high starting torque characteristics. They are available in displacements from 40 cc/rev up to about 50 litres/rev but can sometimes be limited in power output. Crankshaft-type radial piston motors are capable of running at "creep" speeds, and some can run seamlessly up to 1500 rpm while offering virtually constant output torque characteristics. This makes them still the most versatile design.
The single-cam-type radial piston motor exists in many different designs itself. Usually the difference lies in the way the fluid is distributed to the different pistons or cylinders, and also the design of the cylinders themselves. Some motors have pistons attached to the cam using rods (much like in an internal combustion engine), while others employ floating "shoes", and even spherical contact telescopic cylinders like the Parker Denison Calzoni type. Each design has its own set of pros and cons, such as freewheeling ability, high volumetric efficiency, high reliability and so on.
Pistons pushing outward
Multi-lobe cam ring types (e.g. Black Bruin, Rexroth, Hägglunds Drives, Poclain, Rotary Power or Eaton Hydre-MAC type) have a cam ring with multiple lobes and the piston rollers push outward against the cam ring. This produces a very smooth output with high starting torque but they are often limited in the upper speed range. This type of motor is available in a very wide range from about 1 litre/rev to 250 litres/rev. These motors are particularly good on low speed applications and can develop very high power.
Braking
Hydraulic motors usually have a drain connection for the internal leakage, which means that when the power unit is turned off the hydraulic motor in the drive system will move slowly if an external load is acting on it. Thus, for applications such as a crane or winch with suspended load, there is always a need for a brake or a locking device.
Uses
Hydraulic pumps, motors, and cylinders can be combined into hydraulic drive systems. One or more hydraulic pumps, coupled to one or more hydraulic motors, constitute a hydraulic transmission.
Hydraulic motors are used for many applications now such as winches and crane drives, wheel motors for military vehicles, self-driven cranes, excavators, conveyor and feeder drives, cooling fan drives, mixer and agitator drives, roll mills, drum drives for digesters, trommels and kilns, shredders, drilling rigs, trench cutters, high-powered lawn trimmers, and plastic injection machines.
Hydraulic motors are also used in heat transfer applications.
See also
Sisu Nemo
References
Motor
ja:圧力モーター | Hydraulic motor | [
"Physics"
] | 1,794 | [
"Physical systems",
"Hydraulic actuators",
"Hydraulics"
] |
6,996,644 | https://en.wikipedia.org/wiki/Captive%20helicopter | A captive helicopter is a helicopter which is tethered to the ground with a rope, as with a captive balloon. Captive helicopters can be used for the same purposes as captive balloons.
A primary advantage of captive helicopters is that they can be more accurately steered than captive balloons or kites in order to compensate for the influence of the wind. A further advantage is that, unlike kites, they can be launched in the absence of wind. Their main disadvantages are that they require power for their flight and are very noisy.
Unlike kites (which rely solely on the wind for power) and balloons (which require specialty lighter-than-air gases), helicopters are normally powered by aviation fuels. However, it is possible to run captive helicopters electrically by running a cable inside the tether line holding the helicopter.
In 1887, the Parisian electrical engineer Gustave Trouvé demonstrated his tethered electric model helicopter at a meeting of the French Association for the Advancement of Sciences in Toulouse. At the end of the 1930s, the German company Telefunken attempted a longwave transmission experiment with a captive helicopter driven by a three-phase AC motor. The helicopter was intended to reach a height of 1000 metres. Because of electrostatic charges induced by the Earth's electric field, the fuses blew when the captive helicopter reached a height of 750 metres, and it made a rough landing.
References
External links
Electric Tethered Observation Platform (modern captive electric helicopter)
See also
Captive balloon
Captive plane
Kite
AEG helicopter
Helicopters
Aircraft configurations | Captive helicopter | [
"Engineering"
] | 309 | [
"Aircraft configurations",
"Aerospace engineering"
] |
6,997,062 | https://en.wikipedia.org/wiki/HAT-P-1b | HAT-P-1b is an extrasolar planet orbiting the Sun-like star HAT-P-1, also known as ADS 16402 B. HAT-P-1 is the dimmer component of the ADS 16402 binary star system. It is located roughly 521 light years away from Earth in the constellation Lacerta. HAT-P-1b is among the least dense of any of the known extrasolar planets.
Discovery
HAT-P-1b was detected by searching for astronomical transits of the parent star by orbiting planets. As the planet passes in front of its parent star (as seen from Earth), it blocks a small amount of the light reaching us from the star. HAT-P-1b was first detected by a dip of 0.6% in the light from the star. This enabled determination of the planet's radius and orbital period. The discovery was made by the HATNet Project (Hungarian Automated Telescope Network) using telescopes at the Fred Lawrence Whipple Observatory on Mount Hopkins in Arizona and at the Submillimeter Array facility in Hawaii. It was confirmed and the orbital parameters determined by radial velocity measurements made at the 8.2 m Subaru and 10 m Keck telescopes, the discovery announcement being made on September 14, 2006.
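For a transit, the blocked flux fraction equals the square of the planet-to-star radius ratio, which is how a dip of 0.6% constrains the planet's radius once the stellar radius is known. A quick sketch of that standard relation (only the ratio is computed here; no stellar radius is assumed):

```python
import math

def radius_ratio_from_depth(depth):
    """Transit depth is the blocked flux fraction, so depth = (Rp/Rs)**2."""
    return math.sqrt(depth)

ratio = radius_ratio_from_depth(0.006)   # the 0.6% dip quoted above
print(f"Rp/Rs ~ {ratio:.3f}")            # the physical radius follows once Rs is known
```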
Orbit and mass
HAT-P-1b orbits very close to its star, taking only 4.47 days to complete one orbit. It therefore falls into the category of hot Jupiters. At only 8.27 million kilometers from the star, tidal forces would circularise the orbit unless another perturbing body exists in the system. At the present time, the existing measurements are not sufficient to determine the orbital eccentricity, so a perfectly circular orbit has been assumed by the discoverers. However, the eccentricity of the planet was calculated to be no greater than 0.067.
In order to determine the mass of the planet, measurements of the star's radial velocity variations were made by the N2K Consortium. This was done by observing the Doppler shift in the star's spectrum. Combined with the known inclination of the orbit as determined by the transit observations, this revealed the mass of the planet to be 0.53±0.04 times that of Jupiter.
Rotation
As of August 2008, the most recent calculation of HAT-P-1b's Rossiter–McLaughlin effect, and hence of its spin–orbit angle, gave a value of 3.7°.
Characteristics
As evidenced by its high mass and planetary radius, HAT-P-1b is a gas giant, most likely composed primarily of hydrogen and helium. Emission from C2, CN and CH radicals in the planetary atmosphere was detected in 2022. The planetary atmosphere is hazy rather than cloudy, with an observed cloud area fraction of 22 percent.
Current theories predict that such planets formed in the outer regions of their solar systems and migrated inwards to their present orbits.
HAT-P-1b is significantly larger than predicted by theoretical models. This may indicate the presence of an additional source of heat within the planet. One possible candidate is tidal heating from an eccentric orbit, a possibility which has not been ruled out from the available measurements. However, another planet with a significantly inflated radius, HD 209458 b, is in a circular orbit.
An alternative possibility is that the planet has a high axial tilt, like Uranus in the Solar System. The problem with this explanation is that it is thought to be quite difficult to get a planet into this configuration, so having two such planets among the set of known transiting planets is problematic.
References
External links
BBC News
HATnet official homepage
NY Times
Lacerta
Transiting exoplanets
Hot Jupiters
Exoplanets discovered in 2006
Giant planets
Exoplanets discovered by HATNet | HAT-P-1b | [
"Astronomy"
] | 752 | [
"Lacerta",
"Constellations"
] |
6,997,099 | https://en.wikipedia.org/wiki/Rope%20pump | A rope pump is a kind of pump where a loose hanging rope is lowered into a well and drawn up through a long pipe with the bottom immersed in water. On the rope, round disks or knots matching the diameter of the pipe are attached which pull the water to the surface. It is commonly used in developing countries for both community supply and self-supply of water and can be installed on boreholes or hand-dug wells.
Description
A rope pump is a pump whose main and most visible component is a continuous loop of rope, which is integral to raising water from a well. Rope pumps are often used in developing areas; the most common design uses PVC pipe and a rope fitted with flexible or rigid valves. Rope pumps are cheap to build and easy to maintain. One solar-powered design can pump 3,000 litres per day to a height of 15 meters using an 80 watt solar panel. Rope pumps can be powered by low-speed gasoline or diesel engines, electricity, human energy, wind and solar energy.
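The quoted solar-pumping figure can be sanity-checked against the hydraulic energy required; the sketch below assumes five peak-sun hours per day and ignores losses other than the overall pump efficiency, so the numbers are illustrative only.

```python
# Hydraulic energy to lift 3,000 litres by 15 m versus the daily output of an 80 W panel.
rho, g = 1000.0, 9.81          # kg/m^3, m/s^2
volume = 3.0                   # m^3 per day (3,000 litres)
head = 15.0                    # m

hydraulic_energy = rho * g * volume * head     # ≈ 4.4e5 J (≈ 0.12 kWh)
panel_energy = 80.0 * 5 * 3600                 # 80 W for an assumed 5 peak-sun hours
print(f"implied efficiency ≈ {hydraulic_energy / panel_energy:.0%}")   # ≈ 31%
```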
History
Washer or chain pumps were used by the Chinese over 1000 years ago. In the 1980s Reinder van Tijen, an inventor and grass-roots activist, with the support of the Royal Tropical Institute of Amsterdam, created a rope pump design and began instructing communities around the world in how to make one from simple, locally available parts using PVC pipes and plastic moldings. He began in Burkina Faso in Africa and continued to Tunisia, Thailand and Gambia, among others. In Nicaragua, the technology was introduced around 1985 and by 2010 there were an estimated 70,000 pumps installed. An estimated 20,000 were installed on wells for rural communal water supply, and over 25% of the rural water supply was provided by rope pumps. The other 50,000 pumps were installed on private wells of rural families and farmers, partly or completely paid for by the families themselves (so-called self-supply). Many rope pumps are now being replaced by electric pumps as families climb "the water ladder". The pump is also used in other parts of Central America, with over 25,000 pumps installed to date. In Africa the improved model of the rope pump was introduced around 1995 but in many countries failed due to the use of outdated designs and a lack of long-term follow-up on quality in production and installation. By 2020 an estimated 5 million people in 20 countries worldwide were using rope pumps for domestic uses and small-scale irrigation.
Construction
The original rope pumps used knots along the rope length but can be made with flexible or rigid valves on the rope instead of knots. Alternatively they may use only rope, simply relying on the water clinging to the rope as it is quickly pulled to the surface.
Flexible valve rope pumps
Flexible valves can be made from cut pieces of bicycle wheel tubing. The valves are positioned approximately 20 cm apart on the rope. One disadvantage of flexible valve rope pumps is that the valves must be appropriately sized, in both diameter and thickness, for the different types, sizes and lengths of pipe.
Rigid valve rope pump
Rigid valves, using plastic or metal washers that fit tightly into the PVC pipe as the rope is dragged through, are also used. If the fit is tight, the washers can be spaced up to half a meter apart. The deeper the well, the smaller the pipe inner diameter must be, given the available power constraints; a rough sizing sketch follows below. These rope pumps are often worked with a hand crank.
Valves can also be made from knots in the rope itself.
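The depth–diameter trade-off mentioned above follows from the power needed to lift water: for a fixed power budget the deliverable flow falls roughly as 1/depth, and so, at a given rope speed, does the usable pipe cross-section. The sketch below uses assumed values for sustained hand-crank power, pump efficiency and rope speed, purely for illustration.

```python
import math

rho, g = 1000.0, 9.81
power_in = 75.0        # W, assumed sustained hand-crank output
efficiency = 0.6       # assumed overall pump efficiency
rope_speed = 1.0       # m/s, assumed

for depth in (10, 20, 40):
    flow = efficiency * power_in / (rho * g * depth)   # deliverable flow, m^3/s
    area = flow / rope_speed                           # required pipe cross-section, m^2
    diameter_mm = 1000 * math.sqrt(4 * area / math.pi)
    print(f"{depth} m well -> roughly {diameter_mm:.0f} mm inner diameter")
```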
Valveless rope pumps
Valveless pumps rely on friction with water clinging to the rope, which is moved at high speed, often using a bicycle to produce the required speed. It is a less efficient design but is simpler to construct than the other rope pumps.
Intellectual property
Rope pump technology is in the public domain and there are no patents pending on it.
See also
Wind pump
References
External links
rope pump site
Rope pump
Akvopedia Rope Pump article
Pumps | Rope pump | [
"Physics",
"Chemistry"
] | 782 | [
"Pumps",
"Hydraulics",
"Physical systems",
"Turbomachinery"
] |
6,997,105 | https://en.wikipedia.org/wiki/MedicAlert | The MedicAlert Foundation is a non-profit company founded in 1956 and headquartered in Turlock, California. It maintains a database of members' medical information that is made available to medical authorities in the event of a medical emergency. Members supply critical medical data to the organization and receive a distinctive metal bracelet or necklace tag which is worn at all times. It can be used by first responders, such as emergency medical personnel or law-enforcement agents, to access wearers' medical history and special medical needs.
The name MedicAlert may be interpreted either as the two separate words "medic alert" or as a blended form of the phrase "medical alert".
Protocol and publicity
The MedicAlert IDs worn by members are designed to mimic regular jewelry (such as bracelets, necklaces, ID tags, etc.) with the addition of the distinctive MedicAlert engraved tag. The personalized jewelry bears the words "Medic Alert" and the Staff of Asclepius, the universal symbol of the medical profession, on the obverse side, and important medical information and a personalized MedicAlert ID number on the back of the tag. Medical personnel can call the MedicAlert 24-hour Emergency Hotline and provide the ID number on the back of the ID to get more detailed medical information on the member.
Members' conditions and allergies are reviewed by medically trained staff and prioritized in the order of importance in which an emergency health professional would assess a patient. The prioritized conditions are then transferred onto a member's emblem and wallet card, while more detailed information is held by MedicAlert, ready to be passed on in an emergency situation.
While IDs may change depending on country and availability, the two main MedicAlert IDs are bracelets and necklaces, the former being the most popular. MedicAlert has teamed up with Citizen Watch Co. to provide a line of watches that include the Citizen Watch Co. Eco-Drive watch with the customized engraving and logo of MedicAlert.
In the 1980s the IDs were publicized in conjunction with the insurance industry, The Epilepsy Foundation, and The American Diabetes Association, amongst other foundations. Celebrities also participated in the campaign (including comedian Carol Burnett, whose bracelet is symbolically the one-millionth).
Common medical conditions
The medical conditions and prescriptions covered include, but are not limited to:
Adrenal insufficiency
Allergies (food, latex, insects, seasonal, environmental etc.)
Alzheimer's disease
Asthma
Autism
Diabetes
Epilepsy
Heart disease
Hemophilia
Hypertension
Drug-induced long QT syndrome
Medications with serious interactions, e.g. MAOIs and Lamotrigine
Parkinson's disease
Devices/implants (artificial heart valves, pacemaker)
The MedicAlert Foundation of Australia permits organ donation directions to be engraved on their IDs.
Advance directives
An advance directive covers specific directives as to the course of treatment that is to be taken by caregivers should the patient be unable to give informed consent due to incapacity. Currently in the United States, MedicAlert will hold signed advance directives, which can be provided to first responders and medical personnel when they contact MedicAlert. A common advance directive is a do-not-resuscitate order, which states that the member has requested that resuscitation should not be attempted if the member suffers cardiac or respiratory arrest.
International affiliates
MedicAlert has international affiliates in nine countries.
United States
Australia
Canada
Cyprus
Iceland
Malaysia
New Zealand
South Africa
United Kingdom
Zimbabwe
References
External links
U.S. MedicAlert Foundation
MedicAlert Foundation Australia
MedicAlert Foundation Canada
MedicAlert Foundation Cyprus
MedicAlert Foundation Iceland
MedicAlert Foundation Malaysia
MedicAlert Foundation New Zealand
MedicAlert Foundation South Africa
MedicAlert Foundation United Kingdom
MedicAlert Foundation Zimbabwe
Medical equipment
First aid
Organizations established in 1956
Non-profit organizations based in California | MedicAlert | [
"Biology"
] | 774 | [
"Medical equipment",
"Medical technology"
] |
6,997,434 | https://en.wikipedia.org/wiki/Dark%20therapy | Dark therapy is the practice of keeping people in complete darkness for extended periods of time in an attempt to treat psychological conditions. The human body produces the melatonin hormone, which is responsible for supporting the circadian rhythms. Darkness seems to help keep these circadian rhythms stable.
Dark therapy is said to have been founded by the German anthropologist Holger Kalweit. One form of dark therapy is to block blue-wavelength light to prevent the breakdown of melatonin.
The dark therapy concept originated in 1998 from research suggesting that systematic exposure to darkness might alter people's mood. The original studies enforced 14 hours of darkness on bipolar patients for three consecutive nights, and showed a decrease in manic episodes in the patients. Participation in this study proved unrealistic, as patients did not want to undergo treatment in total darkness from 6 p.m. to 8 a.m. More recently, with the discovery of intrinsically photosensitive retinal ganglion cells, it has been hypothesized that similar results could be achieved by blocking blue light, as a potential treatment for bipolar disorder. Moreover, researchers exploring blue-blocking glasses have so far considered dark therapy only as an add-on treatment to be used together with psychotherapy, rather than a replacement for other therapies.
Another study consisting of healthy females and males suggested that a single exposure to blue light after being kept in a dim setting could reduce sleepiness. Contrary to the original claim that decreasing the amount of blue light could help with insomnia, this study suggested improvement with blue light exposure.
See also
Clinical depression
Light therapy
Seasonal affective disorder
Sleep hygiene
References
Circadian rhythm
Light therapy
Treatment of bipolar disorder | Dark therapy | [
"Biology"
] | 353 | [
"Behavior",
"Sleep",
"Circadian rhythm"
] |
6,997,477 | https://en.wikipedia.org/wiki/Engineering%20technician | An engineering technician is a professional trained in skills and techniques related to a specific branch of technology, with a practical understanding of the relevant engineering concepts. Engineering technicians often assist in projects relating to research and development, or focus on post-development activities like implementation or operation.
The Dublin Accord was signed in 2002 as an international agreement recognizing engineering technician qualifications. The Dublin Accord is analogous to the Washington Accord for engineers and the Sydney Accord for engineering technologists.
Nature of work
Engineering technicians help solve technical problems in many ways. They build or set up equipment, conduct experiments, collect data, and calculate results. They might also help to make a model of new equipment. Some technicians work in quality control, checking products, running tests, and collecting data. In manufacturing, they help to design and develop products and find ways to produce things efficiently. The role spans multiple fields, such as software design and repair.
They may also be people who produce technical drawings or engineering drawings.
Engineering technicians are responsible for using the theories and principles of science, engineering, and mathematics to solve problems and come up with solutions in the research, design, development, manufacturing, sales, construction, inspection, and maintenance of systems and products. Engineering technicians help engineers and scientists in researching and developing, while some other engineering technicians may be responsible for inspections, quality control, and processes which may include conducting tests and data collection.
Education
Engineering technician diplomas and two-year degrees are generally offered by technical schools and non-university higher education institutions like colleges of further education, vocational schools, and community colleges. Many four-year colleges and universities offer bachelor's degrees in engineering technology but engineering technologists are somewhat different from engineering technicians.
In Portugal and Spain, titles literally meaning 'technical engineering' are used. Professionals attain the title with the award of a short-cycle three- to four-year undergraduate degree (associate degree or bachelor's degree) in a technical engineering field from colleges or technical engineering institutes in Portugal, and from universities in Spain. Spanish "technical engineers" have full competency in their respective professional fields of engineering; the difference is that the three- or four-year engineers have competence only in their specialty (mechanical, electrical, chemical, etc.), while the "Engineering Superior School" engineers have wider competencies.
In the United States, the engineering technology accreditation commission (ETAC) of the Accreditation Board for Engineering and Technology (ABET) accredits two-year associate degree programs that meet a set of specified standards. These programs include at least a college algebra and trigonometry course and, if needed, one or two basic science courses at any accredited school. The number of math and science prerequisite courses depends on the branch of engineering that the student chooses.
Engineering technicians apply scientific and engineering skills usually gained in postsecondary programs below the bachelor's degree level or through short-cycle bachelor's degrees. However, some university institutions award undergraduate degrees in the engineering field, which may confer the title of Engineering technician to the student, who is eligible to become a fully chartered engineer after further studies at the master's degree level. Engineering technicians are called professional engineers in the UK only.
Certification
Although the term engineering technician is used throughout, these roles are often termed differently within specific jurisdictions. They include titles such as certified or professional technician, and such practitioners may also be called engineering associates.
Canada
The individual professional title Certified Technician and post-nominal C.Tech. are protected by provincial legislation and can only be used by registrants certified by engineering and applied science member organizations. The nine provincial professional associations are unified federally through Technology Professionals Canada, which advocates for the profession within the provincial associations and respective regulatory bodies.
United Kingdom
In the United Kingdom, the term Engineering Technician and post-nominal EngTech are protected in civil law and can only be used by technicians registered with the Engineering Council UK.
See also
Practical engineer
Drafter
American Society for Engineering Education
National Council of Examiners for Engineering and Surveying
UNESCO-UNEVOC
References
External links
List of tasks and requirements for mechanical engineering technicians
Institution of Mechanical Engineers (IMechE) UK
The Engineering Technician Forum on LinkedIn
Engineering occupations
Science occupations
Technical drawing
Technicians
Draughtsmen | Engineering technician | [
"Engineering"
] | 867 | [
"Design engineering",
"Civil engineering",
"Technical drawing",
"Draughtsmen"
] |
6,997,478 | https://en.wikipedia.org/wiki/Cys-loop%20receptor | The Cys-loop ligand-gated ion channel superfamily is composed of nicotinic acetylcholine, GABAA, GABAA-ρ, glycine, 5-HT3, and zinc-activated (ZAC) receptors. These receptors are composed of five protein subunits which form a pentameric arrangement around a central pore. There are usually 2 alpha subunits and 3 other beta, gamma, or delta subunits (some consist of 5 alpha subunits).
The name of the family refers to a characteristic loop formed by 13 highly conserved amino acids between two cysteine (Cys) residues, which form a disulfide bond near the N-terminal extracellular domain.
Cys-loop receptors are known only in eukaryotes, but are part of a larger family of pentameric ligand-gated ion channels. Only the Cys-loop clade includes the pair of bridging cysteine residues. The larger superfamily includes bacterial (e.g. GLIC) as well as non-Cys-loop eukaryotic receptors, and is referred to as "pentameric ligand-gated ion channels", or "Pro-loop receptors".
All subunits consist of a large conserved extracellular N-terminal domain, three highly conserved transmembrane domains, a cytoplasmic loop of variable size and amino acid sequence, and a fourth transmembrane region with a relatively short and variable extracellular C-terminal domain.
Neurotransmitters bind at the interface between subunits in the extracellular domain.
Each subunit contains four membrane-spanning alpha helices (M1, M2, M3, M4). The pore is formed primarily by the M2 helices. The M3-M4 linker is the intracellular domain that binds the cytoskeleton.
Binding
Most knowledge about cys-loop receptors comes from inferences made while studying various members of the family. Research on the structures of acetylcholine binding proteins (AChBP) determined that the binding sites consist of six loops, with the first three forming the principal face and the next three forming the complementary face. The last loop on the principal face wraps over the ligand in the active receptor. This site is also abundant in aromatic residues.
Recent literature indicates that the Trp residue on loop B is crucial for both agonist and antagonist binding. The neurotransmitter is taken into the binding site where it interacts (through hydrogen and cation-π bonding) with the amino acid residues in the aromatic box, located on the principal face of the binding site. Another essential interaction occurs between the agonist and a tyrosine on loop C. Upon interaction, the loop undergoes a conformational change and rotates down to cap the molecule in the binding site.
Channel gating
Through research done on nicotinic acetylcholine receptors it has been determined that the channels are activated through allosteric interactions between the binding and gating domains. Once the agonist binds, it brings about conformational changes, including movement of a beta sheet in the amino-terminal domain and outward movement of loops 2, F and the Cys-loop, which are tied to the M2–M3 linker and pull the channel open. Electron microscopy (at 9 Å) shows that the opening is caused by rotation at the M2 domain, but other studies on crystal structures of these receptors have shown that the opening could result from an M2 tilt which leads to pore dilation and a quaternary turn of the entire pentameric receptor.
See also
Ion channel
Nicotinic agonists
Receptor (biochemistry)
References
External links
Cys-Loop Ligand Gated Channels
Electrophysiology
Ion channels
Ionotropic receptors
Molecular neuroscience
Neurochemistry
Protein families | Cys-loop receptor | [
"Chemistry",
"Biology"
] | 786 | [
"Ionotropic receptors",
"Signal transduction",
"Protein classification",
"Molecular neuroscience",
"Molecular biology",
"Biochemistry",
"Protein families",
"Neurochemistry",
"Ion channels"
] |
6,997,526 | https://en.wikipedia.org/wiki/Terminal%20restriction%20fragment%20length%20polymorphism | Terminal restriction fragment length polymorphism (TRFLP or sometimes T-RFLP) is a molecular biology technique for profiling of microbial communities based on the position of a restriction site closest to a labelled end of an amplified gene. The method is based on digesting a mixture of PCR amplified variants of a single gene using one or more restriction enzymes and detecting the size of each of the individual resulting terminal fragments using a DNA sequencer. The result is a graph image where the x-axis represents the sizes of the fragment and the y-axis represents their fluorescence intensity.
Background
TRFLP is one of several molecular methods aimed to generate a fingerprint of an unknown microbial community. Other similar methods include DGGE, TGGE, ARISA, ARDRA, PLFA, etc.
These relatively high-throughput methods were developed to reduce the cost and effort of analyzing microbial communities with a clone library. The method was first described by Avaniss-Aghajani et al. in 1994 and later by Liu et al. in 1997; both employed amplification of the 16S rDNA target gene from the DNA of several isolated bacteria as well as environmental samples.
Since then the method has been applied for the use of other marker genes such as the functional marker gene pmoA to analyze methanotrophic communities.
Method
Like most other community analysis methods, TRFLP is based on PCR amplification of a target gene. In the case of TRFLP, the amplification is performed with one or both of the primers having their 5' end labeled with a fluorescent molecule. If both primers are labeled, different fluorescent dyes are required. While several common fluorescent dyes can be used for tagging, such as 6-carboxyfluorescein (6-FAM), ROX, carboxytetramethylrhodamine (TAMRA, a rhodamine-based dye), and hexachlorofluorescein (HEX), the most widely used dye is 6-FAM. The mixture of amplicons is then subjected to a restriction reaction, normally using a four-cutter restriction enzyme. Following the restriction reaction, the mixture of fragments is separated using either capillary or polyacrylamide electrophoresis in a DNA sequencer, and the sizes of the different terminal fragments are determined by the fluorescence detector. Because the digested mixture of amplicons is analyzed in a sequencer, only the terminal fragments (i.e. the labeled end or ends of the amplicon) are read, while all other fragments are ignored. Thus, T-RFLP is different from ARDRA and RFLP, in which all restriction fragments are visualized. In addition to these steps, the TRFLP protocol often includes a cleanup of the PCR products prior to the restriction and, if capillary electrophoresis is used, a desalting stage prior to running the sample.
Data format and artifacts
The result of T-RFLP profiling is a graph called an electropherogram, which is an intensity-plot representation of an electrophoresis experiment (gel or capillary). In an electropherogram the X-axis marks the sizes of the fragments while the Y-axis marks the fluorescence intensity of each fragment. Thus, what appears on an electrophoresis gel as a band appears as a peak on the electropherogram, whose integral is its total fluorescence. In a T-RFLP profile each peak is assumed to correspond to one genetic variant in the original sample, while its height or area is assumed to correspond to its relative abundance in the specific community. Neither assumption, however, is always met. Often, several different bacteria in a population may give a single peak on the electropherogram because they carry a restriction site for the particular restriction enzyme used at the same position. To overcome this problem and increase the resolving power of the technique, a single sample can be digested in parallel with several enzymes (often three), resulting in three T-RFLP profiles per sample, each resolving some variants while missing others. Another modification sometimes used is to fluorescently label the reverse primer as well, using a different dye, again resulting in two parallel profiles per sample, each resolving a different number of variants.
In addition to convergence of two distinct genetic variants into a single peak artifacts might also appear, mainly in the form of false peaks. False peaks are generally of two types: background “noises” and “pseudo” TRFs. Background (noise) peaks are peaks resulting from the sensitivity of the detector in use. These peaks are often small in their intensity and usually form a problem in case the total intensity of the profile is low (i.e. low concentration of DNA). Because these peaks result from background noise they are normally irreproducible in replicate profiles, thus the problem can be tackled by producing a consensus profile from several replicates or by eliminating peaks below a certain threshold. Several other computational techniques were also introduced in order to deal with this problem. Pseudo TRFs, on the other hand, are reproducible peaks and are linear to the amount of DNA loaded. These peaks are thought to be the result of ssDNA annealing on to itself and creating double stranded random restriction sites which are later recognized by the restriction enzyme resulting in a terminal fragment which does not represent any genuine genetic variant. It has been suggested that applying a DNA exonuclease such as the Mung bean exonuclease prior to the digestion stage might eliminate such artifact.
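The noise-filtering and replicate-consensus steps described above can be expressed in a few lines of code; the sketch below uses made-up peak data and a simple relative-area threshold, and is not taken from any of the cited studies.

```python
# Each profile maps fragment size (bp) to peak area.
def drop_noise(profile, threshold=0.005):
    """Remove peaks whose relative area is below `threshold` of the total fluorescence."""
    total = sum(profile.values())
    return {size: area for size, area in profile.items() if area / total >= threshold}

def consensus(replicates, min_occurrence=2):
    """Keep fragment sizes seen in at least `min_occurrence` replicate profiles."""
    counts = {}
    for rep in replicates:
        for size in rep:
            counts[size] = counts.get(size, 0) + 1
    keep = {size for size, n in counts.items() if n >= min_occurrence}
    # Average the areas of the retained peaks across all replicates.
    return {size: sum(rep.get(size, 0) for rep in replicates) / len(replicates)
            for size in keep}

reps = [{72: 1800, 145: 950, 210: 8}, {72: 1750, 145: 900, 301: 6}, {72: 1900, 145: 1000}]
print(consensus([drop_noise(r) for r in reps]))   # only the reproducible 72 bp and 145 bp peaks remain
```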
Interpretation of data
The data resulting from the electropherogram is normally interpreted in one of the following ways.
Pattern comparison
In pattern comparison the general shapes of electropherograms of different samples are compared for changes such as presence-absence of peaks between treatments, their relative size, etc.
Complementing with a clone library
If a clone library is constructed in parallel to the T-RFLP analysis then the clones can be used to assess and interpret the T-RFLP profile. In this method the TRF of each clone is determined either directly (i.e. performing T-RFLP analysis on each single clone) or by in silico analysis of that clone’s sequence. By comparing the T-RFLP profile to a clone library it is possible to validate each of the peaks as genuine as well as to assess the relative abundance of each variant in the library.
Peak resolving using a database
Several computer applications attempt to relate the peaks in an electropherogram to specific bacteria in a database. Normally this type of analysis is done by simultaneously resolving several profiles of a single sample obtained with different restriction enzymes. The software then resolves the profile by attempting to maximize the matches between the peaks in the profiles and the entries in the database, so that the number of peaks left without a matching sequence is minimal. The software retrieves from the database only those sequences whose TRFs appear in all of the analyzed profiles.
Multivariate analysis
A growing trend in analyzing T-RFLP profiles is to use multivariate statistical methods to interpret the T-RFLP data. Usually the methods applied are those commonly used in ecology, and especially in the study of biodiversity. Among these, ordination and cluster analysis are the most widely used.
In order to perform multivariate statistical analysis on T-RFLP data, the data must first be converted to a table known as a "sample by species" table, which depicts the different samples (T-RFLP profiles) versus the species (T-RFs), with the height or area of the peaks as values.
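A minimal sketch of this conversion, using the pandas library and invented sample names and peak values, is shown below; the resulting matrix of relative abundances can then be passed to ordination or clustering routines.

```python
import pandas as pd

# Long format: one row per detected terminal restriction fragment (T-RF).
peaks = pd.DataFrame({
    "sample": ["soil_A", "soil_A", "soil_A", "soil_B", "soil_B", "soil_C"],
    "trf_bp": [72, 145, 210, 72, 301, 145],
    "area":   [1800, 950, 60, 1700, 80, 990],
})

# Pivot to samples (rows) x T-RFs (columns); absent peaks become 0.
table = peaks.pivot_table(index="sample", columns="trf_bp", values="area", fill_value=0)

# Convert peak areas to relative abundances per sample before multivariate analysis.
relative = table.div(table.sum(axis=1), axis=0)
print(relative.round(3))
```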
Advantages and disadvantages
As T-RFLP is a fingerprinting technique its advantages and drawbacks are often discussed in comparison with other similar techniques, mostly DGGE.
Advantages
The major advantage of T-RFLP is the use of an automated sequencer, which gives highly reproducible results for repeated samples. Although the genetic profiles are not completely reproducible, and several minor peaks that appear are irreproducible, the overall shape of the electropherogram and the ratios of the major peaks are considered reproducible. The use of an automated sequencer, which outputs the results in a digital numerical format, also provides an easy way to store the data and compare different samples and experiments. The numerical format of the data can be, and has been, used for relative (though not absolute) quantification and statistical analysis. Although sequence data cannot be definitively inferred directly from the T-RFLP profile, in silico assignment of the peaks to existing sequences is possible to a certain extent.
Drawbacks
Because T-RFLP relies on DNA extraction methods and PCR, the biases inherent to both will affect the results of the analysis. Also, because only the terminal fragments are read, any two distinct sequences that share a terminal restriction site will produce a single peak on the electropherogram and will be indistinguishable. Indeed, when T-RFLP is applied to a complex microbial community, the result is often a compression of the total diversity into typically 20–50 distinct peaks, each representing an unknown number of distinct sequences. Although this phenomenon makes the T-RFLP results easier to handle, it naturally introduces biases and an oversimplification of the real diversity. Attempts to minimize (but not overcome) this problem are often made by applying several restriction enzymes and/or labeling both primers with different fluorescent dyes. The inability to retrieve sequences from T-RFLP often leads to the need to construct and analyze one or more clone libraries in parallel to the T-RFLP analysis, which adds to the effort and complicates the analysis. The possible appearance of false (pseudo) T-RFs, as discussed above, is yet another drawback. To handle this, researchers often consider only peaks that can be affiliated with sequences in a clone library.
References
External links
Improved Protocol for T-RFLP by Capillary Electrophoresis
Statistical methods for characterizing diversity of microbial communities by analysis of terminal restriction fragment length polymorphisms of 16S rRNA genes.
FragSort: software for in silico assignment of T-RFLP profiles, from Ohio State University
T-RFLP Analysis (APLAUS+): another in silico assignment tool, on the website of the Microbial Community Analysis project at the University of Idaho
BEsTRF: a tool for optimal resolution of terminal restriction fragment length polymorphism analysis based on user-defined primer-enzyme-sequence databases
The First Decade of Terminal Restriction Fragment Length Polymorphism (T-RFLP) in Microbial Ecology
Molecular biology
Biochemistry methods
Laboratory techniques
Electrophoresis | Terminal restriction fragment length polymorphism | [
"Chemistry",
"Biology"
] | 2,212 | [
"Biochemistry methods",
"Instrumental analysis",
"Biochemical separation processes",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry",
"Electrophoresis"
] |
6,997,706 | https://en.wikipedia.org/wiki/Triplatin%20tetranitrate | Triplatin tetranitrate (rINN; also known as BBR3464) is a platinum-based cytotoxic drug that underwent clinical trials for the treatment of human cancer. The drug acts by forming adducts with cellular DNA, preventing DNA transcription and replication, thereby inducing apoptosis. Other platinum-containing anticancer drugs include cisplatin, carboplatin, and oxaliplatin.
Drug development
Triplatin belongs to the anticancer class of polynuclear platinum complexes (PPCs), developed in the laboratory of Professor Nicholas Farrell, where one or more platinum centers are linked by amine ligands. BBR3464 was patented in the mid-1990s and clinical development and licensing was performed initially by Boehringer Mannheim Italia and eventually by the pharmaceutical company Roche, when clinical development was led by Novuspharma. In preclinical trials it demonstrated cytotoxic activity in cancer cell lines that had either intrinsic or acquired resistance to cisplatin. Triplatin remains the only “non-classical” platinum drug (not based on the cisplatin structure) to have entered human clinical trials. Phase I and Phase II clinical results have been summarized.
Mode of action
The main target of triplatin is cellular DNA, similar to cisplatin. The drug forms novel adducts with DNA, structurally distinct from those formed by cisplatin. More recently, cellular accumulation mediated by heparan sulfate proteoglycans and high-affinity glycosaminoglycan (GAG) binding indicates that cationic PPCs are intrinsically dual-function agents, acting by mechanisms discrete from the neutral, mononuclear agents.
Side effects
In phase I studies, when given once every 28 days, the main dose-limiting toxicities (DLTs) of triplatin (BBR3464) were neutropenia and diarrhea, encountered at a dose level of 1.1 mg/m². Diarrhea was treatable with loperamide. The lack of nephrotoxicity and low urinary excretion supported use of the drug without hydration.
References
Alkylating antineoplastic agents
IARC Group 2A carcinogens
Coordination complexes
Platinum(II) compounds
Ammine complexes | Triplatin tetranitrate | [
"Chemistry"
] | 474 | [
"Coordination chemistry",
"Coordination complexes"
] |
6,997,798 | https://en.wikipedia.org/wiki/Gastrocolic%20reflex | The gastrocolic reflex or gastrocolic response is a physiological reflex that controls the motility, or peristalsis, of the gastrointestinal tract following a meal. It involves an increase in motility of the colon consisting primarily of giant migrating contractions, in response to stretch in the stomach following ingestion and byproducts of digestion entering the small intestine. The reflex propels existing intestinal contents through the digestive system helps make way for ingested food, and is responsible for the urge to defecate following a meal.
Physiology
An increase in electrical activity is seen as little as 15 minutes after eating. The gastrocolic reflex is unevenly distributed throughout the colon, with the sigmoid colon exhibiting a greater phasic response to propel food distally into the rectum; however, the tonic response across the colon is uncertain. Increased pressure within the rectum acts as stimulus for defecation. Small intestine motility is also increased in response to the gastrocolic reflex.
These contractions are generated by the muscularis externa stimulated by the myenteric plexus. A number of neuropeptides have been proposed as mediators of the gastrocolic reflex. These include serotonin, neurotensin, cholecystokinin, prostaglandin E1, and gastrin.
Clinical significance
Clinically, the gastrocolic reflex has been implicated in pathogenesis of irritable bowel syndrome (IBS): the very act of eating or drinking can provoke an overreaction of the gastrocolic response in some patients with IBS due to their heightened visceral sensitivity, and this can lead to abdominal pain and distension, flatulence, and diarrhea. The gastrocolic reflex has also been implicated in pathogenesis of functional constipation, where patients with spinal cord injury and diabetics with gastroparesis secondary to diabetic neuropathy have an increased colonic transit time.
The gastrocolic reflex can also be used to optimise the treatment of constipation. Since the reflex is most active in the mornings and immediately after meals, consumption of stimulant laxatives, such as sennosides and bisacodyl, during these times will augment the reflex and help increase colonic contractions and therefore defecation.
References
External links
Reflexes
Physiology
Stomach | Gastrocolic reflex | [
"Biology"
] | 514 | [
"Physiology"
] |
6,997,890 | https://en.wikipedia.org/wiki/Polypyrimidine%20tract | The polypyrimidine tract is a region of pre-messenger RNA (mRNA) that promotes the assembly of the spliceosome, the protein complex specialized for carrying out RNA splicing during the process of post-transcriptional modification. The region is rich with pyrimidine nucleotides, especially uracil, and is usually 15–20 base pairs long, located about 5–40 base pairs before the 3' end of the intron to be spliced.
A number of protein factors bind to or associate with the polypyrimidine tract, including the spliceosome component U2AF and the polypyrimidine tract-binding protein (PTB), which plays a regulatory role in alternative splicing. PTB's primary function is in exon silencing, by which a particular exon region normally spliced into the mature mRNA is instead left out, resulting in the expression of an isoform of the protein for which the mRNA codes. Because PTB is ubiquitously expressed in many higher eukaryotes, it is thought to suppress the inclusion of "weak" exons with poorly defined splice sites. However, PTB binding is not sufficient to suppress "robust" exons.
The suppression or selection of exons is critical to the proper expression of tissue-specific isoforms. For example, smooth muscle and skeletal muscle express alternate isoforms distinguished by mutually exclusive exon selection in alpha-tropomyosin.
References
Gene expression
Spliceosome | Polypyrimidine tract | [
"Chemistry",
"Biology"
] | 314 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
6,998,364 | https://en.wikipedia.org/wiki/Symplectic%20filling | In mathematics, a filling of a manifold X is a cobordism W between X and the empty set. More to the point, the n-dimensional topological manifold X is the boundary of an (n + 1)-dimensional manifold W. Perhaps the most active area of current research is when n = 3, where one may consider certain types of fillings.
There are many types of fillings, and a few examples of these types (within a probably limited perspective) follow.
An oriented filling of any orientable manifold X is another manifold W such that the orientation of X is given by the boundary orientation of W, which is the one where the first basis vector of the tangent space at each point of the boundary is the one pointing directly out of W, with respect to a chosen Riemannian metric. Mathematicians call this orientation the outward normal first convention.
All the following cobordisms are oriented, with the orientation on W given by a symplectic structure. Let ξ denote the kernel of the contact form α.
A weak symplectic filling of a contact manifold (X,ξ) is a symplectic manifold (W,ω) with ∂W = X such that the restriction of ω to ξ is positive, that is, ω|ξ > 0.
A strong symplectic filling of a contact manifold (X,ξ) is a symplectic manifold (W,ω) with ∂W = X such that ω is exact near the boundary (which is X) and α is a primitive for ω. That is, ω = dα in a neighborhood of the boundary ∂W.
A Stein filling of a contact manifold (X,ξ) is a Stein manifold W which has X as its strictly pseudoconvex boundary and ξ is the set of complex tangencies to X – that is, those tangent planes to X that are complex with respect to the complex structure on W. The canonical example of this is the 3-sphere, sitting as the unit sphere in C², where the complex structure on C² is multiplication by i in each coordinate and W is the ball {|x| < 1} bounded by that sphere.
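For concreteness, the standard filling of the 3-sphere can be written out explicitly; the computation below is a routine restatement of the example above rather than anything taken from the listed references.

```latex
\omega \;=\; dx_1\wedge dy_1 + dx_2\wedge dy_2,
\qquad
\alpha \;=\; \tfrac12\sum_{j=1}^{2}\bigl(x_j\,dy_j - y_j\,dx_j\bigr),
\qquad
d\alpha \;=\; \omega .
```

Since α restricts to a contact form on the unit 3-sphere whose kernel is exactly the set of complex tangencies, the unit ball with the standard symplectic form ω is a strong (indeed Stein) filling of the standard contact 3-sphere.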
It is known that this list is strictly increasing in difficulty in the sense that there are examples of contact 3-manifolds with weak but no strong filling, and others that have strong but no Stein filling. Further, it can be shown that each type of filling is an example of the one preceding it, so that a Stein filling is a strong symplectic filling, for example. It used to be that one spoke of semi-fillings in this context, which means that X is one of possibly many boundary components of W, but it has been shown that any semi-filling can be modified to be a filling of the same type, of the same 3-manifold, in the symplectic world (Stein manifolds always have one boundary component).
References
Y. Eliashberg, A Few Remarks about Symplectic Filling, Geometry and Topology 8, 2004, p. 277–293
J. Etnyre, On Symplectic Fillings Algebr. Geom. Topol. 4 (2004), p. 73–80 online
H. Geiges, An Introduction to Contact Topology, Cambridge University Press, 2008
Geometric topology | Symplectic filling | [
"Mathematics"
] | 633 | [
"Topology",
"Geometric topology"
] |
6,998,695 | https://en.wikipedia.org/wiki/Phase%20I%20environmental%20site%20assessment | In the United States, an environmental site assessment is a report prepared for a real estate holding that identifies potential or existing environmental contamination liabilities. The analysis, often called an ESA, typically addresses both the underlying land as well as physical improvements to the property. A proportion of contaminated sites are "brownfield sites." In severe cases, brownfield sites may be added to the National Priorities List where they will be subject to the U.S. Environmental Protection Agency's Superfund program.
The actual sampling of soil, air, groundwater and/or building materials is typically not conducted during a Phase I ESA. The Phase I ESA is generally considered the first step in the process of environmental due diligence. Standards for performing a Phase I site assessment have been promulgated by the US EPA and are based in part on ASTM Standard E1527-13.
If a site is considered contaminated, a Phase II environmental site assessment may be conducted, ASTM test E1903, a more detailed investigation involving chemical analysis for hazardous substances and/or petroleum hydrocarbons.
Background
As early as the 1970s specific property purchasers in the United States undertook studies resembling current Phase I ESAs, to assess risks of ownership of commercial properties which had a high degree of risk from prior toxic chemical use or disposal. Many times these studies were preparatory to understanding the nature of cleanup costs if the property was being considered for redevelopment or change of land use.
In the United States of America demand increased dramatically for this type of study in the 1980s following judicial decisions related to liability of property owners to effect site cleanup. Interpreting the Comprehensive Environmental Response, Compensation and Liability Act of 1980 (CERCLA), the U.S. courts have held that a buyer, lessor, or lender may be held responsible for remediation of hazardous substance residues, even if a prior owner caused the contamination; performance of a Phase I Environmental Site Assessment, according to the courts' reasoning, creates a safe harbor, known as the 'Innocent Landowner Defense'. The original standard under CERCLA for establishing an innocent landowner defense was based upon the requirement to perform a "all appropriate inquiry" prior to ownership transfer. At such time, engineering firms started performing professional engineering reports under a variety of monikers including, "Environmental Audits", "Property Transfer Screens", "Environmental Due-Diligence Reports" and "Environmental Site Assessments". In 1991, Impact Environmental coined the industry term, “Environmental Site Assessment” to replace the commonly used "Environmental Audit” for property transfer studies. A 1990 Court decision, No. 89-8094 (11th Cir. May 23, 1990), United States v. Fleet Factors Corp. found that a secured creditor can be liable for property contamination under the strict, joint and several liability scheme outlined in CERCLA. As a result of this decision, banks elevated their demands for pre-transfer all appropriate inquiries to hedge against financial risk. Starting in the New York market among banks and regional environmental consulting engineers, the term-of-choice evolved to be the Phase I Environmental Site Assessment.
In 1998 the necessity of performing a Phase I ESA was underscored by congressional action in passing the Superfund Cleanup Acceleration Act of 1998. This act requires purchasers of commercial property to perform a Phase I study meeting the specific standard of ASTM E1527: Standard Practice for Environmental Site Assessments: Phase I Environmental Site Assessment Process.
The most recent standard is "Standards and Practices for All Appropriate Inquiries" 40 Code of Federal Regulations, Section 312 which drew heavily from ASTM E1527-13, which is the ASTM Standard for conducting 'All Appropriate Inquiry' (AAI) for the environmental assessment of a real property. Previous guidances regarding the ASTM E1527 standard were ASTM E1527-97, ASTM E1527-00, and ASTM E1527-05.
Residential property purchasers are only required to conduct a site inspection and chain of title survey.
Triggering actions
A variety of reasons for a Phase I study to be performed exist, the most common being:
Purchase of real property by a person or entity not previously on title.
Contemplation by a new lender to provide a loan on the subject real estate.
Partnership buyout or principal redistribution of ownership.
Application to a public agency for change of use or other discretionary land use permit.
Existing property owner's desire to understand toxic history of the property.
Compulsion by a regulatory agency who suspects toxic conditions on the site.
Divestiture of properties.
Scope
Scrutiny of the land includes examination of potential soil contamination, groundwater quality, surface water quality, vapor intrusion, and sometimes issues related to hazardous substance uptake by biota. The examination of a site may include: definition of any chemical residues within structures; identification of possible asbestos containing building materials; inventory of hazardous substances stored or used on site; assessment of mold and mildew; and evaluation of other indoor air quality parameters.
Depending upon precise protocols utilized, there are a number of variations in the scope of a Phase I study. The tasks listed here are common to almost all Phase I ESAs:
Performance of an on-site visit to view present conditions (chemical spill residue, die-back of vegetation, etc.); hazardous substances or petroleum products usage (presence of above ground or underground storage tanks, storage of acids, etc.); and evaluate any likely environmentally hazardous site history.
Evaluation of risks of neighboring properties upon the subject property
Review of Federal, State, Local and Tribal Records out to distances specified by the ASTM 1528 and AAI Standards (ranging from 1/8 to 1 mile depending on the database)
Interview of persons knowledgeable regarding the property history (past owners, present owner, key site manager, present tenants, neighbors).
Examine municipal or county planning files to check prior land usage and permits granted.
Conduct file searches with public agencies (State water board, fire department, county health department, etc.) having oversight relative to water quality and soil contamination issues.
Examine historical aerial photography of the vicinity.
Examine current USGS maps to scrutinize drainage patterns and topography.
Examine chain-of-title for Environmental Liens and/or Activity and Land Use Limitations (AULs).
In most cases, the public file searches, historical research and chain-of-title examinations are outsourced to information services that specialize in such activities. Non-Scope Items in a Phase I Environmental Site Assessment can include visual inspections or records review searches for:
Asbestos Containing Building Materials (ACBM)
Lead-Based Paint
Lead in Drinking Water
Mold
Radon
Wetlands
Threatened and Endangered Species
Mercury poisoning
Debris flow
Earthquake Hazard
Vapor intrusion
Emerging contaminants
Observations of Non-scope Items can be reported as "findings" if requested by the report user, however, these items do not constitute recognized environmental conditions.
Preparers
Often a multi-disciplinary approach is taken in compiling all the components of a Phase I study, since skills in chemistry, atmospheric physics, geology, microbiology and even botany are frequently required. Many of the preparers are environmental scientists who have been trained to integrate these diverse disciplines. Many states have professional registrations which are applicable to the preparers of Phase I ESAs; for example, the state of California had a registration entitled "California Registered Environmental Assessor Class I or Class II" until July 2012, when it removed this REA certification program due to budget cuts.
Under ASTM E 1527-13 parameters were set forth as to who is qualified to perform Phase I ESAs. An Environmental Professional is someone with:
a current Professional Engineer's or Professional Geologist's license or registration from a state or U.S. territory with 3 years equivalent full-time experience; or
a Baccalaureate or higher degree from an accredited institution of higher education in a discipline of engineering or science and 5 years equivalent full-time experience; or
have the equivalent of 10 years full-time experience.
A person not meeting one or more of those qualifications may assist in the conduct of a Phase I ESA if the individual is under the supervision or responsible charge of a person meeting the definition of an Environmental Professional when concluding such activities.
Most site assessments are conducted by private companies independent of the owner or potential purchaser of the land.
Examples
While there are myriad sites that have been analyzed to date within the United States, the following list will serve as examples of the subject matter:
Auke Bay U.S. Postal Facility, Juneau, Alaska
Esso Canada Ltd. Former Bulk Fuels Facility, Owen Sound, Ontario, Canada
Dakin Building, Brisbane, California
East Elk Grove Specific Plan, Elk Grove, California
Mariners Marsh Park, Staten Island, New York
Richmond State Hospital Farm Industrial Park, Wayne County, Indiana
Sydney Steel Plant Lands, Sydney, Nova Scotia
Weyerhauser Technology Center, Federal Way, Washington
International context
In Japan, with the passage of the 2003 Soil Contamination Countermeasures Law, there is a strong movement to conduct Phase I studies more routinely. At least one jurisdiction in Canada (Ontario) now requires the completion of a Phase I prior to the transfer of some types of industrial properties. Some parts of Europe began to conduct Phase I studies on selected properties in the 1990s, but still lack the comprehensive attention given to virtually all major real estate transactions in the USA.
In the United Kingdom contaminated land regulation is outlined in the Environment Act 1995. The Environment Agency of England and Wales have produced a set of guidance; CLEA a standardized approach to the assessment of land contamination. A Phase 1 Desktop Study is often required in support of a planning application. These reports must be assembled by a "competent person".
Other environmental site assessment types
There are several other report types that have some resemblance in name or degree of detail to the Phase I Environmental Site Assessment:
Phase II Environmental Site Assessment is an "intrusive" investigation which collects original samples of soil, groundwater or building materials to analyze for quantitative values of various contaminants. This investigation is normally undertaken when a Phase I ESA determines a likelihood of site contamination. The most frequent substances tested are petroleum hydrocarbons, heavy metals, pesticides, solvents, asbestos and mold.
Phase III Environmental Site Assessment is an investigation involving remediation of a site. Phase III investigations aim to delineate the physical extent of contamination based on recommendations made in Phase II assessments. Phase III investigations may involve intensive testing, sampling, and monitoring, "fate and transport" studies and other modeling, and the design of feasibility studies for remediation and remedial plans. This study normally involves assessment of alternative cleanup methods, costs and logistics. The associated reportage details the steps taken to perform site cleanup and the follow-up monitoring for residual contaminants.
Limited Phase I Environmental Site Assessment is a truncated Phase I ESA, normally omitting one or more work segments such as the site visit or certain of the file searches. When the field visit component is deleted the study is sometimes called a Transaction Screen.
Environmental Assessment has little to do with the subject of hazardous substance liability, but rather is a study preliminary to an Environmental Impact Statement, which identifies environmental impacts of a land development action and analyzes a broad set of parameters including biodiversity, environmental noise, water pollution, air pollution, traffic, geotechnical risks, visual impacts, public safety issues and also hazardous substance issues.
An SBA Phase I Environmental Site Assessment is required for properties purchased through the United States Small Business Administration's 504 Fixed Asset Financing Program, which are subject to specific and often stricter due diligence requirements than regular real estate transactions. Due diligence requirements are determined according to the NAICS codes associated with the prior business use of the property. There are 58 specific NAICS codes that require Phase I investigations. These include, but are not limited to, funeral homes, dry cleaners, and gas stations. The SBA also requires a Phase II Environmental Site Assessment to be performed on any gas station that has been in operation for more than 5 years. The additional cost to perform this assessment cannot be included in the amount requested in the loan and adds significant costs to the borrower.
Freddie Mac/Fannie Mae Phase I Environmental Site Assessments are two specialized types of Phase I ESAs that are required when a loan is financed through Freddie Mac or Fannie Mae. The scopes of work are based on the ASTM E1527-05 Standard but have specific requirements including the following: the percent and scope of the property inspection; requirements for radon testing; asbestos and lead-based paint testing and operations-and-maintenance (O&M) plans to manage the hazards in place; lead in drinking water; and mold inspection. For condominiums, Fannie Mae requires a Phase I ESA anytime the initial underwriting analysis indicates environmental concerns.
HUD Phase I Environmental Site Assessment
The U.S. Department of Housing and Urban Development also requires a Phase I ESA for any condominium under construction that wishes to offer an FHA insured loan to potential buyers.
See also
Environmental remediation
Environmental scientist
Institute of Environmental Management and Assessment
References
Environmental economics
Environmental law in the United States
Environmental reports
American legal terminology
Property law in the United States
Soil contamination
Environmental law legal terminology | Phase I environmental site assessment | [
"Chemistry",
"Environmental_science"
] | 2,675 | [
"Environmental economics",
"Environmental chemistry",
"Soil contamination",
"Environmental social science"
] |
6,999,096 | https://en.wikipedia.org/wiki/Reset%20%28computing%29 | In a computer or data transmission system, a reset clears any pending errors or events and brings a system to normal condition or an initial state, usually in a controlled manner. It is usually done in response to an error condition when it is impossible or undesirable for a processing activity to proceed and all error recovery mechanisms fail. A computer storage program would normally perform a "reset" if a command times out and error recovery schemes like retry or abort also fail.
Software reset
A software reset (or soft reset) is initiated by software, for example when the Control-Alt-Delete key combination is pressed or when a restart is executed in Microsoft Windows.
Hardware reset
Most computers have a reset line that brings the device into the startup state and is active for a short time after powering on. For example, in the x86 architecture, asserting the RESET line halts the CPU; this is done after the system is switched on and before the power supply has asserted "power good" to indicate that it is ready to supply stable voltages at sufficient power levels. Reset places less stress on the hardware than power cycling, as the power is not removed. Many computers, especially older models, have user-accessible "reset" buttons that assert the reset line to facilitate a system reboot in a way that cannot be trapped (i.e. prevented) by the operating system; some mobile devices instead use a held combination of buttons. Devices without a dedicated reset button may require the user to hold the power button to cut power, after which the computer can be turned back on. Out-of-band management also frequently provides the possibility to reset the remote system in this way.
Many memory-capable digital circuits (flip-flops, registers, counters and so on) accept a reset signal that sets them to a predetermined state. This signal is often applied after powering on but may also be applied under other circumstances. After a hard reset, the registers of much of the hardware are cleared.
The ability for an electronic device to reset itself in case of error or abnormal power loss is an important aspect of embedded system design and programming. This ability can be observed with everyday electronics such as a television, audio equipment or the electronics of a car, which are able to function as intended again even after having lost power suddenly. A sudden and strange error with a device might sometimes be fixed by removing and restoring power, making the device reset. Some devices, such as portable media players, very often have a dedicated reset button as they are prone to freezing or locking up. The lack of a proper reset ability could otherwise possibly render the device useless after a power loss or malfunction.
User-initiated hard resets can be used to reset the device if the software hangs, crashes, or is otherwise unresponsive. However, data may become corrupted if this occurs. Generally, a hard reset is initiated by pressing a dedicated reset button. On some systems (e.g., the PlayStation 2 video game console), pressing and releasing the power button initiates a hard reset, while holding the button turns the system off.
Hardware reset in 80x86 IBM PC
The 8086 microprocessor provides a RESET pin that is used to perform a hardware reset. When a HIGH is applied to the pin, the CPU immediately stops and sets its major registers to fixed values: the code segment register (CS) is set to 0xFFFF, the instruction pointer (IP) to 0x0000, and the flags and remaining segment registers are cleared.
The CPU uses the values of the CS and IP registers to find the location of the next instruction to execute. The location of the next instruction is calculated using this simple equation:
Location of next instruction = (CS << 4) + IP
This implies that after the hardware reset, the CPU will start execution at the physical address 0xFFFF0. In IBM PC compatible computers, this address maps to BIOS ROM. The memory word at 0xFFFF0 usually contains a JMP instruction that redirects the CPU to execute the initialization code of the BIOS. This JMP instruction is the very first instruction executed after the reset.
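As an illustration of the real-mode address calculation above, here is a minimal sketch in Python; it assumes the documented reset values CS = 0xFFFF and IP = 0x0000 and is only a numeric demonstration, not an emulator.

```python
# Minimal sketch of the 8086 real-mode address calculation described above.
# The reset register values (CS = 0xFFFF, IP = 0x0000) are the documented
# 8086 reset state; this is a numeric illustration, not an emulator.

def physical_address(cs: int, ip: int) -> int:
    """Real-mode physical address: segment shifted left by 4 bits plus offset."""
    return ((cs << 4) + ip) & 0xFFFFF  # the 8086 has a 20-bit address bus

if __name__ == "__main__":
    reset_cs, reset_ip = 0xFFFF, 0x0000
    print(f"First instruction fetched from 0x{physical_address(reset_cs, reset_ip):05X}")
    # -> First instruction fetched from 0xFFFF0
```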
Hardware reset in later x86 CPUs
Later x86 processors reset the CS and IP registers similarly; see Reset vector.
Mac
Apple Mac computers allow various levels of resetting, including Control+Command+Eject, analogous to the three-finger salute (Control+Alt+Delete) on Windows computers.
See also
Abnormal end
Abort (computing)
CRIU
Hang (computing)
Power-on reset
Power-on self test
Reboot (computing)
Reset vector
References
Computing terminology | Reset (computing) | [
"Technology"
] | 912 | [
"Computing terminology"
] |
6,999,304 | https://en.wikipedia.org/wiki/List%20of%20chemical%20process%20simulators | This is a list of software used to simulate the material and energy balances of chemical process plants. Applications for this include design studies, engineering studies, design audits, debottlenecking studies, control system check-out, process simulation, dynamic simulation, operator training simulators, pipeline management systems, production management systems, digital twins.
See also
Chemical engineering
Process simulation
Process engineering
References
Seader, J.D., Seider, W.D. and Pauls, A.C.: Flowtran Simulation – An Introduction, 2nd Edition, CACHE (1977).
Douglas, J.M.: Conceptual Design of Chemical Processes, McGraw-Hill, NY, USA (1988).
Smith, R., Chemical process Design and Integration, Wiley, Chichester, UK (2005).
Seider, W.D., Seader, J.D., Lewin, D.R. and Widagdo, S. Product and Process Design Principles: Synthesis, Analysis and Design, 3rd Ed., Wiley, Hoboken, NJ, USA (2015)
Chemical engineering software
Chemical Process Simulators | List of chemical process simulators | [
"Chemistry",
"Technology",
"Engineering"
] | 230 | [
"Lists of software",
"Computational chemistry software",
"Chemistry software",
"Computing-related lists",
"Chemical engineering",
"Computational chemistry",
"Chemical engineering software"
] |
8,648,594 | https://en.wikipedia.org/wiki/Consolidated%20Engineering%20Corporation | Consolidated Engineering Corporation was a chemical instrument manufacturer from 1937 to 1960 when it became a subsidiary of Bell and Howell Corp.
History
CEC was founded in 1937 by Herbert Hoover Jr., eldest son of former United States president Herbert Hoover, as sole proprietor. Harold Washburn was hired in 1938 as VP for Research, with a mandate to develop instruments applicable to petroleum prospecting.
Like his father, Hoover had trained as a mining engineer at Stanford University, studying under Washburn. He earned a PhD in Electrical Engineering from California Institute of Technology in 1932. His thesis professor was Ernest Lawrence, a physicist at the University of California, Berkeley. Four physicists from California Institute of Technology were hired into the Research Department in a project to develop a mass spectrometer. The initial product was the 21-101 Mass Spectrometer, delivered in December 1942 and installed in early 1943, at an initial price of $12,000 with no options.
CEC became a publicly held corporation in 1945, with Hoover selling all of his stock. Philip Fogg became President. The name changed to Consolidated Electrodynamics Corp. in 1955, because some states required that a service engineer for an engineering company be a licensed engineer in that state.
The mass spectrometer products and other analytical instrument products were separated from other product lines in a “Chemical Instruments” marketing department sometime between 1945 and 1948 with Harold Wiley as Manager for Chemical Instruments. The Chemical Instruments Department became the Analytical and Control Division in about 1959 with Harold Wiley as General Manager. This name was later changed to the Analytical Instruments Div.
Acquisition by Bell
CEC became a subsidiary of Bell & Howell in 1960. In 1968 the CEC Corporation was dissolved and CEC became the Electronics Instrument Group of Bell and Howell. In the mid-1970s the Analytical Instruments Division of Bell and Howell was sold to the Instrument Division of duPont.
Over the years, mass spectrometry proved to be a widely used and powerful analytical technique and a variety of laboratory instruments became available from several companies. DuPont abandoned the analytical instruments business in the late 1970s, however, CEC's mass spectrometer heritage did not end there.
Consolidated Systems Corporation
In the mid-1950s, CEC had split off a subsidiary, Consolidated Systems Corporation, to produce custom instruments and systems. Lawrence G. Hall carried CEC mass spectrometer know-how to CSC and led their team to put the first mass spectrometer in space on a National Aeronautics and Space Administration upper atmosphere research satellite, Explorer 17, in 1963. Nine more satellites and the Pioneer Venus spacecraft carried CSC magnetic sector and quadrupole mass spectrometer analyzers built for NASA's Goddard Space Flight Center.
In 1967, this business became Perkin-Elmer Corporation's Applied Sciences Division (ASD), located in Pomona. ASD mass spectrometers monitored the respiratory function of returning Apollo astronauts and were evaluated in NASA and U.S. Navy test programs for manned atmosphere monitoring. They were deployed on Skylab (the first U.S. space laboratory), Apollo–Soyuz (the first joint U.S.–U.S.S.R. space mission), Space Shuttle/Spacelab flights and two more USN submarines. ASD research instruments also flew on two Mars Viking Landers in 1976, analyzing the Martian atmosphere and searching for chemical signs of life in its soil.
In the early 1970s, General Manager Bliss M. Bushman led ASD's expansion as a manufacturer of mass spectrometer-based submarine atmosphere monitors and commercial products. Their Central Atmosphere Monitoring System, now in its third generation, has been standard equipment on U.S. Navy submarines for over three decades.
Their commercial industrial chemical monitors are sold throughout the world today under Hamilton Sundstrand's Applied Instrument Technologies banner. They are deployed in the petrochemical, pharmaceutical, steel and oil refining industries, among others.
ASD became Orbital Sciences Corporation's Sensor Systems Division in 1993 and developed the Major Constituent Analyzer for the International Space Station's atmosphere. SSD was sold again in 2001, becoming Hamilton Sundstrand Space, Land and Sea, Pomona Site, a few months after SSD's MCA began continuous on-orbit operation aboard Space Station. The company is updating and expanding the MCA for Orion, NASA's new manned spacecraft.
Along with Oak Ridge National Laboratories, they developed an ion trap mass spectrometer chemical detection system for chemical warfare agents, and these units are now being deployed on U.S. Army reconnaissance vehicles. When fitted for bio-aerosol sampling, CBMS II has also demonstrated effective biological warfare agent detection.
References
Mass spectrometry
Companies based in Los Angeles County, California
Defunct manufacturing companies based in California | Consolidated Engineering Corporation | [
"Physics",
"Chemistry"
] | 963 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
8,648,906 | https://en.wikipedia.org/wiki/Critical%20hours | Critical hours for radio stations is the time from sunrise to two hours after sunrise, and from two hours before sunset until sunset, local time. During this time, certain American radio stations may be operating with reduced power as a result of Section 73.187 of the Federal Communications Commission's rules.
Canadian restricted hours are similar to critical hours, except that the restriction results from the January 17, 1984, U.S.-Canadian AM Agreement. Canadian restricted hours are called "critical hours" in the U.S.-Canadian Agreement, but in the AM Engineering database, the FCC calls them "Canadian restricted hours" to distinguish them from the domestically defined critical hours. Canadian restricted hours is that time from sunrise to one and one-half hours after sunrise, and from one and one-half hours before sunset until sunset, local time. U.S. stations operate with restricted hours because of Canadian stations, and vice versa.
Those radio stations that must lower their power during the critical hours are required to do so because this is when the propagation of radio waves changes from groundwave to skywave (at sunset) or vice versa (at sunrise). This can cause radio stations to be picked up much farther away, possibly causing interference with other stations on the same frequency or adjacent frequencies. Usually stations operating under the restrictions of Critical Hours must sign off the air between the end of the evening critical hours and the beginning of the morning critical hours. In effect, permission to operate during critical hours gives daytime-only stations a few more hours in their broadcast day. This is especially important in autumn and winter, when these stations might otherwise need to be off the air during the important morning and afternoon drive times, when AM radio listening is at its highest.
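As a rough illustration of how these windows follow from local sunrise and sunset, here is a minimal Python sketch; the sunrise and sunset times below are placeholder values, and real stations would use the officially published sunrise/sunset times for their own location.

```python
from datetime import datetime, timedelta

# Minimal sketch of the time windows defined above. The sunrise and sunset
# values are placeholder inputs; stations would use the officially published
# sunrise/sunset times for their own location.

def restricted_windows(sunrise: datetime, sunset: datetime, hours: float = 2.0):
    """Return ((morning_start, morning_end), (evening_start, evening_end)).

    hours=2.0 corresponds to U.S. critical hours; hours=1.5 corresponds to
    the Canadian restricted hours described above.
    """
    delta = timedelta(hours=hours)
    return (sunrise, sunrise + delta), (sunset - delta, sunset)

if __name__ == "__main__":
    sunrise = datetime(2024, 1, 15, 7, 30)   # placeholder local sunrise
    sunset = datetime(2024, 1, 15, 17, 15)   # placeholder local sunset
    morning, evening = restricted_windows(sunrise, sunset)
    print("Morning critical hours:", morning[0].time(), "to", morning[1].time())
    print("Evening critical hours:", evening[0].time(), "to", evening[1].time())
```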
See also
Pre-sunrise and post-sunset authorization
References
Broadcast engineering | Critical hours | [
"Engineering"
] | 365 | [
"Broadcast engineering",
"Electronic engineering"
] |
8,649,736 | https://en.wikipedia.org/wiki/Varicella%20vaccine | Varicella vaccine, also known as chickenpox vaccine, is a vaccine that protects against chickenpox. One dose of vaccine prevents 95% of moderate disease and 100% of severe disease. Two doses of vaccine are more effective than one. If given to those who are not immune within five days of exposure to chickenpox it prevents most cases of the disease. Vaccinating a large portion of the population also protects those who are not vaccinated. It is given by injection just under the skin. Another vaccine, known as zoster vaccine, is used to prevent diseases caused by the same virus – the varicella zoster virus.
The World Health Organization (WHO) recommends routine vaccination only if a country can keep more than 80% of people vaccinated. If only 20% to 80% of people are vaccinated, it is possible that more people will get the disease at an older age and outcomes overall may worsen. Either one or two doses of the vaccine are recommended. In the United States two doses are recommended starting at twelve to fifteen months of age. Twenty-three countries recommend that all non-medically exempt children receive the vaccine, nine recommend it only for high-risk groups, and three additional countries recommend use in only parts of the country, while other countries make no recommendation. Not all countries provide the vaccine due to its cost. In the United Kingdom, Varilrix, a live viral vaccine, is approved from the age of 12 months but only recommended for certain at-risk groups.
Minor side effects may include pain at the site of injection, fever, and rash. Severe side effects are rare and occur mostly in those with poor immune function. Its use in people with HIV/AIDS should be done with care. It is not recommended during pregnancy; however, the few times it has been given during pregnancy no problems resulted. The vaccine is available either by itself or along with the MMR vaccine, in a version known as the MMRV vaccine. It is made from weakened virus.
A live attenuated varicella vaccine, the Oka strain, was developed by Michiaki Takahashi and his colleagues in Japan in the early 1970s. American vaccinologist Maurice Hilleman's team developed a chickenpox vaccine in the United States in 1981, based on the "Oka strain" of the varicella virus. The chickenpox vaccine first became commercially available in 1984. It is on the WHO Model List of Essential Medicines.
Medical uses
Varicella vaccine is 70% to 90% effective for preventing varicella and more than 95% effective for preventing severe varicella. Follow-up evaluations of immunized children in the United States revealed protection for at least 11 years, and studies conducted in Japan indicated protection for at least 20 years.
People who do not develop enough protection when they get the vaccine may develop a mild case of the disease when in close contact with a person with chickenpox. In these cases, people show very little sign of illness. This has been the case of children who get the vaccine in their early childhood and later have contact with children with chickenpox. Some of these children may develop mild chickenpox also known as breakthrough disease.
Another vaccine, known as zoster vaccine, is simply a larger-than-normal dose of the same vaccine used against chickenpox and is used in older adults to reduce the risk of shingles (also called herpes zoster) and postherpetic neuralgia, which are caused by the same virus. The recombinant zoster (shingles) vaccine is recommended for adults aged 50 years and older.
Duration of immunity
The long-term duration of protection from varicella vaccine is unknown, but there are now persons vaccinated twenty years ago with no evidence of waning immunity, while others have become vulnerable in as few as six years. Assessments of the duration of immunity are complicated in an environment where natural disease is still common, which typically leads to an overestimation of effectiveness.
Some vaccinated children have been found to lose their protective antibodies in as little as five to eight years. However, according to the World Health Organization (WHO): "After observation of study populations for periods of up to 20 years in Japan and 10 years in the United States, more than 90% of immunocompetent persons who were vaccinated as children were still protected from varicella." However, since only one out of five Japanese children were vaccinated, the annual exposure of these vaccinees to children with natural chickenpox boosted the vaccinees' immune system. In the United States, where universal varicella vaccination has been practiced, the majority of children no longer receive exogenous (outside) boosting, thus, their cell-mediated immunity to VZV (varicella zoster virus) wanes – necessitating booster chickenpox vaccinations. As time goes on, boosters may be necessary. Persons exposed to the virus after vaccination tend to experience milder cases of chickenpox if they develop the disease.
Chickenpox
Prior to the widespread introduction of the vaccine in the United States in 1995 (1986 in Japan and 1988 in Korea), there were around 4,000,000 cases per year in the United States, mostly in children, with typically 10,500–13,000 hospital admissions (range, 8,000–18,000), and 100–150 deaths each year. Most of the deaths were among young children.
During 2003, and the first half of 2004, the CDC reported eight deaths from varicella, six of whom were children or adolescents. These deaths and hospital admissions have substantially declined in the US due to vaccination, though the rate of shingles infection has increased as adults are less exposed to infected children (which would otherwise help protect against shingles). Ten years after the vaccine was recommended in the US, the CDC reported as much as a 90% drop in chickenpox cases, a varicella-related hospital admission decline of 71% and a 97% drop in chickenpox deaths among those under 20.
Vaccines are less effective among high-risk patients, as well as being more dangerous because they contain attenuated live viruses. In a study performed on children with an impaired immune system, 30% had lost the antibody after five years, and 8% had already caught wild chickenpox in those five years.
Herpes zoster
Herpes zoster (shingles) most often occurs in the elderly and is only rarely seen in children. The incidence of herpes zoster in vaccinated adults is 0.9/1000 person-years, and is 0.33/1000 person-years in vaccinated children; this is lower than the overall incidence of 3.2–4.2/1000 person-years.
The risk of developing shingles is reduced for children who receive the varicella vaccine, but not eliminated. The CDC stated in 2014: "Chickenpox vaccines contain weakened live VZV, which may cause latent (dormant) infection. The vaccine-strain VZV can reactivate later in life and cause shingles. However, the risk of getting shingles from vaccine-strain VZV after chickenpox vaccination is much lower than getting shingles after natural infection with wild-type VZV."
The risk of shingles is significantly lower among children who have received varicella vaccination, including those who are immunocompromised. The risk of shingles is approximately 80% lower among healthy vaccinated children compared to unvaccinated children who had wild-type varicella. A population with high varicella vaccination also has lower incidence of shingles in unvaccinated children, due to herd immunity.
Schedule
The WHO recommends one or two doses with the initial dose given at 12 to 18 months of age. The second dose, if given, should occur at least one to three months later. The second dose, if given, provides the additional benefit of improved protection against all varicella. This vaccine is a shot given subcutaneously (under the skin). It is recommended for all children under 13 and for everyone 13 or older who has never had chickenpox.
In the United States, two doses are recommended by the CDC. For a routine vaccination, the first dose is administered at 12 to 15 months of age and the second dose at age 4–6 years. However, the second dose can be given as early as 3 months after the first dose. If an individual misses the timing for the routine vaccination, the individual is eligible to receive a catch-up vaccination. For a catch-up vaccination, individuals between 7 and 12 years old should receive a two-dose series 3 months apart (a minimum interval of 4 weeks). For individuals 13–18 years old, the catch-up vaccination should be given 4 to 8 weeks apart (a minimum interval of 4 weeks). The varicella vaccine did not become widely available in the United States until 1995.
In the UK, the vaccine is only available on the National Health Service for those who are in close contact with someone who is particularly vulnerable to chickenpox. As there is an increased risk of shingles in adults due to possible lack of contact with chickenpox-infected children providing a natural boosting to immunity, and the fact that chickenpox is usually a mild illness, the NHS cites concerns about unvaccinated children catching chickenpox as adults when it is more dangerous. However, the vaccine is approved for 12 months and up and is available privately, with a second dose to be given a year after the first.
Contraindications
The varicella vaccine is not recommended for seriously ill people, pregnant women, people who have tuberculosis, people who have experienced a serious allergic reaction to the varicella vaccine in the past, people who are allergic to gelatin, people allergic to neomycin, people receiving high doses of steroids, people receiving treatment for cancer with x-rays or chemotherapy, as well as people who have received blood products or transfusions during the past five months. Additionally, the varicella vaccine is not recommended for people who are taking salicylates (e.g. aspirin). After receiving the varicella vaccine, the use of salicylates should be avoided for at least six weeks. The varicella vaccine is also not recommended for individuals who have received a live vaccine in the last four weeks, because live vaccines that are administered too soon within one another may not be as effective. It may be usable in people with HIV infections who have a good blood count and are receiving appropriate treatment. Specific antiviral medication, such as acyclovir, famciclovir, or valacyclovir, are not recommended 24 hours before and 14 days after vaccination.
Side effects
Serious side effects are very rare. From 1998 to 2013, only one vaccine-related death was reported: an English child with pre-existing leukemia. On some occasions, severe reactions such as meningitis and pneumonia have been reported (mainly in inadvertently vaccinated immunocompromised children), as well as anaphylaxis.
The possible mild side effects include redness, stiffness, and soreness at the injection site, as well as fever. A few people may develop a mild rash, which usually appears around the injection site.
There is a short-term risk of developing herpes zoster (shingles) following vaccination. However, this risk is less than the risk due to a natural infection resulting in chickenpox. Most of the cases reported have been mild and have not been associated with serious complications.
Approximately 5% of children who receive the vaccine develop a fever or rash. Adverse reaction reports for the period 1995 to 2005 found no deaths attributed to the vaccine despite approximately 55.7 million doses being delivered. Cases of vaccine-related chickenpox have been reported in patients with a weakened immune system, but no deaths.
The literature contains several reports of adverse reactions following varicella vaccination, including vaccine-strain zoster in children and adults.
History
The varicella-zoster vaccine is made from the Oka/Merck strain of live attenuated varicella virus. The Oka virus was initially obtained from a child with natural varicella, introduced into human embryonic lung cell cultures, adapted to and propagated in embryonic guinea pig cell cultures, and finally propagated in a human diploid cell line originally derived from fetal tissues (WI-38). Takahashi and his colleagues used the Oka strain to develop a live attenuated varicella vaccine in Japan in the early 1970s. This strain was further developed by pharmaceutical companies such as Merck & Co. and GlaxoSmithKline. American vaccinologist Maurice Hilleman's team at Merck then used the Oka strain to prepare a chickenpox vaccine in 1981.
Japan was among the first countries to vaccinate for chickenpox. The vaccine developed by Hilleman was first licensed in the United States in 1995. Routine vaccination against varicella zoster virus is also performed in the United States, and the incidence of chickenpox has been dramatically reduced there (from four million cases per year in the pre-vaccine era to approximately 390,000 cases per year).
Standalone varicella vaccines are available in all 27 European Union member countries, and 16 countries also offer a combined measles, mumps, rubella, and varicella vaccine (MMRV). Twelve European countries (Austria, Andorra, Cyprus, Czech Republic, Finland, Germany, Greece, Hungary, Italy, Latvia, Luxembourg and Spain) have universal varicella vaccination (UVV) policies, though only six of these countries have made it available at no cost via government funding. EU member states that have not implemented UVV cite reasons such as "a perceived low disease burden and low public health priority," the cost and cost-effectiveness, the possible risk of herpes zoster when vaccinating older adults, and rare fevers leading to seizures after the first dose of the MMRV vaccine. "Countries that implemented UVV experienced decreases in varicella incidence, hospitalizations, and complications, showing overall beneficial impact."
Varicella vaccination is recommended in Canada for all healthy children aged 1 to 12, as well as susceptible adolescents and adults 50 years of age and younger; "may be considered" for people with select immunodeficiency disorders; and "should be prioritized" for susceptible individuals, including "non-pregnant women of childbearing age, household contacts of immunocompromised individuals, members of a household expecting a newborn, health care workers, adults who may be exposed occupationally to varicella (for example, people who work with young children), immigrants and refugees from tropical regions, people receiving chronic salicylate therapy (for example, acetylsalicylic acid [ASA])," and others.
Australia has adopted recommendations for routine immunization of children and susceptible adults against chickenpox.
Other countries, such as the United Kingdom, have targeted recommendations for the vaccine, e.g., for susceptible healthcare workers at risk of varicella exposure. In the UK, varicella antibodies are measured as part of the routine of prenatal care, and by 2005 all National Health Service personnel had determined their immunity and been immunized if they were non-immune and had direct patient contact. Population-based immunization against varicella is not otherwise practised in the UK.
Since 2013, the MMRV vaccine has been offered for free to all Brazilian citizens.
Society and culture
Catholic Church
The use of fetal tissue in vaccine development is the practice of researching, developing, and producing vaccines through growing viruses in cultured (laboratory-grown) cells that were originally derived from human fetal tissue. Since the cell strains in use originate from abortions, there has been some opposition to the practice and the resulting vaccines on religious and moral grounds.
The Roman Catholic Church is opposed to abortion. Nevertheless, the Pontifical Academy for Life stated in 2017 that "clinically recommended vaccinations can be used with a clear conscience and that the use of such vaccines does not signify some sort of cooperation with voluntary abortion". On 21 December 2020, the Vatican's doctrinal office, the Congregation for the Doctrine of the Faith, further clarified that it is "morally acceptable" for Catholics to receive vaccines derived from fetal cell lines or in which such lines were used in testing or development, because "passive material cooperation in the procured abortion from which these cell lines originate is, on the part of those making use of the resulting vaccines, remote" and "does not and should not in any way imply that there is a moral endorsement of the use of cell lines proceeding from aborted fetuses".
References
Further reading
External links
Chickenpox
Live vaccines
Vaccines
Drugs developed by Merck & Co.
World Health Organization essential medicines (vaccines)
Wikipedia medicine articles ready to translate
Japanese inventions | Varicella vaccine | [
"Biology"
] | 3,571 | [
"Vaccination",
"Vaccines"
] |
8,650,527 | https://en.wikipedia.org/wiki/Antonia%20Juhasz | Antonia Juhasz (born 1970) is an American oil and energy analyst, author, journalist and activist. She has authored three books: The Bush Agenda (2006), The Tyranny of Oil (2008), and Black Tide (2011).
Education
Juhasz earned her undergraduate degree in Public Policy at Brown University. She then earned her M.A. degree in Public Policy from Georgetown University.
Career
Juhasz received grants in 2014-2015 and 2013-2014 from the Max & Anna Levinson Foundation to support her ongoing work in investigative journalism in the oil and energy sectors with Media Alliance and the Investigative Reporting Program, respectively. Juhasz was a 2012-2013 Investigative Journalism Fellow at the Investigative Reporting Program, a working news room at the Graduate School of Journalism at the University of California, Berkeley. She investigated the role of oil and natural gas in the Afghanistan war.
Juhasz is a contributing writer to Rolling Stone and Harper's magazines, among other outlets.
Juhasz is also a reporter with the Investigative Fund of The Nation Institute.
According to information at her website, Juhasz has taught at the New College of California in the Activism and Social Change Masters Program and as a guest lecturer on U.S. Foreign Policy at the McMaster University Labour Studies Program in a unique educational program with the Canadian Automobile Workers Union.
As project director of the International Forum on Globalization, in 1999 Juhasz worked to inform the public about the World Trade Organization, an effort which helped build activism culminating in the 1999 Seattle WTO protests.
Juhasz worked as a legislative assistant in Washington, DC, for two U.S. members of Congress: John Conyers, Jr. (D-MI) and Elijah E. Cummings (D-MD).
Other positions
Founder and former Director of the Energy Program at Global Exchange
National Advisory Board Member, Iraq Veterans Against the War
Board Member, GI Voice/ Coffee Strong
Senior Policy Analyst, Foreign Policy in Focus
Associate Fellow, Institute for Policy Studies
Fellow, Oil Change International
Director, International Trade Program, American Lands Alliance
Writing
Juhasz is the author of three books. She wrote The Bush Agenda: Invading the World One Economy at a Time in 2006. The Georgia Straight of Canada said it was "One of the crispest, most insightful books yet to expose the Bush regime."
Juhasz's The Tyranny of Oil: the World's Most Powerful Industry and What We Must Do To Stop It (HarperCollins 2008) received the 2009 San Francisco Library Laureate Award. USA Today wrote, Juhasz "reminds us that those who don't learn the lessons of history are fated to repeat its mistakes." Kirkus Reviews finds it a "timely, blistering critique... white-hot... Explosive fuel for the raging debate on oil prices."
Her 2011 book, Black Tide: the Devastating Impact of the Gulf Oil Spill examined the human impact of the Deepwater Horizon oil spill. It was praised by Ms. magazine, which called it "masterfully reported," and by Mother Jones magazine, which said the writing was "both engaging and informative."
Juhasz is the lead author and editor of The True Cost of Chevron: An Alternative Annual Report, for which she received a 2010 Project Censored Award.
Activism
Juhasz provided testimony at the Iraq Veterans Against the War—Winter Soldier: Iraq & Afghanistan in Silver Spring, Maryland in March 2008; at the Citizens Hearing on the Legality of U.S. Actions in Iraq in support of Lt. Ehren Watada in Tacoma, Washington, in January 2007; and to the New York Session of the World Tribunal on Iraq in May 2004.
On 26 May 2010, Antonia Juhasz was removed from the Chevron Corporation shareholders' meeting in Houston and then arrested outside the meeting venue. According to people at the meeting, this happened after Juhasz blasted Chevron's environmental record and then, together with a few other activists, chanted "Chevron lies" for several minutes. According to Juhasz, she was charged with criminal trespass and disrupting a meeting, and was incarcerated for a twenty-four-hour period.
Project Censored awarded Juhasz Top 25 in 2005 for "Ambitions of Empire: The Radical Reconstruction of Iraq's Economy". In 2007 Peace Action placed Juhasz on their Women Peacemakers Honor Roll, "For women who have made a unique and lasting contribution to work for peace and justice in the world."
See also
Petroleum politics
References
Notes
Bibliography
Books
The Bush Agenda: Invading the World, One Economy at a Time. (HarperCollins, 2006)
The Tyranny of Oil: The World's Most Powerful Industry—and What We Must Do to Stop It. (HarperCollins, 2008)
Black Tide: the Devastating Impact of the Gulf Oil Spill (Wiley, 2011)
Articles
"Investigation: Two Years After the BP Spill, A Hidden Health Crisis Festers," The Nation, May 7, 2012
"The Deepwater Horizon Spill, Four Years On," Harper's, April 1, 2014
"Why Oil Drilling in Ecuador is 'Ticking Time Bomb' For Planet," CNN.com, February 28, 2014
"What’s Wrong with Exxon?" The Advocate Magazine, October/November cover article for The Advocate. Nominated for a GLAAD 2013 Media Award for Outstanding Magazine Article.
"Big Oil’s Big Lies About Alternatives," Rolling Stone, June 25, 2013
"Light, Sweet, Crude: A former US ambassador peddles influence in Afghanistan," Harper's Magazine, April 22, 2013
"Chevron's Refinery, Richmond's Peril," Los Angeles Times, August 14, 2012
"BP vs. Gulf Coast: It's Not Settled Yet," The Nation, March 6, 2012
"BP Oil Still Tars the Gulf," April 2012 issue Cover article, The Progressive
"Afghanistan's Energy War," Antonia Juhasz & Shukria Dellawar, Foreign Policy in Focus, October 5, 2011
"How far should we let Big Oil go?," The Guardian of London, May 24, 2010
"Whose Oil Is It, Anyway?," New York Times, March 13, 2007
External links
Antonia Juhasz: 'Tyranny of Oil' Is A Grave Threat" - Interview by NPR's Fresh Air with Terry Gross
Antonia Juhasz: "BP’s Missing Oil Washes Up in St. Mary’s Parish, LA" - video report by Democracy Now!
Huffington Post blog
"The True Cost of Chevron: An Alternative Annual Report" (2009-2011)
Living people
American anti-globalization writers
American environmentalists
American women environmentalists
American people of Hungarian descent
Anti-corporate activists
Anti-globalization activists
Brown University alumni
Deepwater Horizon oil spill
Georgetown University alumni
Petroleum politics
HuffPost writers and columnists
American women columnists
American women science writers
21st-century American women writers
1970 births | Antonia Juhasz | [
"Chemistry"
] | 1,425 | [
"Petroleum",
"Petroleum politics"
] |
8,650,889 | https://en.wikipedia.org/wiki/Flibanserin | Flibanserin, sold under the brand name Addyi, is a medication approved for the treatment of pre-menopausal women with hypoactive sexual desire disorder (HSDD). The medication improves sexual desire, increases the number of satisfying sexual events, and decreases the distress associated with low sexual desire. The most common side effects are dizziness, sleepiness, nausea, difficulty falling asleep or staying asleep and dry mouth.
Development by Boehringer Ingelheim was halted in October 2010, following a negative evaluation by the US Food and Drug Administration (FDA). The rights to the drug were then transferred to Sprout Pharmaceuticals, which achieved approval of the drug by the US FDA in August 2015.
Addyi is approved for medical use in the US for premenopausal women with HSDD and in Canada for premenopausal and postmenopausal women with HSDD.
HSDD was recognized as a distinct sexual function disorder for more than 30 years, but was removed from the Diagnostic and Statistical Manual of Mental Disorders in 2013, and replaced with a new diagnosis called female sexual interest/arousal disorder (FSIAD).
Medical uses
Flibanserin is used for hypoactive sexual desire disorder among women. The onset of the flibanserin effect was seen from the first timepoint measured after 4 weeks of treatment and maintained throughout the treatment period.
The effectiveness of flibanserin was evaluated in three phase 3 clinical trials. Each of the three trials had two co-primary endpoints, one for satisfying sexual events (SSEs) and the other for sexual desire. Each of the 3 trials also had a secondary endpoint that measured distress related to sexual desire. All three trials showed that flibanserin produced an increase in the number of SSEs and reduced distress related to sexual desire. The first two trials used an electronic diary to measure sexual desire, and did not find an increase. These two trials also measured sexual desire using the Female Sexual Function Index (FSFI) as a secondary endpoint, and an increase was observed using this latter measure. The FSFI was used as the co-primary endpoint for sexual desire in the third trial, and again showed a statistically significant increase.
Supportive analyses based on the patient's perspective of her symptoms at the end of the study showed that improvements in symptoms of HSDD were not only statistically significant but also clinically meaningful to women.
Side effects
The majority of adverse events were mild to moderate in severity. The most commonly reported adverse events included dizziness, nausea, feeling tired, sleepiness, and trouble sleeping.
Drinking alcohol while on flibanserin may increase the risk of severe low blood pressure. The Addyi Prescribing Information was updated in 2019 following the FDA's review of three postmarketing alcohol interaction studies which led to increased understanding of this drug interaction. This new data led to a removal of the contraindication with alcohol and new recommendations on how to safely consume alcohol while receiving Addyi therapy.
Current recommendations are to wait at least two hours after consuming one or two standard alcoholic drinks before taking Addyi at bedtime, or to skip the Addyi dose if three or more standard alcoholic drinks have been consumed that evening.
Mechanism of action
Activity profile
Flibanserin acts on the 5-HT1A receptor (a serotonin receptor) as a full agonist in the frontal cortex and the dorsal raphe nucleus, but only as a partial agonist in the CA3 region of the hippocampus (Ki = 1 nM in cells, but only 15–50 nM in cortex, hippocampus and dorsal raphe). With lower affinity, it acts as an antagonist of the 5-HT2A receptor (Ki = 49 nM) and as an antagonist or very weak partial agonist of the D4 receptor (Ki = 4–24 nM, Ki = 8–650 nM). Despite the much greater affinity of flibanserin for the 5-HT1A receptor, and for reasons that are unknown (although it might be caused by competition with endogenous serotonin), flibanserin occupies the 5-HT1A and 5-HT2A receptors in vivo to similar extents. Flibanserin also has low affinity for the 5-HT2B receptor (Ki = 89.3 nM) and the 5-HT2C receptor (Ki = 88.3 nM), at both of which it behaves as an antagonist. Flibanserin preferentially activates 5-HT1A receptors in the prefrontal cortex, demonstrating regional selectivity, and has been found to increase dopamine and norepinephrine levels and decrease serotonin levels in the rat prefrontal cortex, actions that were determined to be mediated by activation of the 5-HT1A receptor. As such, flibanserin has been described as a norepinephrine–dopamine disinhibitor (NDDI).
The proposed mechanism of action refers to the Kinsey dual control model of sexual response. Various neurotransmitters, sex steroids, and other hormones have important excitatory or inhibitory effects on the sexual response. Among neurotransmitters, excitatory activity is driven by dopamine and norepinephrine, while inhibitory activity is driven by serotonin. The balance between these systems is of significance for a normal sexual response. By modulating serotonin and dopamine activity in certain parts of the brain, flibanserin may improve the balance between these neurotransmitter systems in the regulation of sexual response.
Society and culture
Flibanserin was originally developed as an antidepressant, but was found to have pro-sexual effects and was later repurposed for the treatment of HSDD.
Names
The brand name is Addyi.
Approval process and advocacy
In June 2010, a federal advisory panel to the US Food and Drug Administration (FDA) unanimously voted against recommending approval of flibanserin, citing an inadequate risk-benefit ratio. The Committee acknowledged the validity of hypoactive sexual desire as a diagnosis, but expressed concern with the drug's side effects and insufficient evidence for efficacy, especially the drug's failure to show a statistically significant effect on the co-primary endpoint of sexual desire. Earlier in the week, a FDA staff report also recommended non-approval of the drug. Ahead of the votes, Boehringer Ingelheim had mounted a publicity campaign to promote the controversial disorder of "hypoactive sexual desire". In 2010 the FDA issued a Complete Response Letter, stating that the New Drug Application could not be approved in its current form. The letter cited several concerns, including the failure to demonstrate a statistical effect on the co-primary endpoint of sexual desire and overly restrictive entry criteria for the two Phase 3 trials. The Agency recommended performing a new Phase 3 trial with less restrictive entry criteria. On 8 October 2010, Boehringer announced that it would discontinue its development of flibanserin in light of the FDA's decision.
Sprout responded to the FDA's cited deficiencies and refiled the NDA in 2013. The submission included data from a new Phase 3 trial and several Phase 1 drug-drug interaction studies. The FDA again refused the application, citing an uncertain risk/benefit ratio. In December 2013, a Formal Dispute Resolution was filed, which contained the requirements of the FDA for further studies. These include two studies in healthy subjects to determine if flibanserin impairs their ability to drive, and to determine if it interferes with other biochemical pathways. The Agency agreed to call a new Advisory Committee meeting to consider whether the risk-benefit ratio of flibanserin was favorable after this additional data was obtained. Sprout expected to resubmit the New Drug Application (NDA) in the 3rd quarter of 2014.
In June 2015, the US FDA Advisory Committee, which includes the Bone, Reproductive, and Urologic Drugs Advisory Committee (BRUDAC) and the Drug Safety and Risk Management Advisory Committee (DSRM), recommended approval of the drug by 18–6, with the proviso that measures be taken to inform women of the drug's side effects. On 18 August 2015, the FDA approved Addyi (Flibanserin) for the treatment of premenopausal women with low sexual desire that causes personal distress or relationship difficulties. The approval specified that flibanserin should not be used to treat low sexual desire caused by co-existing psychiatric or medical problems; low sexual desire caused by problems in the relationship; or low sexual desire due to medication side effects.
As of 21 August 2015, The Pharmaceutical Journal reported that Sprout Pharmaceuticals had not yet made an application to the European Medicines Agency for a marketing authorisation.
Advocacy groups
Even the Score, a coalition of women's groups brought together by a Sprout consultant, actively campaigned for the approval of flibanserin. The campaign emphasized that several approved treatments for male sexual dysfunction exist, while no such treatment for women was available. The group successfully obtained letters of support from the President of the National Organization for Women, the editor of the Journal of Sexual Medicine, and several members of Congress.
Other organizations supporting the approval of flibanserin included the National Council of Women's Organizations, the Black Women's Health Imperative, the Association of Reproductive Health Professionals, National Consumers League, and the American Sexual Health Association.
The approval was opposed by the National Women's Health Network, the National Center for Health Research and Our Bodies Ourselves. A representative of PharmedOut said "To approve this drug will set the worst kind of precedent — that companies that spend enough money can force the FDA to approve useless or dangerous drugs." An editorial in JAMA noted that, "Although flibanserin is not the first product to be supported by a consumer advocacy group in turn supported by pharmaceutical manufacturers, claims of gender bias regarding the FDA's regulation have been particularly noteworthy, as have the extent of advocacy efforts ranging from social media campaigns to letters from members of Congress".
The Even the Score campaign was managed by Blue Engine Message & Media, a public relations firm, and received funding from Sprout.
Acquisition by Valeant Pharmaceuticals
In August 2015, Valeant Pharmaceuticals and Sprout Pharmaceuticals announced that Valeant will acquire Sprout, on a debt-free basis, for approximately $1 billion in cash, plus a share of future profits based upon the achievement of certain milestones.
Reception
The initial response following the 2015 introduction of flibanserin to the U.S. market was slow, with 227 prescriptions written during the first three weeks. The slow response may be related to a number of factors: physicians require about 10 minutes of online training to become certified; the medication has to be taken daily and costs about US$400 per month; and there are questions about the drug's efficacy and need. Prescriptions for the drug remain few, with fewer than 4,000 written as of February 2016.
References
Further reading
External links
5-HT1A agonists
5-HT2A antagonists
Aphrodisiacs
Benzimidazoles
Dopamine agonists
Female sexual dysfunction drugs
Meta-Trifluoromethylphenylpiperazines
Ureas | Flibanserin | [
"Chemistry"
] | 2,351 | [
"Organic compounds",
"Ureas"
] |
8,651,021 | https://en.wikipedia.org/wiki/Micellar%20liquid%20chromatography | Micellar liquid chromatography (MLC) is a form of reversed phase liquid chromatography that uses aqueous micellar solutions as the mobile phase.
Theory
The use of micelles in high performance liquid chromatography was first introduced by Armstrong and Henry in 1980. The technique is used mainly to enhance retention and selectivity of various solutes that would otherwise be inseparable or poorly resolved. Micellar liquid chromatography (MLC) has been used in a variety of applications including separation of mixtures of charged and neutral solutes, direct injection of serum and other physiological fluids, analysis of pharmaceutical compounds, separation of enantiomers, analysis of inorganic organometallics, and a host of others.
One of the main drawbacks of the technique is the reduced efficiency caused by the micelles. Despite the sometimes poor efficiency, MLC is a better choice than ion-exchange LC or ion-pairing LC for separation of charged molecules and mixtures of charged and neutral species. The aspects discussed here include the theory of MLC, the use of models in predicting its retention characteristics, the effect of micelles on efficiency and selectivity, and general applications of MLC.
Reverse phase high-performance liquid chromatography (RP-HPLC) involves a non-polar stationary phase, often a hydrocarbon chain, and a polar mobile or liquid phase. The mobile phase generally consists of an aqueous portion with an organic addition, such as methanol or acetonitrile. When a solution of analytes is injected into the system, the components begin to partition out of the mobile phase and interact with the stationary phase. Each component interacts with the stationary phase in a different manner depending upon its polarity and hydrophobicity. In reverse phase HPLC, the solute with the greatest polarity will interact less with the stationary phase and spend more time in the mobile phase. As the polarity of the components decreases, the time spent in the column increases. Thus, a separation of components is achieved based on polarity. The addition of micelles to the mobile phase introduces a third phase into which the solutes may partition.
Micelles
Micelles are composed of surfactant, or detergent, monomers with a hydrophobic moiety, or tail, on one end, and a hydrophilic moiety, or head group, on the other. The polar head group may be anionic, cationic, zwitterionic, or non-ionic. When the concentration of a surfactant in solution reaches its critical micelle concentration (CMC), it forms micelles, which are aggregates of the monomers. The CMC is different for each surfactant, as is the number of monomers which make up the micelle, termed the aggregation number (AN). Common micelle-forming detergents, such as sodium dodecyl sulphate (SDS) and hexadecyltrimethylammonium bromide (CTAB), each have a characteristic CMC and AN.
Many of the characteristics of micelles differ from those of bulk solvents. For example, the micelles are, by nature, spatially heterogeneous with a hydrocarbon, nearly anhydrous core and a highly solvated, polar head group. They have a high surface-to-volume ratio due to their small size and generally spherical shape. Their surrounding environment (pH, ionic strength, buffer ion, presence of a co-solvent, and temperature) has an influence on their size, shape, critical micelle concentration, aggregation number and other properties.
Another important property of micelles is the Krafft point, the temperature at which the solubility of the surfactant is equal to its CMC. For HPLC applications involving micelles, it is best to choose a surfactant with a low Krafft point and CMC. A high CMC would require a high concentration of surfactant which would increase the viscosity of the mobile phase, an undesirable condition. Additionally, a Krafft point should be well below room temperature to avoid having to apply heat to the mobile phase. To avoid potential interference with absorption detectors, a surfactant should also have a small molar absorptivity at the chosen wavelength of analysis. Light scattering should not be a concern due to the small size, a few nanometers, of the micelle.
The effect of organic additives on micellar properties is another important consideration. A small amount of organic solvent is often added to the mobile phase to help improve efficiency and to improve separations of compounds. Care needs to be taken when determining how much organic to add. Too high a concentration of the organic may cause the micelle to disperse, as it relies on hydrophobic effects for its formation. The maximum concentration of organic depends on the organic solvent itself, and on the micelle. This information is generally not known precisely, but a generally accepted practice is to keep the volume percentage of organic below 15–20%.
Research
Fischer and Jandera studied the effect of changing the concentration of methanol on CMC values for three commonly used surfactants. Two cationic surfactants, hexadecyltrimethylammonium bromide (CTAB) and N-(α-carbethoxypentadecyl)trimethylammonium bromide (Septonex), and one anionic surfactant, sodium dodecyl sulphate (SDS), were chosen for the experiment. Generally speaking, the CMC increased as the concentration of methanol increased. It was then concluded that the distribution of the surfactant between the bulk mobile phase and the micellar phase shifts toward the bulk as the methanol concentration increases. For CTAB, the rise in CMC is greatest from 0–10% methanol, and is nearly constant from 10–20%. Above 20% methanol, the micelles disaggregate and do not exist. For SDS, the CMC values remain unaffected below 10% methanol, but begin to increase as the methanol concentration is further increased. Disaggregation occurs above 30% methanol. Finally, for Septonex, only a slight increase in CMC is observed up to 20%, with disaggregation occurring above 25%.
As has been asserted, the mobile phase in MLC consists of micelles in an aqueous solvent, usually with a small amount of organic modifier added to complete the mobile phase. A typical reverse phase alkyl-bonded stationary phase is used. The first discussion of the thermodynamics involved in the retention mechanism was published by Armstrong and Nome in 1981. In MLC, there are three partition coefficients which must be taken into account. The solute will partition between the water and the stationary phase (KSW), the water and the micelles (KMW), and the micelles and the stationary phase (KSM).
Armstrong and Nome derived an equation describing the partition coefficients in terms of the retention factor (formerly called the capacity factor), k′. In HPLC, the retention factor represents the molar ratio of the solute in the stationary phase to that in the mobile phase, and is easily measured from the retention times of the compound and of an unretained compound. The equation rewritten by Guermouche et al. is presented here:
1/k′ = [n · (KMW − 1)/(f · KSW)] · CM + 1/(f · KSW)
Where:
k′ is the retention factor of the solute
KSW is the partition coefficient of the solute between the stationary phase and the water
KMW is the partition coefficient of the solute between the micelles and the water
f is the phase volume ratio (stationary phase volume/mobile phase volume)
n is the molar volume of the surfactant
CM is the concentration of the micelle in the mobile phase (total surfactant concentration - critical micelle concentration)
A plot of 1/k′ versus CM gives a straight line in which KSW can be calculated from the intercept and KMW can be obtained from the ratio of the slope to the intercept. Finally, KSM can be obtained from the ratio of the other two partition coefficients:
KSM = KSW/KMW
Notably, KMW is independent of any effects from the stationary phase, assuming the same micellar mobile phase.
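To make the fitting procedure concrete, here is a minimal Python sketch of extracting the three partition coefficients from retention data; the k′ values, micelle concentrations, phase ratio f and surfactant molar volume n below are made-up placeholders, not measurements from any of the cited studies.

```python
import numpy as np

# Minimal sketch of the fitting procedure described above: regress 1/k'
# against C_M, take K_SW from the intercept, K_MW from the slope, and form
# K_SM = K_SW / K_MW. The retention factors, micelle concentrations, phase
# ratio f and surfactant molar volume n below are made-up placeholders.

C_M = np.array([0.01, 0.02, 0.04, 0.06, 0.08])   # micellized surfactant, mol/L
k_prime = np.array([8.5, 6.9, 5.0, 3.9, 3.2])    # measured retention factors
f = 0.3                                          # phase volume ratio (assumed)
n = 0.25                                         # surfactant molar volume, L/mol (assumed)

slope, intercept = np.polyfit(C_M, 1.0 / k_prime, 1)

K_SW = 1.0 / (f * intercept)          # intercept = 1/(f*K_SW)
K_MW = 1.0 + slope * f * K_SW / n     # slope = n*(K_MW - 1)/(f*K_SW)
K_SM = K_SW / K_MW

print(f"K_SW = {K_SW:.1f}, K_MW = {K_MW:.1f}, K_SM = {K_SM:.2f}")
```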
The validity of the retention mechanism proposed by Armstrong and Nome has been repeatedly and successfully confirmed experimentally. However, some variations and alternative theories have also been proposed. Jandera and Fischer developed equations to describe the dependence of retention behavior on changes in micellar concentration. They found that the retention of most compounds tested decreased with increasing concentrations of micelles. From this, it can be surmised that the compounds associate with the micelles, as they spend less time associated with the stationary phase.
Foley proposed a retentive model similar to that of Armstrong and Nome, based on a general model for secondary chemical equilibria in liquid chromatography. Although this model was developed in a previous reference and can be applied to any secondary chemical equilibrium, such as acid–base equilibria or ion-pairing, Foley further refined it for MLC. When an equilibrant (X), in this case surfactant, is added to the mobile phase, a secondary equilibrium is created in which an analyte will exist both as free analyte (A) and complexed with the equilibrant (AX). The two forms will be retained by the stationary phase to different extents, thus allowing the retention to be varied by adjusting the concentration of equilibrant (micelles).
The resulting equation, solved for the retention factor in terms of partition coefficients, is much the same as that of Armstrong and Nome:
1/k′ = (KSM/k′S) · [M] + 1/k′S
Where:
k′ is the retention factor of the complexed and free solute
k′S is the retention factor of the free solute
KSM is the partition coefficient of the solute between the stationary phase and the micelle
[M] may be either the concentration of surfactant or the concentration of micelle
Foley used the above equation to determine the solute-micelle association constants and free solute retention factors for a variety of solutes with different surfactants and stationary phases. From this data, it is possible to predict the type and optimum surfactant concentrations needed for a given solute or solutes.
Foley has not been the only researcher interested in determining the solute-micelle association constants. A review article by Marina and Garcia with 53 references discusses the usefulness of obtaining solute-micelle association constants. The association constants for two solutes can be used to help understand the retention mechanism. The separation factor of two solutes, α, can be expressed as KSM1/KSM2. If the experimental α coincides with the ratio of the two solute-micelle partition coefficients, it can be assumed that their retention occurs through a direct transfer from the micellar phase to the stationary phase. In addition, calculation of α would allow for prediction of separation selectivity before the analysis is performed, provided the two coefficients are known.
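A companion sketch below applies the same linear-fit idea to Foley's form of the equation and then predicts the separation factor α from the resulting constants; all retention data and solute labels are made-up placeholders.

```python
import numpy as np

# Minimal sketch of Foley's form described above: fit 1/k' = (K_SM/k'_S)*[M]
# + 1/k'_S for each solute, recover k'_S and K_SM, and predict the separation
# factor alpha = K_SM1/K_SM2. All retention data are made-up placeholders.

M = np.array([0.01, 0.02, 0.04, 0.06, 0.08])   # micelle concentration, mol/L

def fit_foley(k_prime):
    slope, intercept = np.polyfit(M, 1.0 / np.asarray(k_prime), 1)
    k_s = 1.0 / intercept      # free-solute retention factor
    K_sm = slope * k_s         # solute-micelle association constant
    return k_s, K_sm

k_solute1 = [8.5, 6.9, 5.0, 3.9, 3.2]   # placeholder retention factors
k_solute2 = [6.0, 5.2, 4.1, 3.4, 2.9]   # placeholder retention factors

(k_s1, K_sm1), (k_s2, K_sm2) = fit_foley(k_solute1), fit_foley(k_solute2)
print(f"Solute 1: k'_S = {k_s1:.1f}, K_SM = {K_sm1:.1f}")
print(f"Solute 2: k'_S = {k_s2:.1f}, K_SM = {K_sm2:.1f}")
print(f"Predicted separation factor alpha = {K_sm1 / K_sm2:.2f}")
```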
The desire to predict retention behavior and selectivity has led to the development of several mathematical models. Changes in pH, surfactant concentration, and concentration of organic modifier play a significant role in determining the chromatographic separation. Often one or more of these parameters need to be optimized to achieve the desired separation, yet the optimum parameters must take all three variables into account simultaneously. The review by Garcia-Alvarez-Coque et al. mentioned several successful models for varying scenarios, a few of which will be mentioned here. The classic models by Armstrong and Nome and by Foley are used to describe the general cases. Foley's model applies to many cases and has been experimentally verified for ionic, neutral, polar, and nonpolar solutes; anionic, cationic, and non-ionic surfactants; and C8, C18, and cyano stationary phases. The model begins to deviate for very strongly and very weakly retained solutes. Strongly retained solutes may become irreversibly bound to the stationary phase, whereas weakly retained solutes may elute in the column void volume.
Other models, proposed by Arunyanart and Cline-Love and by Rodgers and Khaledi, describe the effect of pH on the retention of weak acids and bases. These authors derived equations relating pH and micellar concentration to retention. As the pH varies, sigmoidal behavior is observed for the retention of acidic and basic species. This model has been shown to accurately predict retention behavior. Still other models predict behavior in hybrid micellar systems, using equations or modeling behavior based on controlled experimentation. Additionally, models accounting for the simultaneous effect of pH, micelle concentration, and organic modifier concentration have been suggested. These models allow for further enhancement of the optimization of the separation of weak acids and bases.
One research group, Rukhadze et al., derived a first-order linear relationship describing the influence of micelle concentration, organic modifier concentration, and pH on the selectivity and resolution of seven barbiturates. The researchers discovered that a second-order mathematical equation fit the data more precisely. The derivations and experimental details are beyond the scope of this discussion. The model was successful in predicting the experimental conditions necessary to achieve a separation of compounds that are traditionally difficult to resolve.
Jandera, Fischer, and Effenberger approached the modeling problem in yet another way. Their model was based on lipophilicity and polarity indices of solutes. The lipophilicity index relates a given solute to a hypothetical number of carbon atoms in an alkyl chain; it is based on, and depends on, a calibration series determined experimentally. The lipophilicity index should be independent of the stationary phase and of the organic modifier concentration. The polarity index is a measure of the polarity of the solute-solvent interactions. It depends strongly on the organic solvent, and somewhat on the polar groups present in the stationary phase. Twenty-three compounds were analyzed with varying mobile phases and compared to the lipophilicity and polarity indices. The results showed that the model could be applied to MLC, but better predictive behavior was found at surfactant concentrations below the CMC, i.e. in the sub-micellar range.
A final type of model based on molecular properties of a solute is a branch of quantitative structure-activity relationships (QSAR). QSAR studies attempt to correlate the biological activity of drugs, or a class of drugs, with structure. The normally accepted means of uptake for a drug, or its metabolite, is through partitioning into lipid bilayers. The descriptor most often used in QSAR to determine the hydrophobicity of a compound is the octanol-water partition coefficient, log P. MLC provides an attractive and practical alternative for measuring such hydrophobicity. When micelles are added to a mobile phase, many similarities exist between the micellar mobile phase/stationary phase and the biological membrane/water interface. In MLC, the stationary phase becomes modified by the adsorption of surfactant monomers, which are structurally similar to the membranous hydrocarbon chains in the biological model. Additionally, the hydrophilic/hydrophobic interactions of the micelles are similar to those in the polar regions of a membrane. Thus, the development of quantitative structure-retention relationships (QRAR) has become widespread.
Escuder-Gilabert et al. tested three different QRAR retention models on ionic compounds. Several classes of compounds were tested, including catecholamines, local anesthetics, diuretics, and amino acids. The best model relating log k and log P was found to be one in which the total molar charge of a compound at a given pH is included as a variable. This model proved to give fairly accurate predictions of log P (R > 0.9). Other studies have been performed which develop predictive QRAR models for tricyclic antidepressants and barbiturates.
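A QRAR model of this kind is essentially a multiple linear regression. The sketch below fits log P from MLC retention (log k) and total molar charge for a handful of invented compounds, purely to show the shape of the calculation; none of the numbers are experimental values.

```python
import numpy as np

# Invented training data: MLC retention (log k), total molar charge at the working pH, and log P
log_k  = np.array([0.35, 0.62, 0.90, 1.10, 1.45, 1.70])
charge = np.array([ 1.0,  1.0,  0.0,  0.0, -1.0,  0.0])
log_P  = np.array([0.80, 1.40, 2.10, 2.60, 2.90, 3.80])

# Design matrix with an intercept column: log P ~ b0 + b1*log k + b2*charge
X = np.column_stack([np.ones_like(log_k), log_k, charge])
coeffs, *_ = np.linalg.lstsq(X, log_P, rcond=None)

pred = X @ coeffs
R = np.corrcoef(pred, log_P)[0, 1]
print("coefficients (b0, b1, b2):", np.round(coeffs, 2))
print(f"correlation R = {R:.3f}")
```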
Efficiency
The main limitation in the use of MLC is the reduction in efficiency (peak broadening) that is observed when purely aqueous micellar mobile phases are used. Several explanations for the poor efficiency have been proposed. Poor wetting of the stationary phase by the micellar aqueous mobile phase, slow mass transfer between the micelles and the stationary phase, and poor mass transfer within the stationary phase have all been postulated as possible causes. To enhance efficiency, the most common approaches have been the addition of small amounts of isopropyl alcohol and an increase in temperature. A review by Berthod studied the combined theories presented above and applied the Knox equation to independently determine the cause of the reduced efficiency. The Knox equation is commonly used in HPLC to describe the different contributions to the overall band broadening of a solute. The Knox equation is expressed as:
h = An^(1/3) + B/n + Cn
Where:
h = the reduced plate height (plate height/stationary phase particle diameter)
n = the reduced mobile phase linear velocity (velocity times stationary phase particle diameter/solute diffusion coefficient in the mobile phase)
A, B, and C are constants related to solute flow anisotropy (eddy diffusion), molecular longitudinal diffusion, and mass transfer properties respectively.
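The sketch below evaluates the Knox equation for a set of illustrative A, B, and C constants and locates the reduced velocity that minimizes the reduced plate height; the constants are assumptions chosen only to demonstrate the calculation, not values measured for any micellar system.

```python
import numpy as np

def knox_h(v, A, B, C):
    """Reduced plate height h as a function of reduced velocity v (Knox equation)."""
    return A * v ** (1.0 / 3.0) + B / v + C * v

# Illustrative Knox constants (assumed)
A, B, C = 1.0, 2.0, 0.05

v = np.linspace(0.5, 30.0, 500)   # reduced velocities to scan
h = knox_h(v, A, B, C)

v_opt = v[np.argmin(h)]
print(f"minimum reduced plate height h = {h.min():.2f} at reduced velocity v = {v_opt:.2f}")
```

Because surfactant adsorption tends to raise the A and C contributions in MLC, locating the velocity that gives the minimum plate height is worth doing explicitly.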
Berthod's use of the Knox equation to experimentally determine which of the proposed theories was most correct led him to the following conclusions. The flow anisotropy in the micellar phase seems to be much greater than in traditional hydro-organic mobile phases of similar viscosity. This is likely due to partial clogging of the stationary phase pores by adsorbed surfactant molecules. Raising the column temperature served to decrease both the viscosity of the mobile phase and the amount of adsorbed surfactant. Both effects reduce the A term and the amount of eddy diffusion, and thereby increase efficiency.
The increase in the B term, related to longitudinal diffusion, is associated with the decrease in the solute diffusion coefficient in the mobile phase, DM, due to the presence of the micelles, and with an increase in the capacity factor, k′. The slow mass transfer is likewise related to surfactant adsorption on the stationary phase, which causes a dramatic decrease in the solute diffusion coefficient in the stationary phase, DS. Again, an increase in temperature, now coupled with the addition of alcohol to the mobile phase, drastically decreases the amount of adsorbed surfactant. In turn, both actions reduce the C term caused by slow mass transfer from the stationary phase to the mobile phase. Further optimization of efficiency can be gained by reducing the flow rate to one closely matched to that derived from the Knox equation. Overall, the three proposed theories all seemed to contribute to the poor efficiency observed, which can be partially countered by the addition of organic modifiers, particularly alcohol, and by increasing the column temperature.
Applications
Despite the reduced efficiency versus reversed-phase HPLC, hundreds of applications have been reported using MLC. One of the most advantageous is the ability to directly inject physiological fluids. Micelles have the ability to solubilize proteins, which enables MLC to be useful in analyzing untreated biological fluids such as plasma, serum, and urine. Martinez et al. found MLC to be highly useful in analyzing a class of drugs called β-antagonists (so-called beta-blockers) in urine samples. The main advantage of using MLC with this type of sample is the great time savings in sample preparation. Alternative methods of analysis, including reversed-phase HPLC, require lengthy extraction and sample work-up procedures before analysis can begin. With MLC, direct injection is often possible, with retention times of less than 15 minutes for the separation of up to nine β-antagonists.
Another application compared reversed-phase HPLC with MLC for the analysis of desferrioxamine in serum. Desferrioxamine (DFO) is a commonly used drug for the removal of excess iron in patients with acute or chronic iron overload. The analysis of DFO along with its chelated complexes, Fe(III)-DFO and Al(III)-DFO, has proven to be difficult at best in previous attempts. This study found that direct injection of the serum was possible with MLC, versus an ultrafiltration step necessary in HPLC. However, the analysis proved to have difficulties with the separation of the chelated DFO compounds and with the sensitivity levels for DFO itself when MLC was applied. The researchers found that, in this case, reversed-phase HPLC was a better, more sensitive technique despite the time savings of direct injection.
Analysis of pharmaceuticals by MLC is also gaining popularity. Compared with commonly used ion-pair chromatography, MLC offers much improved selectivity and peak shape. MLC mimics, yet enhances, the selectivity offered by ion-pairing reagents for the separation of active ingredients in pharmaceutical drugs. For basic drugs, MLC improves the excessive peak tailing frequently observed in ion-pairing. Hydrophilic drugs, which are often unretained in conventional HPLC, are retained by MLC due to solubilization into the micelles. Drugs commonly found in cold medications, such as acetaminophen, L-ascorbic acid, phenylpropanolamine HCl, tipepidine hibenzate, and chlorpheniramine maleate, have been successfully separated with good peak shape using MLC. Additional basic drugs, including narcotics such as codeine and morphine, have also been successfully separated using MLC.
Another novel application of MLC involves the separation and analysis of inorganic compounds, mostly simple ions. This is a relatively new area for MLC, but it has seen some promising results. MLC has been observed to provide better selectivity for inorganic ions than ion-exchange or ion-pairing chromatography. While this application is still in the early stages of development, the possibilities exist for novel, much-enhanced separations of inorganic species.
Since the technique was first reported in 1980, micellar liquid chromatography has been used in hundreds of applications. This micelle-controlled technique provides unique opportunities for solving complicated separation problems. Despite the poor efficiency of MLC, it has been successfully used in many applications. The use of MLC in the future appears to be extremely advantageous in the areas of physiological fluids, pharmaceuticals, and even inorganic ions. The technique has proven to be superior to ion-pairing and ion-exchange for many applications. As new approaches are developed to combat the poor efficiency of MLC, its application is sure to spread and gain more acceptance.
References
Chromatography | Micellar liquid chromatography | [
"Chemistry"
] | 4,759 | [
"Chromatography",
"Separation processes"
] |
8,651,984 | https://en.wikipedia.org/wiki/Plant%20perception%20%28physiology%29 | Plant perception is the ability of plants to sense and respond to the environment by adjusting their morphology and physiology. Botanical research has revealed that plants are capable of reacting to a broad range of stimuli, including chemicals, gravity, light, moisture, infections, temperature, oxygen and carbon dioxide concentrations, parasite infestation, disease, physical disruption, sound, and touch. The scientific study of plant perception is informed by numerous disciplines, such as plant physiology, ecology, and molecular biology.
Aspects of perception
Light
Many plant organs contain photoreceptors (phototropins, cryptochromes, and phytochromes), each of which reacts very specifically to certain wavelengths of light. These light sensors tell the plant if it is day or night, how long the day is, how much light is available, and where the light is coming from. Shoots generally grow towards light, while roots grow away from it, responses known as phototropism and skototropism, respectively. They are brought about by light-sensitive pigments like phototropins and phytochromes and the plant hormone auxin.
Many plants exhibit certain behaviors at specific times of the day; for example, flowers that open only in the mornings. Plants keep track of the time of day with a circadian clock. This internal clock is synchronized with solar time every day using sunlight, temperature, and other cues, similar to the biological clocks present in other organisms. The internal clock coupled with the ability to perceive light also allows plants to measure the time of the day and so determine the season of the year. This is how many plants know when to flower (see photoperiodism). The seeds of many plants sprout only after they are exposed to light. This response is carried out by phytochrome signalling. Plants are also able to sense the quality of light and respond appropriately. For example, in low light conditions, plants produce more photosynthetic pigments. If the light is very bright or if the levels of harmful ultraviolet radiation increase, plants produce more of their protective pigments that act as sunscreens.
Studies on the vine Boquila trifoliata have raised questions about how it is able to perceive and mimic the shape of the leaves of the plants upon which it climbs. Experiments have shown that it even mimics the shape of plastic leaves when trained on them. It has even been suggested that plants might have a form of vision.
Gravity
To orient themselves correctly, plants must be able to sense the direction of gravity. The subsequent response is known as gravitropism.
In roots, gravity is sensed and translated in the root tip, which then grows by elongating in the direction of gravity. In shoots, growth occurs in the opposite direction, a phenomenon known as negative gravitropism. Poplar stems can detect reorientation and inclination (equilibrioception) through gravitropism.
At the root tip, amyloplasts containing starch granules fall in the direction of gravity. This weight activates secondary receptors, which signal to the plant the direction of the gravitational pull. After this occurs, auxin is redistributed through polar auxin transport and differential growth towards gravity begins. In the shoots, auxin redistribution occurs in a way to produce differential growth away from gravity.
For perception to occur, the plant often must be able to sense, perceive, and translate the direction of gravity. Without gravity, proper orientation will not occur and the plant will not effectively grow. The root will not be able to uptake nutrients or water, and the shoot will not grow towards the sky to maximize photosynthesis.
Touch
All plants are able to sense touch. Thigmotropism is directional movement that occurs in plants responding to physical touch. Climbing plants, such as tomatoes, exhibit thigmotropism, allowing them to curl around objects. These responses are generally slow (on the order of multiple hours), and can best be observed with time-lapse cinematography, but rapid movements can occur as well. For example, the so-called "sensitive plant" (Mimosa pudica) responds to even the slightest physical touch by quickly folding its thin pinnate leaves such that they point downwards, and carnivorous plants such as the Venus flytrap (Dionaea muscipula) produce specialized leaf structures that snap shut when touched or landed upon by insects. In the Venus flytrap, touch is detected by cilia lining the inside of the specialized leaves, which generate an action potential that stimulates motor cells and causes movement to occur.
Smell
Wounded or infected plants produce distinctive volatile odors (e.g. methyl jasmonate, methyl salicylate, green leaf volatiles), which can in turn be perceived by neighboring plants. Plants detecting these sorts of volatile signals often respond by increasing their chemical defences and/or preparing for attack by producing chemicals which defend against insects or attract insect predators.
Vibration
Plants upregulate chemical defenses such as glucosinolate and anthocyanin in response to vibrations created during herbivory.
Signal transduction
Plant hormones and chemical signals
Plants systematically use hormonal signalling pathways to coordinate their development and morphology.
Plants produce several signal molecules usually associated with animal nervous systems, such as glutamate, GABA, acetylcholine, melatonin, and serotonin. They may also use ATP, NO, and ROS for signaling in similar ways as animals do.
Electrophysiology
Plants have a variety of methods of delivering electrical signals. The four commonly recognized propagation methods include action potentials (APs), variation potentials (VPs), local electric potentials (LEPs), and systemic potentials (SPs).
Although plant cells are not neurons, they can be electrically excitable and can display rapid electrical responses in the form of APs to environmental stimuli. APs allow for the movement of signaling ions and molecules from the pre-potential cell to the post-potential cell(s). These electrophysiological signals are constituted by gradient fluxes of ions such as H+, K+, Cl−, Na+, and Ca2+, but it is also thought that other electrically charged ions such as Fe3+, Al3+, Mg2+, Zn2+, Mn2+, and Hg2+ may play a role in downstream outputs. The maintenance of each ion's electrochemical gradient is vital to the health of the cell: if the cell ever reached equilibrium with its environment, it would be dead. This dead state can result from a variety of causes, such as ion channel blocking or membrane puncturing.
These ions bind to receptors on the receiving cell, causing downstream effects that result from one molecule or a combination of the molecules present. This means of transferring information and activating physiological responses via a signaling-molecule system has been found to be faster and more frequent in the presence of APs.
These action potentials can influence processes such as actin-based cytoplasmic streaming, plant organ movements, wound responses, respiration, photosynthesis, and flowering. These electrical responses can cause the synthesis of numerous organic molecules, including ones that act as neuroactive substances in other organisms, such as calcium ions.
The ion flux across cells also influences the movement of other molecules and solutes. This changes the osmotic gradient of the cell, resulting in changes to turgor pressure in plant cells through water and solute flux across cell membranes. These variations are vital for nutrient uptake, growth, and many types of movement (tropisms and nastic movements), among other aspects of basic plant physiology and behavior (Higinbotham 1973; Scott 2008; Segal 2016).
Thus, plants achieve behavioural responses in environmental, communicative, and ecological contexts.
Signal perception
Plant behavior is mediated by phytochromes, kinins, hormones, antibiotic or other chemical release, changes of water and chemical transport, and other means.
Plants have many strategies to fight off pests. For example, they can produce a slew of different chemical toxins against predators and parasites or they can induce rapid cell death to prevent the spread of infectious agents. Plants can also respond to volatile signals produced by other plants. Jasmonate levels also increase rapidly in response to mechanical perturbations such as tendril coiling.
In plants, the mechanism responsible for adaptation is signal transduction. Adaptive responses include:
Active foraging for light and nutrients. They do this by changing their architecture, e.g. branch growth and direction, physiology, and phenotype.
Leaves and branches being positioned and oriented in response to a light source.
Detecting soil volume and adapting growth accordingly, independently of nutrient availability.
Defending against herbivores.
See also
Auxin
Chemotropism
Ethylene
Gravitropism
Heliotropism
Hydrotropism
Hypersensitive response
Kairomone
Kinesis (biology)
Nastic movements
Phytosemiotics
Plant defense against herbivory
Plant evolutionary developmental biology
Plant intelligence
Plant tolerance to herbivory
Rapid plant movement
Statocyte
Stoma
Systemic acquired resistance
Taxis
Thermotropism
Tropism
References
Further reading
Baluška F (ed) (2009). Plant-Environment Interactions: From Sensory Plant Biology to Active Plant Behavior. Springer Verlag.
Gilroy S, Masson PH (2007). Plant Tropisms. Iowa State University Press.
Karban R (2015). Plant Sensing and Communication. University of Chicago Press.
Mancuso S, Shabala S (2006). Rhythms in Plants. Springer Verlag.
Scott P (2008). Physiology and Behaviour of Plants. John Wiley & Sons Ltd.
Volkov AG (2006). Plant Electrophysiology. Springer Verlag.
External links
How Does a Venus Flytrap Work?
Plant physiology
Plant ecology | Plant perception (physiology) | [
"Biology"
] | 2,030 | [
"Plant physiology",
"Plant ecology",
"Plants"
] |
8,652,125 | https://en.wikipedia.org/wiki/Tare%20weight | Tare weight , sometimes called unladen weight, is the weight of an empty vehicle or container.
By subtracting tare weight from gross weight (laden weight), one can determine the weight of the goods carried or contained (the net weight).
Etymology
The word tare originates from the Middle French tare ('wastage in goods, deficiency, imperfection'; 15th century), from Italian tara, from an Arabic word meaning literally 'thing deducted or rejected', itself from a verb meaning 'to reject'.
Usage
This can be useful in computing the cost of the goods carried for purposes of taxation or for tolls related to barge, rail, road, or other traffic, especially where the toll will vary with the value of the goods carried (e.g., tolls on the Erie Canal). Tare weight is often published upon the sides of railway cars and transport vehicles to facilitate the computation of the load carried. Tare weight is also used in body composition assessment when doing underwater weighing.
Tare weight is accounted for in kitchen scales, analytical (scientific) balances, and other weighing scales that include a button which resets the display to zero when an empty container is placed on the weighing platform, so that only the weight of the container's contents is subsequently displayed.
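As a toy illustration of the tare-button behaviour described above, the snippet below models a scale whose display is zeroed with a container on the platform; the class and all the weights are hypothetical.

```python
class Scale:
    """Minimal model of a weighing scale with a tare (zero) button."""

    def __init__(self):
        self._load = 0.0   # true weight on the platform, in grams
        self._tare = 0.0   # offset stored when the tare button is pressed

    def place(self, grams):
        self._load += grams

    def press_tare(self):
        self._tare = self._load          # display resets to zero

    def display(self):
        return self._load - self._tare   # net weight shown to the user

scale = Scale()
scale.place(150.0)      # empty bowl (tare weight)
scale.press_tare()      # display now reads 0
scale.place(320.0)      # add the contents
print(scale.display())  # 320.0 -> only the contents' weight is shown
```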
See also
Curb weight
Dry weight
Gross vehicle weight rating
Hydrostatic weighing
Trett (obsolete)
References
Further reading
Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009
SOLAS: container weighing method 1 & 2
Mass
Containers | Tare weight | [
"Physics",
"Mathematics"
] | 314 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Wikipedia categories named after physical quantities",
"Matter"
] |
8,652,895 | https://en.wikipedia.org/wiki/CCL21 | Chemokine (C-C motif) ligand 21 (CCL21) is a small cytokine belonging to the CC chemokine family. This chemokine is also known as 6Ckine (because it has six conserved cysteine residues instead of the four cysteines typical to chemokines), exodus-2, and secondary lymphoid-tissue chemokine (SLC). CCL21 elicits its effects by binding to a cell surface chemokine receptor known as CCR7. The main function of CCL21 is to guide CCR7 expressing leukocytes to the secondary lymphoid organs, such as lymph nodes and Peyer´s patches.
Gene
The gene for CCL21 is located on human chromosome 9. CCL21 is classified as a homeostatic chemokine because it is produced constitutively; however, its expression increases during inflammation.
Protein structure
Chemokine CCL21 contains an extended C-terminus which is not found in CCL19, another ligand of CCR7. The C-terminal tail is composed of 37 amino acids rich in positively charged residues; therefore, it has high affinity for negatively charged molecules of the extracellular matrix. Cleavage of the C-terminal tail by peptidases produces a soluble form of CCL21, which also occurs under physiological conditions. Soluble CCL21 does not bind to the extracellular matrix, and therefore its function differs from that of full-length CCL21.
Function
Migration to secondary lymphoid organs
Naïve T cells circulate through secondary lymphoid organs until they encounter the antigen. CCL21 is a chemokine involved in the recruitment of T cells into secondary lymphoid organs. It is produced by lymphatic endothelial cells and lymph node stromal cells. Full-length CCL21 is bound to glycosaminoglycans and endothelial cells, and it induces the chemotactic migration of T cells as well as cell adhesion through integrin activation. In contrast, soluble CCL21 is not involved in inducing cell adhesion. After T cells enter the lymph nodes through high endothelial venules, they are attracted to the T cell zone, where fibroblastic reticular cells are the abundant source of CCL21.
CCL21/CCR7 interaction also plays a role in the migration of dendritic cells to the secondary lymphoid organs. Dendritic cells upregulate the expression of CCR7 during their maturation. CCL21 is bound to the lymphatic vessels and attracts CCR7-expressing dendritic cells from peripheral tissues. They then migrate along the chemokine gradient to the lymph node, where they present the antigen to T cells. Interactions between dendritic cells and T cells are necessary for the initiation of the adaptive immune response. When CCL21 is not recognized by the cells (for example in CCR7-deficient mice), a delayed and reduced adaptive immune response occurs due to reduced interactions between dendritic cells and T cells in the lymph nodes. Semi-mature dendritic cells express CCR7 in the absence of a danger signal. They use the CCL21 chemokine gradient to migrate to the lymph nodes even when there is no inflammation in the body, and they contribute to peripheral tolerance.
B cells also use the chemokine CCL21 for migration to the lymph nodes; however, they are less dependent on it than T cells.
T cell development in the thymus
CCL21/CCR7 interaction plays a role in the T cell development in the thymus. CCL21 is produced in the thymus medulla by medullary thymic epithelial cells, and it attracts single positive thymocytes from the thymus cortex to the medulla, where they undergo negative selection to delete autoreactive thymocytes.
References
External links
Further reading
Cytokines | CCL21 | [
"Chemistry"
] | 860 | [
"Cytokines",
"Signal transduction"
] |
8,652,943 | https://en.wikipedia.org/wiki/Distemper%20%28paint%29 | Distemper is a decorative paint and a historical medium for painting pictures, and contrasted with tempera. The binder may be glues of vegetable or animal origin (excluding egg). Soft distemper is not abrasion resistant and may include binders such as chalk, ground pigments, and animal glue. Hard distemper is stronger and wear-resistant and can include casein or linseed oil as binders.
Soft distemper
Distemper is an early form of whitewash, also used as a medium for artistic painting, usually made from powdered chalk or lime and size (a gelatinous substance). Alternatives to chalk include the toxic substance white lead.
Distempered surfaces can be easily marked and discoloured, and cannot be washed down, so distemper is best suited to temporary and interior decoration. The technique of painting on distempered surfaces blends watercolours with whiting and glue. "The colours are mixed with whitening, or finely-ground chalk, and tempered with size. The whitening makes them opaque and gives them 'body,' but is also the cause of their drying light ... a source of considerable embarrassment to the inexperienced eye is that the colours when wet present such a different appearance from what they do when dry."
Many Medieval and Renaissance painters used distemper painting rather than oil paint for some of their works. The earliest paintings on canvas were mostly in distemper, which was (and is) also widely used in Asia, especially in Tibetan thankas. Distemper paintings suffer more than oil paintings as they age, and relatively few have survived. It was the most common medium for painting banners and decorations for temporary celebrations, both of which attracted artists of the highest quality, especially when they were official court artists. In distemper painting, "the carbonate of lime, or whitening employed as a basis, is less active than the pure lime of fresco ... to give adhesion to the tints and colours in distemper painting, and to make them keep their place, they are variously mixed with the size of glue (prepared commonly by dissolving about of glue in of water). Too much of the glue disposes the painting to crack and peel from the ground; while, with too little, it is friable and deficient in strength."
The National Gallery, London, distinguishes between the techniques of glue, glue size, or glue-tempera, which is how they describe their three Andrea Mantegnas in the medium, and distemper, which is how they describe their Dirk Bouts and two Édouard Vuillards (see below). Other sources would describe the Mantegnas as also being in distemper.
In modern practice, distemper painting is often employed for scenery painting in theatrical productions and other short-term applications, where it may be preferred to oil paint for reasons of economy. Contemporary artist John Connell was known for using distemper in paintings sometimes as large as ten feet.
In architecture, distemper paints usually consist of a glue binder with calcium carbonate as the base pigment.
Military use
Distemper was used extensively by German and Soviet forces for winter camouflage during World War II. Because ordinary camouflage patterns were ineffective in the heavy snow conditions on the Eastern front, aircraft, tanks, and other military vehicles were hastily brush-painted with plain white distemper during the winter of 1941–1942. Because distemper is water-soluble, photographs showing winter camouflage often show it badly eroded.
During the invasion of Normandy on June 6, 1944, all Allied aircraft participating in the invasion were marked on the wings and fuselage with "invasion stripes" painted with distemper so that naval or ground-based gunners would not misidentify them as German and fire on them, as had happened during the invasion of Sicily in 1943.
Examples of paintings in distemper
Fayum mummy portraits, from Late Antique Egypt (some in encaustic)
Dirk Bouts Entombment, 1450s. National Gallery, London
Many paintings by Mantegna
The Raphael Cartoons, London
Scottish Renaissance painted ceilings
Édouard Vuillard Lunch at Vasouy, 1901, Tate Modern.
Mark Tobey's White Journey, modern
Le Grand Teddy by Édouard Vuillard
References
External links
Removing Distemper.
Soft Distemper
The Problem with Distemper
Painting techniques
Paints
| Distemper (paint) | [
"Chemistry"
] | 936 | [
"Paints",
"Coatings"
] |
8,654,244 | https://en.wikipedia.org/wiki/Karl%20Malus | Dr. Karl Malus () is a fictional mad scientist and criminal appearing in American comic books published by Marvel Comics. He played a part in the origins of Armadillo, Hornet, Falcon II, and many other characters.
Dr. Karl Malus appeared in the second season of the Marvel Cinematic Universe television series Jessica Jones, portrayed by Callum Keith Rennie.
Publication history
Malus first appeared in Spider-Woman #30 (Sept. 1980) and was created by Michael Fleisher, Steve Leialoha and Jim Mooney. He was featured several times opposite of Captain America and Sam Wilson as Captain America. He was also briefly a member of the Frightful Four.
Fictional character biography
Karl Malus was born in Mud Butte, South Dakota. He became a surgeon and researcher. He was later the founder of the Institute for Supranormality Research and became a criminal scientist.
In his first appearance, Malus was performing illegal medical experiments on human beings, funded by the criminal Enforcer, to find out more about superhumans and their abilities. He was approached by the Human Fly, a supervillain who was losing his powers. Malus sent the Human Fly to steal some equipment, attracting the attention of the original Spider-Woman. Since Spider-Woman's friend Scotty McDowell had been rendered comatose by one of the Enforcer's poison bullets, Malus offered to cure McDowell in exchange for leniency. Spider-Woman agreed, but while curing McDowell, Malus also experimented on him using Human Fly's DNA, which would later cause McDowell to transform into the villainous Hornet. Malus went to prison, but was released to "help" authorities against the threat of the Hornet. In actuality, he wanted to capture and study Spider-Woman. Malus slipped away and made contact with the Hornet, whom he kept drugged and aggressive for his own purposes.
Malus then contacted Jack Russell, a man cursed with lycanthropy, and told him he could help cure his transformations into a werewolf. Instead, he placed a control collar on Russell and sent both Hornet and the Werewolf after Spider-Woman. She was able to defeat both of them and freed the Werewolf, who then attacked Malus.
Malus had studied the bizarre criminal Daddy Longlegs, who had gained his powers from a modified growth-serum used by Black Goliath and had thus acquired a sample of Pym Particles, which could alter a person's size and mass. With this knowledge, Malus hoped to restore the powers of Erik Josten. He gave Josten growth powers and enhanced strength, turning him into the supervillain Goliath, but Goliath rejected Malus' offer of partnership and was in turn defeated by the West Coast Avengers.
Malus also transformed Antonio Rodriguez into the Armadillo by combining his human genes with the genetic material of an armadillo. Malus encountered Captain America soon after that.
Eventually, Malus went to work for the Power Broker Curtiss Jackson, using his technology to augment the strength of paying customers to superhuman levels. The strength augmenting process was tremendously risky with half the subjects dying or becoming severely deformed, but this information was kept a closely guarded secret. Power Broker and Malus also used highly addictive drugs on their subjects, telling them that the chemical was necessary to stabilize their powers, but in fact it only served to keep the subjects working for—and paying—the Power Broker. One such victim was Sharon Ventura, who was also sexually abused while drugged. Many wrestlers of the Unlimited Class Wrestling Federation, which is only open to those with super-strength, have used the Power Broker's services and wound up indebted to them.
Malus soon almost strength-augmented Captain America, but was overpowered by D-Man. He escaped with Captain America from the sealed Power Broker laboratory, but was captured by the Night Shift.
When Power Broker, Inc. was attacked by the vigilante known as Scourge of the Underworld, Curtiss Jackson was exposed to his own augmentation device to try to gain super-strength to defend himself. The process went awry, leaving him so grotesquely muscle-bound that he could not move. Malus decided to take advantage of this situation by using Bludgeon and Mangler to abduct Vagabond. Malus sent Vagabond, who knew Jackson, to obtain a copy of his fingerprints so that Malus could access all of Jackson's personal accounts and vaults. He used an explosive wristband to force Vagabond's cooperation, but she managed to knock Malus out, destroy the fingerprint mold, place the band on his wrist, and inject him with the drug he had planned to use on her.
Malus then attempted to learn the secret of Daredevil's prowess and Madcap's invulnerability. Malus fought Hawkeye alongside Triphammer, Pick-Axe, Vice, and Handsaw.
The Power Broker had Malus' legs broken for his betrayal, then promptly re-hired him to try to cure the Power Broker's condition. Malus captured and experimented on several augmented individuals to perfect the de-augmentation process, including Battlestar, which drew the involvement of the U.S. Agent. Together, Battlestar and the Agent freed the captured wrestlers and forced Malus to restore their strength. The U.S. Agent then destroyed Malus' equipment and records.
Malus has since worked for a variety of criminal organizations, including the Corporation, and the Maggia. He even worked with the Avengers and the Thunderbolts in their efforts to defeat Count Nefaria in exchange for a reduced sentence.
During the "Dark Reign" storyline, Karl Malus was researched by Quasimodo as part of its work for Norman Osborn. Quasimodo stated that he would come in handy creating villains and "heroes" for Norman to use.
Karl Malus was recruited by the Wizard to become a new member of the Frightful Four, only to find himself being forcibly made the new host of the Carnage Symbiote. He was eaten by Carnage, who was possessing Wizard at the time.
As part of the "All-New, All-Different Marvel", Karl Malus returned following his devouring by Carnage. Malus explained that he was indeed eaten by Carnage, but somehow was able to pull himself back together once deposited as waste. As a human/Symbiote hybrid, Malus now exhibits symbiote-like abilities and behavior. Since this incident, he continued his experiments on humans as an employee of Serpent Solutions as seen in their flashback. Most of Karl Malus' experiments were on illegal immigrants that were given to him by the Sons of the Serpent upon apprehending them at the Mexican border. The experiments he performed on them turned them into animal hybrids. When Captain America tracked down his operation, Karl Malus used his symbiotic abilities to subdue Captain America and experiment on him, which turned Captain America into a werewolf. Using Redwing's DNA, Karl Malus turned a Mexican teenager named Joaquin Torres into a bird/human hybrid. Upon being liberated by Misty Knight, Captain America followed and subdued Karl Malus by using Redwing's high-pitched sounds on him and remanded him to S.H.I.E.L.D. custody. While Captain America and the others that were experimented upon by Karl Malus were restored to normal, Joaquin was unable to be restored to normal due to Redwing's DNA being vampiric, which granted him a healing factor.
Powers and abilities
Dr. Karl Malus possesses a gifted intellect. He has an MD specializing in surgery and a master's degree in biochemistry. Malus is a brilliant surgeon with a great knowledge of chemistry, genetic manipulation, and radiology.
Following his ingestion by Carnage, Malus has gained the ability to mimic the powers and weaknesses of alien symbiote creatures.
In other media
Dr. Karl Malus appears in the second season of Jessica Jones, portrayed by Callum Keith Rennie. This version is one of several doctors who run IGH, a biotech clinic specializing in state-of-the-art reconstructive surgery. Years prior, when Jessica Jones and her mother Alisa were critically injured in a car accident, Malus arranged for them to be transferred to his clinic so he could save them. Jessica was released after three weeks while Alisa's recovery took several years as she suffered more extensive damage. Over the course of treating her, Malus developed romantic feelings for Alisa, going so far as to cover up her accidental killing of one of his nurses by framing a janitor for it and eventually marrying her. When Trish Walker begins investigating IGH in the present, Malus allows Alisa to murder the clinic's subjects and associates. Jessica later spots Malus and Alisa at an aquarium, though they escape. Malus and Jessica get into an argument, with the former claiming he truly loves Alisa despite her condition, before fleeing after the mother and daughter argue and fight. After Jessica turns in Alisa, she tracks down Malus again and manages to convince him to work with her to permanently end his work so Alisa can find peace. However, Trish abducts him in an attempt to force him to give her superpowers similar to Jessica's, but Jessica foils the procedure before it can be finished. Not wanting to kill Malus, Jessica spares him. With nothing left to live for, Malus commits suicide by destroying his lab.
References
External links
Characters created by Jim Mooney
Characters created by Michael Fleisher
Characters created by Steve Leialoha
Comics characters introduced in 1980
Fictional biochemists
Fictional characters from South Dakota
Fictional geneticists
Fictional mad scientists
Fictional surgeons
Marvel Comics male supervillains
Marvel Comics scientists | Karl Malus | [
"Chemistry"
] | 2,028 | [
"Fictional biochemists",
"Biochemists"
] |
8,654,442 | https://en.wikipedia.org/wiki/List%20of%20culinary%20nuts | A culinary nut is a dry, edible fruit or seed that usually, but not always, has a high fat content. Nuts are used in a wide variety of edible roles, including in baking, as snacks (either roasted or raw), and as flavoring. In addition to botanical nuts, fruits and seeds that have a similar appearance and culinary role are considered to be culinary nuts. Culinary nuts are divided into fruits or seeds in one of four categories:
True, or botanical nuts: dry, hard-shelled, uncompartmented fruit that do not split on maturity to release seeds; (e.g. hazelnuts)
Drupes: seed contained within a pit (stone or pyrena) that itself is surrounded by a fleshy fruit (e.g. almonds, walnuts);
Gymnosperm seeds: naked seeds, with no enclosure (e.g. pine nuts);
Angiosperm: seeds surrounded by an enclosure, such as a pod or a fruit (e.g. peanuts).
Nuts have a rich history as food. For many indigenous peoples of the Americas, a wide variety of nuts, including acorns, American beech, and others, served as a major source of starch and fat over thousands of years. Similarly, a wide variety of nuts have served as food for Indigenous Australians for many centuries. Other culinary nuts, though known from ancient times, have seen dramatic increases in use in modern times. The most striking such example is the peanut. Its usage was popularized by the work of George Washington Carver, who discovered and popularized many applications of the peanut after employing peanut plants for soil amelioration in fields used to grow cotton.
True nuts
The following are both culinary and botanical nuts.
Acorn (Quercus, Lithocarpus and Cyclobalanopsis spp.), used from ancient times among indigenous peoples of the Americas as a staple food, in particular for making bread and porridge.
Beech (Fagus spp.)
American beech (Fagus grandifolia), used by indigenous peoples of the Americas as food. Several tribes sought stores of beech nuts gathered by chipmunks and deer mice, thus obtaining nuts that were already sorted and shelled.
European beech (Fagus sylvatica), although edible, have never been popular as a source of food. They have been used as animal feed and to extract a popular edible oil.
Breadnut (Brosimum alicastrum), used by the ancient Maya peoples as animal fodder, and as an alternative food when yield of other crops was insufficient.
Candlenut (Aleurites moluccana), used in many South East Asian cuisines.
Chestnuts (Castanea spp.)
Chinese chestnuts (Castanea mollissima), have been eaten in China since ancient times.
Sweet chestnuts (Castanea sativa), unlike most nuts, are high in starch and sugar. Extensively grown in Europe and the Himalayas.
Note that the 'water chestnut' is a tuber, not a nut.
Guinea peanut (Pachira glabra), whose seeds, like those of the related Malabar chestnut, taste similar to peanuts and are typically boiled or roasted, with the roasted seeds sometimes ground to make a hot drink.
Hazelnuts (Corylus spp.), most commercial varieties of which descend from the European hazelnut (Corylus avellana). Hazelnuts are used to make pralines, in the popular Nutella spread, in liqueurs, and in many other foods.
American hazelnut (Corylus americana), appealing for breeding because of its relative hardiness.
Deeknut (Corylus dikana), which grows in hot, excessively dry areas; an occasional garnish used in Middle Eastern dishes.
Eastern and western beaked hazel (Corylus cornuta), native to the United States.
European hazelnut (Corylus avellana), source of most commercial hazelnuts.
Filbert (Corylus maxima), commonly used as "filler" in mixed nut combinations.
Several other species are edible, but not commercially cultivated to any significant extent. These include the cold-tolerant Siberian hazelnut (C. heterophylla), C. kweichowensis, which grows in the warmer parts of China, C. sieboldiana, which grows in Japan and China, and other minor Corylus species.
Johnstone River almond (Elaeocarpus bancroftii), prized food among northern Indigenous Australians.
Karuka (Pandanus spp.), native to Papua New Guinea. Both the planted and wild species are eaten raw, roasted or boiled, providing food security when other foods are less available.
Planted karuka (Pandanus julianettii), cultivated species, planted by roughly half the rural population of Papua New Guinea.
Wild karuka (Pandanus brosimos), important food source in villages at higher altitudes in New Guinea.
Kola nut (Cola spp.), from a West African relative of the cocoa tree, is the origin of the cola flavor in soft drinks.
Kurrajong (Brachychiton spp.), native to Australia, highly regarded as a bush food among northern Indigenous Australians.
Malabar chestnut (Pachira aquatica), have a taste reminiscent of peanuts when raw, and of cashews or European chestnuts (which they strongly resemble) when roasted.
Mongongo (Ricinodendron rautanenii), abundant source of protein among Bushmen in the Kalahari Desert. Also of interest as a source of oil for skin care.
Sacha inchi (Plukenetia volubilis), the roasted seeds can be consumed as nuts.
Palm nuts (Elaeis spp.), important famine food among the Himba people in Africa
Red bopple nut (Hicksbeachia pinnatifolia), native to the east coast of Australia. Low in fat, high in calcium and potassium. Eaten as bush food. Considered similar, but inferior to the macadamia.
Yellow walnut (Beilschmiedia bancroftii), native to Australia where it served as a staple food among Indigenous Australians.
Drupe seeds
A drupe is a fleshy fruit surrounding a stone, or pit, containing a seed. Some of these seeds are culinary nuts as well.
Almonds (Prunus dulcis) have a long and important history of religious, social and cultural significance as a food. Speculated to have originated as a natural hybrid in Central Asia, almonds spread throughout the Middle East in ancient times and thence to Eurasia. The almond is one of only two nuts mentioned in the Bible.
Apricot kernels are sometimes used as an almond substitute; an apricot-seed-derived ersatz marzipan, known as "Persipan" in German, is extensively used in foods like Stollen.
Australian cashew nut (Semecarpus australiensis) is a source of food for Indigenous Australians of north-eastern Queensland and Australia's Northern Territory.
Baru nut (Dipteryx alata) is a source of food for indigenous Afro-Brazilian communities living in the Brazilian Cerrado. The nut is eaten toasted or boiled.
Betel or areca nuts (Areca catechu) are chewed in many cultures as a psychoactive drug. They are also used in Indian cuisine to make sweet after-dinner treats () and breath-fresheners ().
Borneo tallow nuts (Shorea spp.) are grown in the tropical rain forests of Southeast Asia, as a source of edible oil.
Canarium spp.
Canarium nut (Canarium harveyi, Canarium indicum, or Canarium commune) has long been an important food source in Melanesia.
Chinese olive (Canarium album) pits are processed before use as an ingredient in Chinese cooking.
Pili nuts (Canarium ovatum) are native to the Philippines, where they have been cultivated for food from ancient times.
Cashews (Anacardium occidentale) grow as a drupe that is attached to the cashew apple, the fruit of the cashew tree. Native to northeastern Brazil, the cashew was introduced to India and East Africa in the sixteenth century, where they remain a major commercial crop. The nut must be roasted (or steamed) to remove the caustic shell oil before being consumed.
Chilean hazel (Gevuina avellana), from an evergreen native to South America, similar in appearance and taste to the hazelnut.
Coconut (Cocos nucifera), used worldwide as a food. The fleshy part of the seed is edible, and used either desiccated or fresh as an ingredient in many foods. The pressed oil from the coconut is used in cooking as well.
Gabon nut (Coula edulis) has a taste comparable to hazelnut or chestnut. It is eaten raw, grilled or boiled.
Hickory (Carya spp.)
Mockernut hickory (Carya tomentosa), native to North America, named after the heavy hammer ( in Dutch) required to crack the heavy shell and remove the tasty nutmeat.
Pecans (Carya illinoinensis) are the only major commercial nut tree native to North America. Pecans are eaten as a snack food, and used as an ingredient in baking and other food preparation.
Shagbark hickory (Carya ovata) has over 130 named cultivars. They are a valuable source of food for wildlife, and were eaten by indigenous peoples of the Americas and settlers alike.
Shellbark hickory (Carya laciniosa) nuts are sweet, and are the largest of the hickories. They are also eaten by a wide variety of wildlife.
Irvingia spp. are native to Africa
Bush mango (Irvingia gabonensis) has both edible fruit and an edible nut, which is used as a thickening agent in stews and soups in West African cuisines.
Ogbono nut (Irvingia wombolu) is similar to the bush mango, but the fruit is not edible.
Jack nuts (Artocarpus heterophyllus) are the seeds of the jack fruit. With a taste like chestnuts, they have an extremely low fat content of less than 1%.
Jelly palm nut (Butia capitata), sweet edible fruit, and edible nut.
Bread nuts (Artocarpus camansi) similarly have a chestnut taste and a very low fat content.
Panda oleosa is used in Gabon in a similar way to bush mango nuts, as well as to extract an edible oil.
Pekea nut, or butter-nut of Guyana (Caryocar nuciferum), harvested locally for its highly prized edible oil.
Pistachio (Pistacia vera L.), cultivated for thousands of years, native to West Asia. It is one of only two nuts mentioned in the Bible.
Walnut (Juglans spp.)
Black walnut (Juglans nigra), also popular as food for wildlife, with an appealing, distinctive flavor. Native of North America.
Butternut (Juglans cinerea) (or white walnut) is native to North America. Used extensively, in the past, by Native American tribes as food.
English walnut (Juglans regia) (or Persian walnut) was introduced to California around 1770. California now represents 99% of US walnut growth. It is often combined with salads, vegetables, fruits or desserts because of its distinctive taste.
Heartnut, or Japanese walnut (Juglans aitlanthifolia), native to Japan, with a characteristic cordate shape. Heartnuts are often toasted or baked, and can be used as a substitute for English walnuts.
Nut-like gymnosperm seeds
A gymnosperm, from the Greek (), meaning "naked seed", is a seed that does not have an enclosure. The following gymnosperms are culinary nuts. All but the ginkgo nut are from evergreens.
Cycads (Macrozamia spp.)
Burrawang nut (Macrozamia communis), a major source of starch for Indigenous Australians around Sydney.
Ginkgo nuts (Ginkgo biloba) are a common ingredient in Chinese cooking. They are starchy, low in fat, protein and calories, but high in vitamin C.
Araucaria spp.
Bunya nut (Araucaria bidwillii) is native to Queensland, Australia. Nuts are the size of walnuts, and rich in starch.
Monkey-puzzle nut (Araucaria araucana) has nuts twice the size of almonds. Rich in starch. Roasted, boiled, eaten raw, or fermented in Chile and Argentina.
Paraná pine nut (Araucaria angustifolia) (or Brazil pine nut) is an edible seed similar to pine nuts.
Pine nuts (Pinus spp.) Pine nuts can be toasted and added to salads and are used as an ingredient in pesto, among other regional uses.
Chilgoza pine (Pinus gerardiana), common in Central Asia. Nuts are used raw, roasted or in confectionery products.
Colorado pinyon (Pinus edulis), in great demand as an edible nut, with average annual production of 454 to 900 tonnes.
Korean pine (Pinus koraiensis), a pine-nut yielding species native to Asia.
Mexican pinyon (Pinus cembroides), found in Mexico and Arizona. Nuts are eaten raw, roasted, or made into flour.
Single-leaf pinyon (Pinus monophylla) grows in foothills from Mexico to Idaho. Eaten as other pine nuts. Also sometimes ground and made into pancakes.
Stone pine, or pignolia nut (Pinus pinea) is the most commercially important pine nut.
Nut-like angiosperm seeds
These culinary nuts are seeds contained within a larger fruit, and are flowering plants.
Brazil nut (Bertholletia excelsa) is harvested from an estimated 250,000–400,000 trees per year. Highly valued, and used in the confectionery and baking trades. Excellent dietary source of selenium.
Macadamia (Macadamia spp.) are primarily produced in Hawaii and Australia. Both species are native to Australia. They are a highly valued nut. Waste nuts are commonly used to extract an edible oil.
Macadamia nut (Macadamia tetraphylla) has a rough shell, and is the subject of some commercialization.
Queensland macadamia nut (Macadamia integrifolia) has a smooth shell, and is the principal commercial macadamia nut.
Paradise nut (Lecythis usitata), native to the Amazon rainforest, highly regarded by indigenous tribal people.
Peanut, or groundnut (Arachis hypogaea), a legume and grown on the ground, not on a tree or bush, originally from South America, has grown from a relatively minor crop to one of the most important commercial nut crops, in part due to the work of George Washington Carver at the beginning of the 20th century.
Peanut tree (Sterculia quadrifida) or bush peanut, native to Australia. Requires no preparation.
Soybean (Glycine max), a legume and grown on the ground, not on a tree or bush, is used as a nut, secondary to its use as an oil seed.
See also
List of edible seeds
Tiger nut (not a nut, despite its name)
Notes
References
Works cited
External links
Culinary nuts
+ | List of culinary nuts | [
"Biology"
] | 3,223 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
8,654,845 | https://en.wikipedia.org/wiki/Rubber%20technology | Rubber Technology is the subject dealing with the transformation of rubbers or elastomers into useful products, such as automobile tires, rubber mats and, exercise rubber stretching bands. The materials includes latex, natural rubber, synthetic rubber and other polymeric materials, such as thermoplastic elastomers. Rubber processed through such methods are components of a wide range of items.
Rubber products can be categorized into two main categories.
Dry rubber products (dry form)
Latex (wet rubber) products (liquid form)
Vulcanization
Most rubber products are vulcanized, a process which involves heating with a small quantity of sulphur (or an equivalent cross-linking agent) so as to stabilise the polymer chains over a wide range of temperatures. This discovery was made by Charles Goodyear in 1844, but the process is restricted to polymer chains having a double bond in the backbone. Such materials include natural rubber and polybutadiene. The range of materials available is much wider, however, since all polymers become elastomeric above their glass transition temperature. However, the elastomeric state is unstable because chains can slip past one another, resulting in creep or stress relaxation under static or dynamic load conditions. Chemical cross-links add the stability to the network that is needed for most practical applications.
Processing of rubber
Methods for processing rubber include mastication and various operations such as mixing, calendering, and extrusion, all of which are essential to bring crude rubber into a state suitable for shaping the final product. Mastication breaks down the polymer chains and lowers their molecular mass so that the viscosity is low enough for further processing. After this has been achieved, various additions can be made to the material, ready for cross-linking. Rubber may be masticated on a two-roll mill or in an industrial mixer, which come in different types.
Rubber is first compounded with additives such as sulfur, carbon black and accelerators. It is converted into a dough-like mixture, called a "compound", and then milled into sheets of the desired thickness. Rubber may then be extruded or molded before being cured.
See also
Brazilian Rubber Technology Association
Charles Goodyear Medal
Synthetic rubber
Thermoplastic elastomer
Vulcanization
References
External links
Latex Technology - What Is Wet Rubber
Manufacturing
Rubber products
Rubber industry | Rubber technology | [
"Engineering"
] | 467 | [
"Manufacturing",
"Mechanical engineering"
] |
8,655,598 | https://en.wikipedia.org/wiki/2-Methylhexane | 2-Methylhexane (C7H16, also known as isoheptane or ethylisobutylmethane) is an isomer of heptane. It is structurally a hexane molecule with a methyl group attached to its second carbon atom. It is present in most commercially available heptane products as an impurity, but it is usually not treated as an impurity in reactions, since its physical and chemical properties are very similar to those of n-heptane (straight-chain heptane).
Being an alkane, 2-methylhexane is insoluble in water but soluble in many organic solvents, such as alcohols and ether. However, 2-methylhexane is more commonly regarded as a solvent itself. Therefore, even though it is present in many commercially available heptane products, it is not considered a problematic impurity, as heptane is usually used as a solvent. Nevertheless, by careful distillation and refining, it is possible to separate 2-methylhexane from n-heptane.
Within a group of isomers, those with more branches tend to ignite more easily and combust more completely. Therefore, 2-methylhexane has a lower autoignition temperature and flash point than heptane. Theoretically, 2-methylhexane also burns with a less sooty flame, emitting higher-frequency radiation; however, as heptane and 2-methylhexane differ only slightly in branching, both burn with a bright yellow flame when ignited.
Compared to n-heptane, 2-methylhexane also has lower melting and boiling points, and its liquid density is lower than that of heptane.
On the NFPA 704 scale, 2-methylhexane is listed as a reactivity level-0 chemical, along with various other alkanes. In fact, most alkanes are unreactive except in extreme conditions, such as combustion or strong sunlight. In the presence of oxygen and a flame, 2-methylhexane, like heptane, combusts almost completely into water and carbon dioxide. Under UV light, when mixed with halogens in solvents, usually bromine in 1,1,1-trichloroethane, a substitution reaction occurs.
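For reference (this worked equation is added here for illustration and is not from the source text), the balanced equation for complete combustion of 2-methylhexane, the same as for any C7H16 isomer, is:

```latex
\mathrm{C_7H_{16}} + 11\,\mathrm{O_2} \longrightarrow 7\,\mathrm{CO_2} + 8\,\mathrm{H_2O}
```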
See also
3-Methylhexane
References
Alkanes | 2-Methylhexane | [
"Chemistry"
] | 515 | [
"Organic compounds",
"Alkanes"
] |
8,655,614 | https://en.wikipedia.org/wiki/Starter%20solenoid | A starter solenoid is an electromagnet which is actuated to engage the starter motor of an internal combustion engine. It is normally attached directly to the starter motor which it controls.
Its primary function is as the actuating coil of a contactor (a relay designed for large electric currents) which connects the battery to the starter motor proper.
All modern cars also use the starter solenoid to move the starter pinion into engagement with the ring gear of the engine.
The starter solenoid is sometimes called the starter relay, but many cars reserve that name for a separate relay which supplies power to the starter solenoid. In these cases, the ignition switch energizes the starter relay, which energizes the starter solenoid, which energizes the starter motor.
Operation
An idle starter solenoid can receive a large electric current from the car battery and a small electric current from the ignition switch. When the ignition switch is turned on, a small electric current is sent through the starter solenoid. This causes the starter solenoid to close a pair of heavy contacts, thus relaying a large electric current through the starter motor, which in turn sets the engine in motion.
The starter motor is a series, compound, or permanent magnet type electric motor with a solenoid and solenoid operated switch mounted on it. When low-current power from the starting battery is applied to the starter solenoid, usually through a key-operated switch, the solenoid closes high-current contacts for the starter motor and it starts to run. Once the engine starts, the key-operated switch is opened and the solenoid opens the contacts to the starter motor.
All modern starters rely on the solenoid to engage the starter drive with the ring gear of the flywheel. When the solenoid is energized, it operates a plunger or lever which forces the pinion into mesh with the ring gear. The pinion incorporates a one way clutch so that when the engine starts and runs it will not attempt to drive the starter motor at excessive RPM.
Some older starter designs, such as the Bendix drive, used the rotational inertia of the pinion to force it along a helical groove cut into the starter drive-shaft, and thus no mechanical linkage with the solenoid was required.
Problems
If a starter solenoid receives insufficient power from the battery, it will fail to start the motor, and may produce a rapid clicking sound. The lack of power can be caused by a low battery, by corroded or loose connections in the battery cable, or by a damaged positive (red) cable from the battery. Any of these problems will result in some, but not enough, power being sent to the solenoid, which means that the solenoid will only begin to push the engagement gear, making the metallic click sound.
See also
List of auto parts
References
Relays
Vehicle parts | Starter solenoid | [
"Technology"
] | 607 | [
"Vehicle parts",
"Components"
] |
8,655,788 | https://en.wikipedia.org/wiki/Reciprocity%20%28evolution%29 | Reciprocity in evolutionary biology refers to mechanisms whereby the evolution of cooperative or altruistic behaviour may be favoured by the probability of future mutual interactions. A corollary is how a desire for revenge can harm the collective and therefore be naturally selected against.
Main types
Three types of reciprocity have been studied extensively:
Direct reciprocity
Indirect reciprocity
Network reciprocity
Direct reciprocity
Direct reciprocity was proposed by Robert Trivers as a mechanism for the evolution of cooperation. If there are repeated encounters between the same two players in an evolutionary game in which each of them can choose either to "cooperate" or "defect", then a strategy of mutual cooperation may be favoured even if it pays each player, in the short term, to defect when the other cooperates. Direct reciprocity can lead to the evolution of cooperation only if the probability, w, of another encounter between the same two individuals exceeds the cost-to-benefit ratio of the altruistic act: w > c / b
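A minimal numerical sketch of where the w > c / b threshold comes from, assuming a standard repeated donation game in which reciprocators play tit-for-tat and a defector exploits a reciprocator exactly once; the payoff model and parameter values here are illustrative, not taken from the sources cited above.

```python
def expected_payoffs(b, c, w):
    """Expected total payoffs in a repeated donation game with continuation
    probability w (so the expected number of rounds is 1 / (1 - w)).

    - Two reciprocators (e.g. tit-for-tat) earn (b - c) every round.
    - A defector facing a reciprocator pockets b once, then gets nothing.
    """
    reciprocator_vs_reciprocator = (b - c) / (1.0 - w)
    defector_vs_reciprocator = b
    return reciprocator_vs_reciprocator, defector_vs_reciprocator

b, c = 3.0, 1.0              # benefit and cost of the altruistic act (illustrative)
for w in (0.2, 1/3, 0.5, 0.9):
    coop, defect = expected_payoffs(b, c, w)
    stable = coop >= defect   # reciprocation resists invasion by defectors
    print(f"w={w:.2f}  coop={coop:.2f}  defect={defect:.2f}  "
          f"stable={stable}  (threshold c/b={c/b:.2f})")
```

Cooperation becomes stable exactly when w reaches c/b = 1/3 in this toy setting, mirroring the rule stated above.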
Indirect reciprocity
"In the standard framework of indirect reciprocity, there are randomly chosen pairwise encounters between members of a population; the same two individuals need not meet again. One individual acts as donor, the other as recipient. The donor can decide whether or not to cooperate. The interaction is observed by a subset of the population who might inform others. Reputation allows evolution of cooperation by indirect reciprocity. Natural selection favors strategies that base the decision to help on the reputation of the recipient: studies show that people who are more helpful are more likely to receive help." In many situations cooperation is favoured and it even benefits an individual to forgive an occasional defection but cooperative societies are always unstable because mutants inclined to defect can upset any balance.
The calculations of indirect reciprocity are complicated, but again a simple rule has emerged. Indirect reciprocity can only promote cooperation if the probability, q, of knowing someone’s reputation exceeds the cost-to-benefit ratio of the altruistic act:
q > c / b
One important problem with this explanation is that individuals may be able to evolve the capacity to obscure their reputation, reducing the probability, q, that it will be known.
Individual acts of indirect reciprocity may be classified as "upstream" or "downstream":
Upstream reciprocity occurs when an act of altruism causes the recipient to perform a later act of altruism in the benefit of a third party. In other words: A helps B, which then motivates B to help C.
Downstream reciprocity occurs when the performer of an act of altruism is more likely to be the recipient of a later act of altruism. In other words: A helps B, making it more likely that C will later help A.
Network reciprocity
Real populations are not well mixed, but have spatial structures or social networks which imply that some individuals interact more often than others. One approach to capturing this effect is evolutionary graph theory, in which individuals occupy the vertices of a graph. The edges determine who interacts with whom. If a cooperator pays a cost, c, for each neighbor to receive a benefit, b, while defectors pay no costs and their neighbors receive no benefits, network reciprocity can favor cooperation. The benefit-to-cost ratio must exceed the average number of neighbors, k, per individual:
b / c > k (See below, however.)
Recent work shows that the benefit-to-cost ratio must exceed the mean degree of nearest neighbors, ⟨knn⟩:
b / c > ⟨knn⟩
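A small sketch of the two quantities appearing in these rules, computed for a toy social network; the graph, the names, and the benefit-to-cost ratio are invented for illustration.

```python
from statistics import mean

# A small illustrative social network as an adjacency list (undirected).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

degree = {v: len(nbrs) for v, nbrs in graph.items()}

# Average degree k: mean number of neighbours per individual.
k = mean(degree.values())

# Mean degree of nearest neighbours: average, over all individuals,
# of the average degree of their neighbours.
k_nn = mean(mean(degree[u] for u in nbrs) for nbrs in graph.values())

b_over_c = 3.0   # benefit-to-cost ratio of the altruistic act (illustrative)
print(f"k = {k:.2f}, mean nearest-neighbour degree = {k_nn:.2f}")
print("simple rule  b/c > k:   ", b_over_c > k)
print("refined rule b/c > k_nn:", b_over_c > k_nn)
```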
Reciprocity in social dynamics
An ethical concept known as "generalized reciprocity" holds that people should show kindness to others without anticipating prompt return favors. This kind of reciprocity emphasizes the intrinsic value of humanitarian acts and goes beyond transactional expectations. In the field of social dynamics, generalized reciprocity encourages people to have a culture of giving and unity. When people engage in this type of reciprocity, they give without thinking about what they could get back, showing that they care about the general welfare of the community. It portrays a kind of social connection in which individuals give, share, or assist without anticipating anything in return.
This selfless involvement spreads outside of close circles, creating a domino effect that improves the well-being of everybody. Therefore, generalized reciprocity is evidence of the persistent value of selfless contributions in building strong, cohesive communities. Adopting this idea means being committed to the timeless values of giving and having faith in the natural flow of advantages for both parties.
See also
Ethic of reciprocity
Generalized exchange
Polytely
Reciprocal altruism
References
Further reading
Martin Nowak Evolutionary Dynamics: Exploring the Equations of Life Harvard 2006
Martin Nowak Five Rules for the Evolution of Cooperation Science 314, 1560 (2006)
Panchanathan K. & Boyd, R. (2004). Indirect reciprocity can stabilize cooperation without the second-order free rider problem. Nature 432: 499–502. Full text
Panchanathan K. & Boyd, R. (2003) A Tale of Two Defectors: The Importance of Standing for the Evolution of Indirect Reciprocity. Journal of Theoretical Biology, 224: 115–126. Full text
Evolutionary biology | Reciprocity (evolution) | [
"Biology"
] | 1,064 | [
"Evolutionary biology"
] |
8,656,285 | https://en.wikipedia.org/wiki/Night%20hunting | "Night hunting", known in Bhutan as Bomena, is a traditional courtship custom that is practiced in some parts of Bhutan.
Similar customs have also existed in other cultures, namely in Japan.
Practice
"Night hunting" is a traditional culture of nightly courtship and romance that is practiced mostly in eastern and central rural Bhutan. There is neither the word "night" nor the word "hunting" in the original terms. The original words can be best rendered as "prowling for girls".
Young men go out at night to sneak into girls' windows to engage in sexual activities. The prowling can be solo or in groups depending on whether or not the man has a fixed date. It is the rural equivalent of an urban date. If one has talked with the girl in advance then it can be a solo activity but usually it happens after a gathering when friends decide to go prowling for girls. Most boys would have a girl in mind. Although they set out as a group, they disperse gradually as they find a partner.
Traditional two-story buildings make the prowling difficult, but the sliding window shutters with only wooden latches on the inside make it easier. Strategies vary from sneaking in the door to climbing up the side of a house to enter a window or even dropping in from the roof. The uniform architecture of Bhutanese houses, with the same design of doors and windows, also makes it easier. The age-old tradition has also come up with special tools to undo doors and windows. If the boy successfully infiltrates the dwelling, he may still be rejected by the girl he is pursuing.
The prowling may be foiled due to wrong footing, which may wake up the whole family. The intruder may get chased away with hot water splashed on him, or be thrown out of the window. Strict parents chase the intruder or threaten him with marriage or a stick while liberal ones pretend to be asleep even if they know the prowler is around. This is more likely if they know the prowler is a suitor they would like to have for their daughter. It is not difficult to guess who the prowler might be in small close-knit villages.
Boys generally attempt to complete the task and make a quick exit if the parents of the girl are in and may stay longer if the girl is alone. It is in some places a custom that a boy discovered in the morning by the parents shall become the husband of the girl, but usually the boy and the girl make sure that the boy exits before the parents get up in the morning. If he oversleeps, they may still find a way to sneak out.
The practice is all the more dramatic because it happens in pitch darkness and, traditionally, the whole family sleeps in one large room, which serves as both kitchen and living room. The prowler must know fairly well where the girl sleeps in order to find the right bed. There are stories of boys getting into the wrong bed and of grannies yelling at the boy to leave, having a good laugh, or even quietly enjoying the visit.
The culture of night prowling is fading away due to socio-economic changes. With new metal latches and locks in many houses, it is difficult for young boys now to get into the house. With modern education, modern western form of romance and dating tradition is growing and young people are no longer keen on this traditional practice, preferring to exchange love letters and fix dates.
Issues
One potential issue is the abuse of this cultural practice leading to sexual assault and rape. Perhaps a more common downside of night prowling has been rampant bastardy. Bastardy and single motherhood were less of a problem in the traditional setting with extended families and grandparents always around to look after the child.
However, the growing culture of nuclear families, the requirement for marriage certificates, the requirement of a father to register a child as a citizen, and the increasing adoption of Western-style wedding culture are leading to an increased stigma around single motherhood. This in turn is leading to a decline in sex outside wedlock and in practices such as prowling for girls.
Modern education and the literature associated with it are spreading fast and with them a worldview and culture heavily influenced by a Western, Christian moral ethos. This is fast replacing a more liberal Buddhist attitude toward sex which was prevalent in Bhutan.
In literature
The book “Love, Courtship and Marriage in Rural Bhutan” by Kyle Bauer, published by the Centre for Bhutan Studies, discusses night hunting. According to the author, Bomena, a “custom whereby a boy stealthily enters a girl’s house at night for courtship or coitus with or without prior consultation”, is commonly misunderstood in Bhutan as ‘night hunting’. The use of the vernacular word Bomena, rather than ‘night hunting’, a term loaded with ethnocentrism and ignorance of the custom, says a lot about this original village ethnography.
The current discourse and understanding of Bomena, according to the author, are naïve, biased and misrepresented, heavily influenced by changing values especially among the urban societies. One common notion is that any rural culture is ‘inferior’ and all urban cultures are ‘superior’, and replacing the rural culture with urban culture is seen as a way of emancipating the Bhutanese farmers from their ‘primitive’ culture and advancing the country.
See also
Yobai, Japan
References
Gender in Bhutan
Human sexuality
Night in culture
Sexuality in Bhutan
Sleep
Society of Bhutan
Social history of Bhutan
Women in Bhutan | Night hunting | [
"Biology"
] | 1,109 | [
"Human sexuality",
"Behavior",
"Sexuality",
"Sleep",
"Human behavior"
] |
8,656,374 | https://en.wikipedia.org/wiki/Magnetic%20induction%20tomography | Magnetic induction tomography (MIT) is an imaging technique used to image electromagnetic properties of an object by using the eddy current effect. It is also called electromagnetic induction tomography, electromagnetic tomography (EMT), eddy current tomography, and eddy current testing.
Applications
The method is used in nondestructive testing and geophysics, and has potential applications in medicine. It is also used to generate 3D images of passive electromagnetic properties, which has applications in brain imaging, cryosurgery monitoring in medical imaging, and metal flow visualization in metalworking processes. Recently, eddy current sensors have been used to scan metal additive manufacturing processes layer by layer, producing eddy current tomography images. The company AMiquam has been developing this technology since 2020.
References
Peyton, A. J., et al. "An overview of electromagnetic inductance tomography: description of three different systems." Measurement Science and Technology 7.3 (1996): 261.
Magnetic resonance imaging | Magnetic induction tomography | [
"Chemistry"
] | 202 | [
"Magnetic resonance imaging",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance stubs"
] |
8,656,824 | https://en.wikipedia.org/wiki/Jordanian%20Engineers%20Association | The Jordanian Engineers Association (in Arabic: نقابة المهندسين الأردنيين) was established as a society for engineers in 1948, and was licensed in 1949. The first general assembly of the Engineering Professionals Association was established in 1958. Tawfiq Marar became the first Engineers' Association president. The Association has 11 branches in Jordan. There are two centers in Amman and Jerusalem. The first law of the Association was enacted in 1972.
The Association has an independent legal personality run by a board elected by the general assembly in accordance with the provisions of the association law, and the association president represents it before the courts, administrative entities, and other departments.
The Association makes an annual report stating its achievements and clarifying its financial position in financial reports. Every fund of the association also makes its respective annual report, and the administrative and financial reports are presented to the general assemblies for approval.
Membership
As of 2017, the JEA had 143,549 registered members, divided into six engineering chapters. Approximately 25% of its members are female, although this is expected to rise to 30% within the next five years.
References
External links
JEA website
Engineering societies
Trade unions in Jordan
Organisations based in Jordan | Jordanian Engineers Association | [
"Engineering"
] | 256 | [
"Engineering societies"
] |
8,657,731 | https://en.wikipedia.org/wiki/Depigmentation | Depigmentation is the lightening of the skin or loss of pigment. Depigmentation of the skin can be caused by a number of local and systemic conditions. The pigment loss can be partial (injury to the skin) or complete (caused by vitiligo). It can be temporary (from tinea versicolor) or permanent (from albinism).
Most commonly, depigmentation of the skin is linked to people born with vitiligo, which produces differing areas of light and dark skin. Monobenzone also causes skin depigmentation.
Increasingly, people who are not afflicted with vitiligo experiment with lower concentrations of monobenzone creams in the hope of lightening their skin tone evenly. An alternate method of lightening is to use the chemical mequinol over an extended period of time. Both monobenzone and mequinol produce dramatic skin whitening, but react very differently. Mequinol leaves the skin looking extremely pale; however, tanning is still possible. It is important to note that the skin will not return to its original color once mequinol treatment is stopped. Mequinol should not be used by people who are allergic to any ingredient in mequinol; people who are pregnant; people who have eczema or irritated or inflamed skin; people who have an increased number of white blood cells; or people who are sensitive to sunlight or must be outside for prolonged periods of time. Mequinol is used in Europe in concentrations ranging from 2% to 20% and is approved in many countries for the treatment of solar lentigines. Monobenzone applied topically completely removes pigment in the long term, and vigorous sun-safety must be adhered to for life to avoid severe sunburn and melanomas. People using monobenzone without previously having vitiligo do so because standard products containing hydroquinone or other lightening agents are not effective for their skin, and because of price and active ingredient strength. However, monobenzone is not recommended for skin conditions other than vitiligo.
For stubborn pigmented lesions the Q-switched ruby laser, cryotherapy or TCA peels can be used to ensure the skin remains pigment-free.
See also
Skin whitening
References
Disturbances of pigmentation | Depigmentation | [
"Biology"
] | 477 | [
"Disturbances of pigmentation",
"Pigmentation"
] |
8,658,125 | https://en.wikipedia.org/wiki/Convergence%20problem | In the analytic theory of continued fractions, the convergence problem is the determination of conditions on the partial numerators ai and partial denominators bi that are sufficient to guarantee the convergence of the infinite continued fraction
This convergence problem is inherently more difficult than the corresponding problem for infinite series.
Elementary results
When the elements of an infinite continued fraction consist entirely of positive real numbers, the determinant formula can easily be applied to demonstrate when the continued fraction converges. Since the denominators Bn cannot be zero in this simple case, the problem boils down to showing that the product of successive denominators BnBn+1 grows more quickly than the product of the partial numerators a1a2a3...an+1. The convergence problem is much more difficult when the elements of the continued fraction are complex numbers.
Periodic continued fractions
An infinite periodic continued fraction is a continued fraction of the form
where k ≥ 1, the sequence of partial numerators {a1, a2, a3, ..., ak} contains no values equal to zero, and the partial numerators {a1, a2, a3, ..., ak} and partial denominators {b1, b2, b3, ..., bk} repeat over and over again, ad infinitum.
By applying the theory of linear fractional transformations to
where Ak-1, Bk-1, Ak, and Bk are the numerators and denominators of the k-1st and kth convergents of the infinite periodic continued fraction x, it can be shown that x converges to one of the fixed points of s(w) if it converges at all. Specifically, let r1 and r2 be the roots of the quadratic equation
These roots are the fixed points of s(w). If r1 and r2 are finite then the infinite periodic continued fraction x converges if and only if
the two roots are equal; or
the k-1st convergent is closer to r1 than it is to r2, and none of the first k convergents equal r2.
If the denominator Bk-1 is equal to zero then an infinite number of the denominators Bnk-1 also vanish, and the continued fraction does not converge to a finite value. And when the two roots r1 and r2 are equidistant from the k-1st convergent – or when r1 is closer to the k-1st convergent than r2 is, but one of the first k convergents equals r2 – the continued fraction x diverges by oscillation.
The special case when period k = 1
If the period of a continued fraction is 1; that is, if
where b ≠ 0, we can obtain a very strong result. First, by applying an equivalence transformation we see that x converges if and only if
converges. Then, by applying the more general result obtained above it can be shown that
converges for every complex number z except when z is a negative real number and z < −1/4. Moreover, this continued fraction y converges to the particular value of
(1 ± √(4z + 1))/2
that has the larger absolute value (except when z is real and z < −1/4, in which case the two fixed points of the LFT generating y have equal moduli and y diverges by oscillation).
By applying another equivalence transformation the condition that guarantees convergence of
can also be determined. Since a simple equivalence transformation shows that
whenever z ≠ 0, the preceding result for the continued fraction y can be restated for x. The infinite periodic continued fraction
converges if and only if z² is not a real number lying in the interval −4 < z² ≤ 0 – or, equivalently, x converges if and only if z ≠ 0 and z is not a pure imaginary number with imaginary part strictly between −2 and 2.
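A numerical sketch of the period-1 case, assuming the concrete form y = 1 + z/(1 + z/(1 + ...)) discussed above: the convergents are generated by iterating y ← 1 + z/y, and they settle on the fixed point of larger modulus except when z is real and less than −1/4.

```python
def convergents(z, n_terms=60):
    """Convergents of the periodic continued fraction y = 1 + z/(1 + z/(1 + ...)),
    generated by the iteration y_{k+1} = 1 + z / y_k."""
    y = complex(1.0)
    values = [y]
    for _ in range(n_terms):
        if y == 0:                      # guard against a vanishing convergent
            y = 1e-300
        y = 1 + z / y
        values.append(y)
    return values

for z in (2.0, 0.25 + 1j, -0.6):        # -0.6 lies on the divergent cut z < -1/4
    tail = convergents(z)[-3:]
    fixed_point = (1 + (1 + 4 * complex(z)) ** 0.5) / 2   # root of w^2 - w - z = 0
    print(f"z = {z}: last convergents {[round(abs(v), 4) for v in tail]}, "
          f"|larger fixed point| = {abs(fixed_point):.4f}")
```

For z = 2.0 and z = 0.25 + 1j the convergents settle near the fixed point of larger modulus; for z = −0.6 they keep oscillating, as the theory above predicts.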
Worpitzky's theorem
By applying the fundamental inequalities to the continued fraction
it can be shown that the following statements hold if |ai| ≤ 1/4 for the partial numerators ai, i = 2, 3, 4, ...
The continued fraction x converges to a finite value, and converges uniformly if the partial numerators ai are complex variables.
The value of x and of each of its convergents xi lies in the circular domain of radius 2/3 centered on the point z = 4/3; that is, in the region defined by |z − 4/3| ≤ 2/3.
The radius 1/4 is the largest radius over which x can be shown to converge without exception, and the region Ω is the smallest image space that contains all possible values of the continued fraction x.
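A quick numerical sanity check (not a proof) of the two statements above, assuming the classical form x = 1/(1 + a2/(1 + a3/(1 + ...))) and the classical bound |ai| ≤ 1/4; the random sampling is purely illustrative.

```python
import cmath, random

def worpitzky_convergent(a_list):
    """Convergent of x = 1/(1 + a2/(1 + a3/(1 + ... + a_n/1))),
    evaluated by the backward recurrence."""
    v = complex(1.0)
    for a in reversed(a_list):
        v = 1 + a / v
    return 1 / v

random.seed(0)
worst = 0.0
for _ in range(1000):
    n = random.randint(1, 30)
    # random partial numerators with |a_i| <= 1/4
    a_list = [0.25 * random.random() * cmath.exp(2j * cmath.pi * random.random())
              for _ in range(n)]
    x = worpitzky_convergent(a_list)
    worst = max(worst, abs(x - 4/3))

print(f"largest distance from 4/3 over 1000 trials: {worst:.4f}  (bound: 2/3 ≈ 0.6667)")
```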
Because the proof of Worpitzky's theorem employs Euler's continued fraction formula to construct an infinite series that is equivalent to the continued fraction x, and the series so constructed is absolutely convergent, the Weierstrass M-test can be applied to a modified version of x. If
and a positive real number M exists such that |ci| ≤ M (i = 2, 3, 4, ...), then the sequence of convergents {fi(z)} converges uniformly when |z| < 1/(4M),
and f(z) is analytic on that open disk.
Śleszyński–Pringsheim criterion
In the late 19th century, Śleszyński and later Pringsheim showed that a continued fraction, in which the as and bs may be complex numbers, will converge to a finite value if |bi| ≥ |ai| + 1 for all i ≥ 1.
Van Vleck's theorem
Jones and Thron attribute the following result to Van Vleck. Suppose that all the ai are equal to 1, and all the bi have arguments with |arg(bi)| ≤ π/2 − ε, i = 1, 2, 3, ...,
with ε being any positive number less than π/2. In other words, all the bi are inside a wedge which has its vertex at the origin, has an opening angle of π − 2ε, and is symmetric around the positive real axis. Then fi, the ith convergent to the continued fraction, is finite and has an argument satisfying |arg(fi)| < π/2, i = 1, 2, 3, ....
Also, the sequence of even convergents will converge, as will the sequence of odd convergents. The continued fraction itself will converge if and only if the sum of all the |bi| diverges.
Notes
References
Oskar Perron, Die Lehre von den Kettenbrüchen, Chelsea Publishing Company, New York, NY 1950.
H. S. Wall, Analytic Theory of Continued Fractions, D. Van Nostrand Company, Inc., 1948
Continued fractions
Convergence (mathematics) | Convergence problem | [
"Mathematics"
] | 1,328 | [
"Sequences and series",
"Functions and mappings",
"Convergence (mathematics)",
"Mathematical structures",
"Continued fractions",
"Mathematical objects",
"Mathematical relations",
"Number theory"
] |
8,658,158 | https://en.wikipedia.org/wiki/Sofar%20bomb | In oceanography, a sofar bomb (Sound Fixing And Ranging bomb), occasionally referred to as a sofar disc, is a long-range position-fixing system that uses impulsive sounds in the deep sound channel (SOFAR channel) of the ocean to enable pinpointing of the location of ships or crashed planes. The deep sound channel is ideal for the device, as the minimum speed of sound at that depth improves the signal's traveling ability. A position is determined from the differences in arrival times at receiving stations of known geographic locations. The useful range from the signal sources to the receiver can exceed .
Design
For this device to work as intended, it must have several qualities. Firstly, the bomb needs to detonate at the correct depth, so that it can take full advantage of the deep sound channel. The sofar bomb has to sink fast enough so that it reaches the required depth within a reasonable amount of time (usually about 5 minutes).
To determine the position of a sofar bomb that has been detonated, three or more naval stations combine their reports of when they received the signal.
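A sketch of how the arrival-time reports can be combined into a position fix, assuming straight-line propagation at a single channel sound speed and a simple grid search over candidate locations; the station layout, the times, and the numbers are invented for illustration and are not part of the historical system.

```python
import itertools, math

SOUND_SPEED = 1.5  # km/s, roughly the speed of sound in the deep sound channel

# Known station positions (km) from which arrival times are reported.
stations = {"A": (0.0, 0.0), "B": (400.0, 0.0), "C": (0.0, 300.0)}

def arrival_times(source):
    return {name: math.dist(source, pos) / SOUND_SPEED
            for name, pos in stations.items()}

true_source = (150.0, 120.0)            # unknown in practice; used here to fake data
observed = arrival_times(true_source)

def residual(candidate):
    """Sum of squared mismatches in pairwise arrival-time differences,
    which removes the unknown detonation time from the problem."""
    predicted = arrival_times(candidate)
    return sum(((observed[i] - observed[j]) - (predicted[i] - predicted[j])) ** 2
               for i, j in itertools.combinations(stations, 2))

# Coarse grid search over a 500 km x 500 km box.
best = min(((x, y) for x in range(0, 501, 5) for y in range(0, 501, 5)),
           key=residual)
print("estimated position:", best, " true position:", true_source)
```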
Benefits of the deep sound channel
Detonating the sofar bomb in the deep sound channel gives it huge benefits. The channel itself helps keep the sound waves contained within the same depth, as the rays of sound that have an upward or downward velocity are pushed back towards the deep sound channel because of refraction. Because the sound waves do not spread out vertically, the horizontal sound rays maintain far more strength than they would otherwise. This makes it far easier for the stations on shore to pick up and analyze the signal. Usually, the blasts use frequencies between 30 and 150 Hz, which also helps stop the signal from weakening too much. A side effect of this is that the slightly higher frequencies of sound waves emitted move a bit faster than the lower frequencies, making the signal that the naval stations hear have a longer duration.
History
Dr. Maurice Ewing, a pioneer of oceanography and geophysics, first suggested putting small hollow metal spheres in pilots' emergency kits during World War II. The spheres would implode when they sank to the sofar channel, acting as a secret homing beacon to be received by microphones on coastlines that could pinpoint downed pilots' positions. This technology proved to be useful for the naval conflicts during World War II by providing a method for ships to accurately report their position without use of radio, or to find crashed planes and ships. During the war, the primary model of sofar bomb used by the United States was the Mk-22. It worked exceptionally well, and had an adjustable fuse for different depth detonations. The bomb was used with a chart that detailed the depth of the deep sound channel, so that the of TNT would explode at the correct time for its location (as the deep sound channel's actual depth varies with areas of the ocean). Its main safety mechanism was the detonator that could not trigger without a water pressure that corresponded to at least .
References
https://web.archive.org/web/20070516095529/http://muller.lbl.gov/teaching/Physics10/old%20physics%2010/pages/SoundChannel.html
Sonar
Anti-submarine warfare
Oceanography | Sofar bomb | [
"Physics",
"Environmental_science"
] | 666 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
8,658,702 | https://en.wikipedia.org/wiki/Joint%20Network%20Node | For military communications, the Joint Network Node system, or JNN as it is commonly called, is a communications system the United States Military uses for remote, satellite-based communication. It is described by General Dynamics and the US Army Signal School as "the next generation of battlefield communications."
The JNN is a system developed to replace the Mobile Subscriber Equipment (MSE) for the United States Military. It provides Beyond Line of Sight (BLOS) capabilities for the Warfighter.
The JNN system includes communication equipment mounted in shelters on HMMWVs, called JNN shelters, satellite terminals mounted on trailers, and communication equipment mounted in transit cases. There are two classes of transit case equipment: Brigade Cases and Battalion Cases.
The system's core is a Promina switch and Cisco routers, with NIPRNet and SIPRNet capabilities, plus secure and non-secure voice systems, VTC, and the ability to link older "legacy" systems, such as MSE, into the global network.
The JNN is powered by a generator, which can supply power on any of three phases while the JNN is operating. The JNN is most commonly fielded with a battalion or brigade element. It is an at-the-halt (ATH) piece of equipment.
References
Military communications | Joint Network Node | [
"Engineering"
] | 272 | [
"Military communications",
"Telecommunications engineering"
] |
8,658,994 | https://en.wikipedia.org/wiki/Single%20version%20of%20the%20truth | In computerized business management, single version of the truth (SVOT), is a technical concept describing the data warehousing ideal of having either a single centralised database, or at least a distributed synchronised database, which stores all of an organisation's data in a consistent and non-redundant form. This contrasts with the related concept of single source of truth (SSOT), which refers to a data storage principle to always source a particular piece of information from one place.
Applied to message sequencing
In some systems and in the context of message processing systems (often real-time systems), this term also refers to the goal of establishing a single agreed sequence of messages within a database formed by a particular but arbitrary sequencing of records. The key concept is that data combined in a certain sequence is a "truth" which may be analyzed and processed giving particular results, and that although the sequence is arbitrary (and thus another correct but equally arbitrary sequencing would ultimately provide different results in any analysis), it is desirable to agree that the sequence enshrined in the "single version of the truth" is the version that will be considered "the truth", and that any conclusions drawn from analysis of the database are valid and unarguable, and (in a technical context) the database may be duplicated to a backup environment to ensure a persistent record is kept of the "single version of the truth".
The key point is that when the database is created from an external data source (such as a sequence of trading messages from a stock exchange), an arbitrary selection is made of one possibility from two or more equally valid representations of the input data; henceforth that decision sets "in stone" one and only one version of the truth.
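A minimal sketch of the sequencing idea, assuming a single sequencer component that stamps each incoming message with a monotonically increasing number; whichever arbitrary order the stamps record becomes the agreed "single version of the truth" that every consumer replays. The class and message names are illustrative.

```python
import itertools

class Sequencer:
    """Assigns one arbitrary-but-fixed global order to incoming messages."""
    def __init__(self):
        self._counter = itertools.count(1)
        self.log = []                    # the agreed, persisted sequence

    def ingest(self, message):
        seq = next(self._counter)
        record = (seq, message)
        self.log.append(record)          # in practice: persist / replicate to backup
        return record

sequencer = Sequencer()
# Two feeds may deliver messages that are concurrent "in the real world";
# whichever reaches the sequencer first receives the lower sequence number.
for msg in ("buy 100 XYZ", "sell 50 XYZ", "cancel order 7"):
    sequencer.ingest(msg)

for seq, msg in sequencer.log:           # every consumer replays this same order
    print(seq, msg)
```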
As applied to message sequencing
Critics of SVOT as applied to message sequencing argue that this concept is not scalable. As the world moves towards systems spread over many processing nodes, the effort involved in negotiating a single agreed-upon sequence becomes prohibitive.
However, as pointed out by Owen Rubel in his API World talk 'The New API Pattern', the SVOT is always going to be the service layer in a distributed architecture, where input and output (I/O) meet; this is also where the endpoint binding belongs, allowing for modularization and better abstraction of the I/O data across the architecture and avoiding architectural cross-cutting concerns.
See also
Closed world assumption, formal-logic assumption that any statement that is not known to be true, is considered false
Open world assumption, formal-logic assumption that the truth-value of a statement is independent of whether it is known to be true by any single observer or agent
The Kimball lifecycle, a high-level sequence tasks used to design, develop and deploy a data warehouse or business intelligence system
Dimensional modeling, a "bottom-up" approach to data warehousing pioneered by Ralph Kimball, in contrast to the older "top-down" approach of the Bill Inmon method
References
Bibliography
https://web.archive.org/web/20061015232747/http://www.industryweek.com/EventDetail.aspx?EventID=179
Data modeling
Database normalization | Single version of the truth | [
"Technology",
"Engineering"
] | 651 | [
"Computer science stubs",
"Data modeling",
"Computer science",
"Data engineering",
"Computing stubs"
] |
8,659,429 | https://en.wikipedia.org/wiki/Programming%20language%20reference | In computing, a programming language reference or language reference manual is part of the documentation associated with most mainstream programming languages. It is written for users and developers, and describes the basic elements of the language and how to use them in a program. For a command-based language, for example, this will include details of every available command and of the syntax for using it.
The reference manual is usually separate and distinct from a more detailed programming language specification meant for implementors of the language rather than those who simply use it to accomplish some processing task.
There may also be a separate introductory guide aimed at giving newcomers enough information to start writing programs, after which they can consult the reference manual for full details. Frequently, however, a single publication contains both the introductory material and the language reference.
External links
Ada 2005 Language Reference Manual (at adaic.com)
The Python Language Reference (at python.org)
The Python Language Reference Manual by Guido van Rossum and Fred L. Drake, Jr. () (at network-theory.co.uk)
References
Reference | Programming language reference | [
"Engineering"
] | 215 | [
"Software engineering",
"Programming language topics"
] |
13,290,757 | https://en.wikipedia.org/wiki/Metagame%20analysis | Metagame analysis involves framing a problem situation as a strategic game in which participants try to realise their objectives by means of the options available to them. The subsequent meta-analysis of this game gives insight in possible strategies and their outcome.
Origin
Metagame theory was developed by Nigel Howard in the 1960s as a reconstruction of mathematical game theory on a non-quantitative basis, hoping that it would thereby make more practical and intuitive sense. Metagame analysis reflects on a problem in terms of decision issues, and stakeholders who may exercise various options to gain control over these issues. The analysis reveals what likely scenarios exist, and who has the power to control the course of events. The practical application of metagame theory is based on the analysis of options method, first applied to study problems like the strategic arms race and nuclear proliferation.
Method
Metagame analysis proceeds in three phases: analysis of options, scenario development, and scenario analysis.
Analysis of options
The first phase of analysis of options consists of the following four steps:
Structure the problem by identifying the issues to be decided.
Identify the stakeholders who control the issues, either directly or indirectly.
Make an inventory of policy options by means of which the stakeholders control the issues.
Determine the dependencies between the policy options.
The dependencies between options should typically be formulated as "option X can only be implemented if option Y is also implemented", or "options Y and Z are mutually exclusive". The result is a metagame model, which can then be analysed in different ways.
Scenario development
The possible outcomes of the game, based on the combination of options, are called scenarios. In theory, in a game with N stakeholders s1, ..., sN who have Oi options each (i = 1, ..., N), there are O1×...×ON possible outcomes. As the number of stakeholders and the number of options they have increase, the number of scenarios will increase steeply due to a combinatorial explosion. Conversely, the dependencies between options will reduce the number of scenarios, because they rule out those containing logically or physically impossible combinations of options.
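A small sketch of how dependencies prune the raw O1×...×ON space, assuming binary (exercised / not exercised) options and dependencies expressed as predicates over a selection; the stakeholders, options, and constraints are invented for illustration.

```python
from itertools import product

# Each stakeholder controls a set of binary options (1 = exercise the option).
options = {
    "regulator": ["impose_tax", "subsidise"],
    "industry":  ["expand", "relocate"],
}
flat = [(s, o) for s, opts in options.items() for o in opts]

# Dependencies between options, phrased as constraints on a selection.
constraints = [
    # "subsidise" can only be implemented if "impose_tax" is also implemented
    lambda sel: not sel[("regulator", "subsidise")] or sel[("regulator", "impose_tax")],
    # "expand" and "relocate" are mutually exclusive
    lambda sel: not (sel[("industry", "expand")] and sel[("industry", "relocate")]),
]

def feasible_scenarios():
    for bits in product((0, 1), repeat=len(flat)):
        selection = dict(zip(flat, bits))
        if all(c(selection) for c in constraints):
            yield selection

scenarios = list(feasible_scenarios())
print(f"{2 ** len(flat)} raw combinations, {len(scenarios)} feasible scenarios")
```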
If the set of feasible scenarios is too large to be analysed in full, some combinations may be eliminated because the analyst judges them to be not worth considering. When doing so, the analyst should take care to preserve these particular types of scenarios:
The Status Quo, representing the future as it was previously expected.
The present scenario, which may differ from the Status Quo as it incorporates the intentions that are expressed by the stakeholders to change their plans; the Status Quo necessarily remains the same, but the present scenario may change as stakeholders interact and influence each other's plans.
The positions of different stakeholders, being the scenarios they would like others to agree to. Similar to the present scenario, positions may change through interaction.
Compromises between two stakeholders, defined as scenarios that, while not the position of either, are preferred by both to the other's position. A compromise does not necessarily have to involve all stakeholders.
Conflict points, defined as scenarios that stakeholders might move to in trying to force others to accept their positions.
Scenario analysis
The next step in the metagame analysis consists of the actual analysis of the scenarios generated so far. This analysis centres on stability and is broken down into the following four steps:
Choose a particular scenario to analyse for stability. A scenario is stable if "each stakeholder expects to do its part and expects others to do theirs." Note that stable scenarios are accepted by all stakeholders, but that acceptance does not need to be voluntary. There may be more than one stable scenario, the stability of a scenario may change, and unstable scenarios can also happen.
Identify all unilateral improvements for stakeholders and subsets of stakeholders from the particular scenario. These are all the scenarios that are both preferred by all members of a certain subset and 'reachable' by them alone changing their selection of individual options.
Identify all sanctions that exist to deter the unilateral improvements. A sanction against an improvement is a possible reaction to an improvement by the stakeholders who were not involved in the improvement. It is such that the stakeholder who was involved in the improvement finds the sanction not preferred to the particular scenario, making it not worthwhile for that stakeholder to have helped with the improvement. The general "law of stability" to be used in scenario analysis is: for a scenario to be stable, it is necessary for each credible improvement to be deterred by a credible sanction. Steps 1 to 3 need to be repeated to analyse additional scenarios. When a number of scenarios have been analysed, one can proceed to the next step:
Draw a strategic map, laying out all the threats and promises stakeholders can make to try to stabilise the situation at scenarios they prefer. Strategic maps are diagrams in which scenarios are shown by balloons, with arrows from one balloon to another representing unilateral improvements. Dotted arrows from improvement arrows to balloons represent sanctions by which the improvements may be deterred, thus changing the destination of the improvement arrow.
This analysis procedure shows that the credibility of threats and promises (sanctions and improvements) is of importance in metagame analysis. A threat or promise that the stakeholder prefers to carry out for its own sake is inherently credible. Sometimes a stakeholder may want to make credible an 'involuntary' threat or promise, to use this to move the situation in the desired direction. Such threats and promises can be made credible in three basic ways: preference change, irrationality, and deceit.
Development
Metagame analysis is still used as a technique in its own right. However it has been further developed in distinct ways as the basis of more recent approaches:
the graph model
confrontation analysis
References
Game theory | Metagame analysis | [
"Mathematics"
] | 1,170 | [
"Game theory"
] |
13,290,844 | https://en.wikipedia.org/wiki/Bicentric%20polygon | In geometry, a bicentric polygon is a tangential polygon (a polygon all of whose sides are tangent to an inner incircle) which is also cyclic — that is, inscribed in an outer circle that passes through each vertex of the polygon. All triangles and all regular polygons are bicentric. On the other hand, a rectangle with unequal sides is not bicentric, because no circle can be tangent to all four sides.
Triangles
Every triangle is bicentric. In a triangle, the radii r and R of the incircle and circumcircle respectively are related by the equation
x² = R(R − 2r),
where x is the distance between the centers of the circles. This is one version of Euler's triangle formula.
Bicentric quadrilaterals
Not all quadrilaterals are bicentric (having both an incircle and a circumcircle). Given two circles (one within the other) with radii R and r where R > r, there exists a convex quadrilateral inscribed in one of them and tangent to the other if and only if their radii satisfy
1/(R − x)² + 1/(R + x)² = 1/r²,
where x is the distance between their centers. This condition (and analogous conditions for higher order polygons) is known as Fuss' theorem.
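A quick numeric check of the relation above for the simplest bicentric quadrilateral, a square with unit side, whose incircle and circumcircle are concentric (x = 0) with r = 1/2 and R = √2/2.

```python
import math

def fuss_lhs(R, r, x):
    """Left-hand side of Fuss' relation, 1/(R - x)^2 + 1/(R + x)^2,
    for comparison with 1/r^2."""
    return 1 / (R - x) ** 2 + 1 / (R + x) ** 2

# Unit square: incircle radius 1/2, circumcircle radius sqrt(2)/2, concentric (x = 0).
R, r, x = math.sqrt(2) / 2, 0.5, 0.0
print(fuss_lhs(R, r, x), "vs", 1 / r ** 2)   # both evaluate to 4.0 (up to rounding)
```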
Polygons with n > 4
A complicated general formula is known for any number n of sides for the relation among the circumradius R, the inradius r, and the distance x between the circumcenter and the incenter; explicit but increasingly involved versions of this relation are known for specific small values of n.
Regular polygons
Every regular polygon is bicentric. In a regular polygon, the incircle and the circumcircle are concentric—that is, they share a common center, which is also the center of the regular polygon, so the distance between the incenter and circumcenter is always zero. The radius of the inscribed circle is the apothem (the shortest distance from the center to the boundary of the regular polygon).
For any regular polygon with n sides, the relations between the common edge length a, the radius r of the incircle, and the radius R of the circumcircle are r = a/(2 tan(π/n)) and R = a/(2 sin(π/n)).
For some regular polygons which can be constructed with compass and straightedge, these relations also have simple closed algebraic forms, from which decimal approximations follow (a few values are computed in the sketch below).
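A short sketch computing, per unit edge length, the inradius and circumradius from the relations above for a few constructible n; the printed decimals stand in for the table of approximations referred to in the text.

```python
import math

def bicentric_radii(n, a=1.0):
    """Inradius and circumradius of a regular n-gon with edge length a."""
    r = a / (2 * math.tan(math.pi / n))
    R = a / (2 * math.sin(math.pi / n))
    return r, R

for n in (3, 4, 5, 6, 8, 10):
    r, R = bicentric_radii(n)
    print(f"n={n:2d}  r/a={r:.6f}  R/a={R:.6f}")
```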
Poncelet's porism
If two circles are the inscribed and circumscribed circles of a particular bicentric n-gon, then the same two circles are the inscribed and circumscribed circles of infinitely many bicentric n-gons. More precisely,
every tangent line to the inner of the two circles can be extended to a bicentric n-gon by placing vertices on the line at the points where it crosses the outer circle, continuing from each vertex along another tangent line, and continuing in the same way until the resulting polygonal chain closes up to an n-gon. The fact that it will always do so is implied by Poncelet's closure theorem, which more generally applies for inscribed and circumscribed conics.
Moreover, given a circumcircle and incircle, each diagonal of the variable polygon is tangent to a fixed circle.
References
External links
Elementary geometry
Types of polygons | Bicentric polygon | [
"Mathematics"
] | 690 | [
"Elementary mathematics",
"Elementary geometry"
] |
13,291,400 | https://en.wikipedia.org/wiki/Selective%20relaxant%20binding%20agents | Selective relaxant binding agents (SRBAs) are a new class of drugs that selectively encapsulate and bind neuromuscular blocking agents (NMBAs). The first SRBA to be introduced was sugammadex. SRBAs exert a chelating action that effectively terminates an NMBA's ability to bind to nicotinic receptors.
Examples of SRBAs include:
1. Sugammadex is a modified gamma cyclodextrin that specifically encapsulates and binds the aminosteroid NMBAs: affinity is highest for rocuronium, followed by vecuronium, and relatively low affinity for pancuronium.
2. Adamgammadex is also a modified gamma-cyclodextrin, with acetyl-amino groups replacing the carboxylic acid groups of sugammadex. Early research suggests it may have a lower incidence of adverse reactions than sugammadex.
3. Calabadion 1 and calabadion 2 are cucurbituril molecules with high binding affinity for both aminosteroid and benzylisoquinoline muscle relaxants. Calabadion 2 has 89 times the affinity for rocuronium than does sugammadex.
Discovery of SRBAs
The discovery of SRBAs as a new class of drug is the result of work done by Organon laboratories at the Newhouse research site in Scotland. Cyclodextrins were explored as a means to solubilize rocuronium bromide (a steroidal NMBA) in a neutral aqueous solution. Among the numerous modified cyclodextrins created, one particular molecule was found to possess extremely high affinity for the rocuronium molecule. Originally known as Org25969, it is now generically named sugammadex sodium.
References
Anesthesia
Antidotes
Polysaccharides | Selective relaxant binding agents | [
"Chemistry"
] | 381 | [
"Carbohydrates",
"Polysaccharides"
] |
13,291,525 | https://en.wikipedia.org/wiki/Loma%20%28microsporidian%29 | Loma is a genus of microsporidian parasites, infecting fish. The taxonomic position of Loma in the family Glugeidae has been questioned by DNA sequencing results.
Species include
Loma acerinae - formerly placed in Glugea
Loma branchialis - the type species
Loma camerounensis - a parasite of the cichlid fish, Oreochromis niloticus
Loma dimorpha
Loma morhua
Loma myriophis - parasite of the ophichthid fish, Myrophis platyrhynchus
Loma salmonae - a parasite of Pacific salmon, Oncorhynchus spp.
Loma trichiuri - a parasite of a marine trichiurid fish, Trichiurus savala
References
Microsporidia genera
Parasites of fish | Loma (microsporidian) | [
"Biology"
] | 172 | [
"Fungus stubs",
"Fungi"
] |
13,293,162 | https://en.wikipedia.org/wiki/Abelspora | Abelspora is a genus of microsporidian parasites. The genus is monotypic, and contains the single species Abelspora portucalensis. The species parasitizes the shore crab, Carcinus maenas, within which it infects the hepatopancreas. The genus was first described by Carlos Azevedo in 1987, from specimens collected in Portugal.
References
Microsporidia genera
Monotypic fungus genera | Abelspora | [
"Biology"
] | 95 | [
"Fungus stubs",
"Fungi"
] |
13,293,216 | https://en.wikipedia.org/wiki/Alternate-Phase%20Return-to-Zero | Alternate-Phase Return-to-Zero (APRZ) is an optical line code.
In APRZ the field intensity drops to zero between consecutive bits, and the field phase alternates between neighbouring bits, so that if the phase of the signal is, for example, 0 in even bits (bit number 2n), the phase in odd bit slots (bit number 2n+1) will be ΔΦ, the phase alternation amplitude.
Special cases
Return-to-zero can be seen as a special case of APRZ in which ΔΦ=0, while Carrier-Suppressed Return-to-Zero (CSRZ) can be viewed as a special case of APRZ in which ΔΦ=π (and the duty cycle is 67%, at least in the standard form of CSRZ).
APRZ can be used to generate specific optical modulation formats, for example, APRZ-OOK, in which data is coded on the intensity of the signal using a binary scheme (light on=1, light off=0). APRZ is often used to designate APRZ-OOK.
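A minimal sketch of an APRZ-OOK field envelope, assuming ideal phase alternation and a simple sin²-shaped return-to-zero pulse in each bit slot; the pulse shape, sampling, and parameter values are illustrative rather than taken from any standard. Setting delta_phi to 0 reproduces plain RZ, and setting it to π gives the CSRZ-like case mentioned above (up to the duty cycle).

```python
import math, cmath

def aprz_ook_field(bits, delta_phi=math.pi / 2, samples_per_bit=16):
    """Complex field samples of an APRZ-OOK signal: return-to-zero pulses,
    on-off keyed by `bits`, with phase 0 on even bit slots and delta_phi on odd ones."""
    field = []
    for n, bit in enumerate(bits):
        phase = cmath.exp(1j * delta_phi * (n % 2))   # alternate 0, delta_phi, 0, ...
        for k in range(samples_per_bit):
            t = (k + 0.5) / samples_per_bit           # position inside the bit slot
            envelope = math.sin(math.pi * t) ** 2     # RZ pulse, zero at slot edges
            field.append(bit * envelope * phase)
    return field

samples = aprz_ook_field([1, 1, 0, 1], delta_phi=math.pi / 2)
print(len(samples), "samples; peak magnitude:", max(abs(s) for s in samples))
```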
Characteristics
The characteristic property of an APRZ signal is a spectrum similar to that of an RZ signal, except that frequency peaks are observed at a spacing of BR/2 rather than BR (where BR is the bit rate).
Line codes
Fiber-optic communications | Alternate-Phase Return-to-Zero | [
"Materials_science"
] | 283 | [
"Materials science stubs",
"Electromagnetism stubs"
] |
13,293,546 | https://en.wikipedia.org/wiki/Feynman%20checkerboard | The Feynman checkerboard, or relativistic chessboard model, was Richard Feynman's sum-over-paths formulation of the kernel for a free spin-½ particle moving in one spatial dimension. It provides a representation of solutions of the Dirac equation in (1+1)-dimensional spacetime as discrete sums.
The model can be visualised by considering relativistic random walks on a two-dimensional spacetime checkerboard. At each discrete time step ε the particle of mass m moves a distance εc to the left or right (c being the speed of light). For such a discrete motion, the Feynman path integral reduces to a sum over the possible paths. Feynman demonstrated that if each "turn" (change of moving from left to right or conversely) of the space–time path is weighted by iεmc²/ħ (with ħ denoting the reduced Planck constant; the sign of this phase factor depends on the convention used), in the limit of infinitely small checkerboard squares the sum of all weighted paths yields a propagator that satisfies the one-dimensional Dirac equation. As a result, helicity (the one-dimensional equivalent of spin) is obtained from a simple cellular-automata-type rule.
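A hedged sketch of the discrete sum over paths, evaluated as a dynamic program over the lattice rather than by enumerating paths explicitly: at each step the particle keeps its direction with weight 1 and reverses with weight iεmc²/ħ, so a path with R reversals contributes that factor to the R-th power. Units, parameter values, and the sign of the phase are illustrative; conventions differ between treatments.

```python
# Checkerboard sum over paths as a dynamic program (units with c = hbar = 1).
eps = 0.1                      # time step (illustrative)
mass = 1.0                     # particle mass (illustrative)
turn_weight = 1j * eps * mass  # weight picked up at each reversal of direction
n_steps = 40

# amp[(x, d)] = summed amplitude for arriving at lattice site x having just
# moved in direction d (+1 = right, -1 = left).
amp = {(0, +1): 1.0 + 0j}      # start at the origin, moving to the right

for _ in range(n_steps):
    new_amp = {}
    for (x, d), a in amp.items():
        for d_next in (+1, -1):
            w = 1.0 if d_next == d else turn_weight
            key = (x + d_next, d_next)
            new_amp[key] = new_amp.get(key, 0j) + a * w
    amp = new_amp

# A few entries of the resulting discrete kernel.
for (x, d), a in sorted(amp.items())[:4]:
    print(f"x = {x:+d}, final direction = {d:+d}, amplitude = {a:.5f}")
```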
The checkerboard model is important because it connects aspects of spin and chirality with propagation in spacetime and is the only sum-over-path formulation in which quantum phase is discrete at the level of the paths, taking only values corresponding to the 4th roots of unity.
History
Richard Feynman invented the model in the 1940s while developing his spacetime approach to quantum mechanics. He did not publish the result until it appeared in a text on path integrals coauthored by Albert Hibbs in the mid 1960s. The model was not included with the original path-integral article because a suitable generalization to a four-dimensional spacetime had not been found.
One of the first connections between the amplitudes prescribed by Feynman for the Dirac particle in 1+1 dimensions, and the standard interpretation of amplitudes in terms of the kernel, or propagator, was established by Jayant Narlikar in a detailed analysis. The name "Feynman chessboard model" was coined by Harold A. Gersch when he demonstrated its relationship to the one-dimensional Ising model. B. Gaveau et al. discovered a relationship between the model and a stochastic model of the telegraph equations due to Mark Kac through analytic continuation. Ted Jacobson and Lawrence Schulman examined the passage from the relativistic to the non-relativistic path integral. Subsequently, G. N. Ord showed that the chessboard model was embedded in correlations in Kac's original stochastic model and so had a purely classical context, free of formal analytic continuation. In the same year, Louis Kauffman and Pierre Noyes produced a fully discrete version related to bit-string physics, which has been developed into a general approach to discrete physics.
Extensions
Although Feynman did not live to publish extensions to the chessboard model, it is evident from his archived notes that he was interested in establishing a link between the 4th roots of unity (used as statistical weights in chessboard paths) and his discovery, with John Archibald Wheeler, that antiparticles are equivalent to particles moving backwards in time. His notes contain several sketches of chessboard paths with added spacetime loops. The first extension of the model to explicitly contain such loops was the "spiral model", in which chessboard paths were allowed to spiral in spacetime. Unlike the chessboard case, causality had to be implemented explicitly to avoid divergences, however with this restriction the Dirac equation emerged as a continuum limit. Subsequently, the roles of zitterbewegung, antiparticles and the Dirac sea in the chessboard model have been elucidated, and the implications for the Schrödinger equation considered through the non-relativistic limit.
Further extensions of the original 2-dimensional spacetime model include features such as improved summation rules and generalized lattices. There has been no consensus on an optimal extension of the chessboard model to a fully four-dimensional spacetime. Two distinct classes of extensions exist, those working with a fixed underlying lattice and those that embed the two-dimensional case in higher dimension. The advantage of the former is that the sum-over-paths is closer to the non-relativistic case, however the simple picture of a single directionally independent speed of light is lost. In the latter extensions the fixed-speed property is maintained at the expense of variable directions at each step.
References
Quantum field theory
Spinors
Dirac equation
Lattice models
Richard Feynman | Feynman checkerboard | [
"Physics",
"Materials_science"
] | 968 | [
"Quantum field theory",
"Equations of physics",
"Eponymous equations of physics",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics",
"Dirac equation"
] |
13,294,262 | https://en.wikipedia.org/wiki/Climate%20change%20in%20the%20Arctic | Due to climate change in the Arctic, this polar region is expected to become "profoundly different" by 2050. The speed of change is "among the highest in the world", with the rate of warming being 3-4 times faster than the global average. This warming has already resulted in the profound Arctic sea ice decline, the accelerating melting of the Greenland ice sheet and the thawing of the permafrost landscape. These ongoing transformations are expected to be irreversible for centuries or even millennia.
Natural life in the Arctic is affected greatly. As the tundra warms, its soil becomes more hospitable to earthworms and larger plants, and the boreal forests spread to the north - yet this also makes the landscape more prone to wildfires, which take longer to recover from than in other regions. Beavers also take advantage of this warming to colonize Arctic rivers, and their dams contribute to methane emissions due to the increase in stagnant water. The Arctic Ocean has experienced a large increase in marine primary production as warmer waters and less shade from sea ice benefit phytoplankton. At the same time, it is already less alkaline than the rest of the global ocean, so ocean acidification caused by increasing concentrations of dissolved carbon dioxide is more severe there, threatening some forms of zooplankton such as pteropods.
The Arctic Ocean is expected to see its first ice-free events in the near future - most likely before 2050, and potentially in the late 2020s or early 2030s. This would have no precedent in the last 700,000 years. Some sea ice regrows every Arctic winter, but such events are expected to occur more and more frequently as the warming increases. This has great implications for the fauna species which are dependent on sea ice, such as polar bears. For humans, trade routes across the ocean will become more convenient. Yet, multiple countries have infrastructure in the Arctic which is worth billions of dollars, and it is threatened with collapse as the underlying permafrost thaws. The Arctic's indigenous people have a long relationship with its icy conditions, and face the loss of their cultural heritage.
Further, there are numerous implications which go beyond the Arctic region. Sea ice loss not only enhances warming in the Arctic but also adds to global temperature increase through the ice-albedo feedback. Permafrost thaw results in emissions of carbon dioxide and methane that are comparable to those of major countries. Greenland melting is a significant contributor to global sea level rise. If the warming exceeds a certain threshold, there is a significant risk of the entire ice sheet being lost over an estimated 10,000 years, adding substantially to global sea levels. Warming in the Arctic may affect the stability of the jet stream, and thus extreme weather events in midlatitude regions, but there is only "low confidence" in that hypothesis.
Impacts on the physical environment
Warming
The period of 1995–2005 was the warmest decade in the Arctic since at least the 17th century, with temperatures above the 1951–1990 average. Alaska's and western Canada's temperatures also rose during that period. Research published in 2013 showed that temperatures in the region had not been as high as they currently are for at least 44,000 years, and perhaps for as long as 120,000 years. Since 2013, the Arctic annual mean surface air temperature (SAT) has remained above the 1981–2010 mean.
In 2016, there were extreme anomalies from January to February, with Arctic temperatures estimated to be well above the 1981–2010 average. In 2020, the mean SAT was again warmer than the 1981–2010 average. On 20 June 2020, a temperature of 38 °C (just over 100 °F) was recorded inside the Arctic Circle for the first time; this kind of weather had not been expected in the region until 2100. In March, April and May the average temperature in the Arctic was higher than normal. According to an attribution study published in July 2020, such a heat wave could be expected to occur only once in 80,000 years without human-induced warming, making it the strongest link between a weather event and anthropogenic climate change found to date.
Arctic amplification
Precipitation
An observed impact of climate change is a strong increase in lightning activity in the Arctic. Lightning increases the risk of wildfires. Some research suggests that, globally, a warming sufficiently far above the preindustrial level could change the dominant type of precipitation in the Arctic from snow to rain in summer and autumn.
Cryosphere loss
Sea ice
Greenland ice sheet
Biological environment
Impacts on Arctic flora
Climate change is expected to have a strong effect on the Arctic's flora, some of which is already being seen. NASA and NOAA have continuously monitored Arctic vegetation with satellite instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Very-High-Resolution Radiometer (AVHRR). Their data allow scientists to track so-called "Arctic greening" and "Arctic browning". From 1985 to 2016, greening occurred in 37.3% of all sites sampled in the tundra, whereas browning was observed in only 4.7% of the sites - typically the ones that were still experiencing cooling and drying, as opposed to the warming and wetting seen in the rest.
This expansion of vegetation in the Arctic is not uniform across types of vegetation. A major trend has been shrub-type plants taking over areas previously dominated by mosses and lichens. This change contributes to the view that the tundra biome is currently experiencing the most rapid change of any terrestrial biome on the planet. The direct impact on mosses and lichens is unclear, as very few studies exist at the species level, but climate change is likely to cause increased fluctuation and more frequent extreme events. While shrubs may increase in range and biomass, warming may also cause a decline in cushion plants such as moss campion, and since cushion plants act as facilitator species across trophic levels and fill important ecological niches in several environments, this could cause cascading effects that severely alter how these ecosystems function and are structured.
The expansion of these shrubs can also have strong effects on other important ecological dynamics, such as the albedo effect. These shrubs change the winter surface of the tundra from undisturbed, uniform snow to a mixed surface with protruding branches that disrupt the snow cover. This type of snow cover has a lower albedo, with reductions of up to 55%, which contributes to a positive feedback loop on regional and global climate warming. The reduction in albedo means that more radiation is absorbed by plants, so surface temperatures increase, which could disrupt current surface-atmosphere energy exchanges and affect the thermal regimes of permafrost. Carbon cycling is also affected by these changes in vegetation: as parts of the tundra increase their shrub cover, they behave more like boreal forests in terms of carbon cycling. This speeds up the carbon cycle, as warmer temperatures lead to increased permafrost thawing and carbon release, but also to carbon capture by plants with increased growth. It is not certain which way this balance will go, but studies have found that it is more likely to eventually lead to increased carbon dioxide in the atmosphere.
However, boreal forests, particularly those in North America, showed a different response to warming. Many boreal forests greened, but the trend was not as strong as it was for tundra of the circumpolar Arctic, mostly characterized by shrub expansion and increased growth. In North America, some boreal forests actually experienced browning over the study period. Droughts, increased forest fire activity, animal behavior, industrial pollution, and a number of other factors may have contributed to browning.
Impacts on terrestrial fauna
Arctic warming negatively affects the foraging and breeding ecology of native Arctic mammals, such as Arctic foxes or Arctic reindeer. In July 2019, 200 Svalbard reindeer were found starved to death apparently due to low precipitation related to climate change. This was only one episode in the long-term decline of the species. United States Geological Survey research suggests that the shrinkage of Arctic sea ice would eventually extirpate polar bears from Alaska, but leave some of their habitat in the Canadian Arctic Archipelago and areas off the northern Greenland coast.
As the pure Arctic climate is gradually replaced by a subarctic climate, animals adapted to those conditions spread to the north. For instance, beavers have been actively colonizing Arctic regions, and as they create dams, they flood areas which used to be permafrost, contributing to its thaw and to methane emissions from it. These colonizing species can outright replace native species, and they may also interbreed with their southern relations, as in the case of the grizzly–polar bear hybrid. This usually has the effect of reducing the genetic diversity of the genus. Infectious diseases, such as brucellosis or phocine distemper virus, may spread to populations previously separated by the cold or, in the case of marine mammals, by the sea ice.
Marine ecosystems
The reduction of sea ice has brought more sunlight to the phytoplankton and increased the annual marine primary production in the Arctic by over 30% between 1998 and 2020. As a result, the Arctic Ocean became a stronger carbon sink over this period; yet it still accounts for only 5% to 14% of the total ocean carbon sink, although it is expected to play a larger role in the future. By 2100, phytoplankton biomass in the Arctic Ocean is generally expected to increase by ~20% relative to 2000 under the low-emission scenario, and by 30-40% under the high-emission scenario.
Atlantic cod have been able to move deeper into the Arctic due to the warming waters, while polar cod and local marine mammals have been losing habitat. Many copepod species appear to be declining, which is also likely to reduce the numbers of fish which prey on them, such as walleye pollock or the arrowtooth flounder. This also affects Arctic shorebirds. For instance, around 9,000 puffins and other birds in Alaska died of starvation in 2016 because too many fish had moved north. While shorebirds also appear to nest more successfully due to the observed warming, this benefit may be more than offset by phenological mismatch between shorebirds' and other species' life cycles. Marine mammals such as ringed seals and walruses are also being negatively affected by the warming.
Greenhouse gas emissions from the Arctic
In 2024, it was reported that the Arctic had transformed from a carbon sink into a carbon source due to the impacts of climate change, mainly rising temperatures and wildfires.
Permafrost thaw
Permafrost is an important component of hydrological systems and ecosystems within the Arctic landscape. In the Northern Hemisphere the terrestrial permafrost domain comprises around 18 million km2. Within this permafrost region, the total soil organic carbon (SOC) stock is estimated to be 1,460-1,600 Pg (where 1 Pg = 1 billion tons), which constitutes double the amount of carbon currently contained in the atmosphere.
Black carbon
Black carbon deposits (from the combustion of heavy fuel oil (HFO) by Arctic shipping) absorb solar radiation in the atmosphere and strongly reduce the albedo when deposited on snow and ice, thus accelerating the melting of snow and sea ice. A 2013 study quantified that gas flaring at petroleum extraction sites contributed over 40% of the black carbon deposited in the Arctic. Research in 2019 attributed the majority (56%) of Arctic surface black carbon to emissions from Russia, followed by European emissions, with Asia also being a large source. In 2015, research suggested that reducing black carbon emissions and short-lived greenhouse gases by roughly 60 percent by 2050 could cool the Arctic by up to 0.2 °C. However, a 2019 study indicates that "Black carbon emissions will continuously rise due to increased shipping activities", specifically fishing vessels.
The number of wildfires in the Arctic Circle has increased. In 2020, Arctic wildfire emissions broke a new record, peaking at 244 megatonnes of carbon dioxide emitted. This is due to the burning of peatlands, carbon-rich soils that originate from the accumulation of waterlogged plants and which are mostly found at Arctic latitudes. These peatlands are becoming more likely to burn as temperatures increase, but their burning and release of carbon dioxide in turn make further burning more likely, in a positive feedback loop. The smoke from wildfires, defined as "brown carbon", also increases Arctic warming; its warming effect is around 30% that of black carbon. As wildfires increase with warming, this creates a positive feedback loop.
Methane clathrate deposits
Effects on other parts of the world
On ocean circulation
On mid-latitude weather
Impacts on people
Territorial claims
Growing evidence that global warming is shrinking polar ice has added to the urgency of several nations' Arctic territorial claims in hopes of establishing resource development and new shipping lanes, in addition to protecting sovereign rights.
As sea ice coverage decreases more and more, year on year, the Arctic countries (Russia, Canada, Finland, Iceland, Norway, Sweden, the United States and Denmark, representing Greenland) are making moves on the geopolitical stage to ensure access to potential new shipping lanes and oil and gas reserves, leading to overlapping claims across the region.
There is more activity concerning maritime boundaries between countries, where overlapping claims to internal waters, territorial seas and particularly Exclusive Economic Zones (EEZs) can cause friction between nations. Currently, an unclaimed triangle of international waters lies between the official maritime borders, and it is at the center of international disputes.
Claims to this unclaimed area can be submitted under the United Nations Convention on the Law of the Sea; such claims can be based on geological evidence that continental shelves extend beyond a country's current maritime borders and into international waters.
Some overlapping claims are still pending resolution by international bodies, such as a large portion containing the North Pole that is claimed by both Denmark and Russia, with some parts of it also contested by Canada. Another example is that of the Northwest Passage, globally recognized as international waters but technically lying in Canadian waters. This has led Canada to want to limit the number of ships that can pass through for environmental reasons, but the United States disputes that Canada has the authority to do so, favouring unrestricted passage of vessels.
Navigation
The Transpolar Sea Route is a future Arctic shipping lane running from the Atlantic Ocean to the Pacific Ocean across the center of the Arctic Ocean. The route is also sometimes called Trans-Arctic Route. In contrast to the Northeast Passage (including the Northern Sea Route) and the North-West Passage it largely avoids the territorial waters of Arctic states and lies in international high seas.
Governments and private industry have shown a growing interest in the Arctic. Major new shipping lanes are opening up: the Northern Sea Route had 34 passages in 2011, while the Northwest Passage had 22, more than at any time in history. Shipping companies may benefit from the shortened distance of these northern routes. Access to natural resources will increase, including valuable minerals and offshore oil and gas. Finding and controlling these resources will be difficult with the continually moving ice. Tourism may also increase as less sea ice improves safety and accessibility in the Arctic.
The melting of Arctic ice caps is likely to increase traffic in, and the commercial viability of, the Northern Sea Route. One study, for instance, projects "remarkable shifts in trade flows between Asia and Europe, diversion of trade within Europe, heavy shipping traffic in the Arctic and a substantial drop in Suez traffic. Projected shifts in trade also imply substantial pressure on an already threatened Arctic ecosystem."
Infrastructure
Toxic pollution
Impacts on indigenous peoples
As climate change speeds up, it is having more and more of a direct impact on societies around the world. This is particularly true of people who live in the Arctic, where temperatures are increasing at faster rates than at other latitudes in the world, and where traditional ways of living, deeply connected with the natural Arctic environment, are at particular risk of environmental disruption caused by these changes.
The warming of the atmosphere, and the ecological changes that come alongside it, present challenges to local communities such as the Inuit. Hunting, which is a major means of survival for some small communities, will be changed by increasing temperatures. The reduction of sea ice will cause certain species' populations to decline or even become extinct. Inuit communities are deeply reliant on seal hunting, which depends on the sea ice flats where seals are hunted.
Unexpected changes in river and snow conditions will cause herds of animals, including reindeer, to change migration patterns, calving grounds, and forage availability. In good years, some communities are fully employed by the commercial harvest of certain animals. The harvest of different animals fluctuates each year, and with rising temperatures it is likely to keep changing and to create issues for Inuit hunters, as unpredictability and the disruption of ecological cycles further complicate life in these communities, which already face significant problems: Inuit communities are among the poorest and most unemployed in North America.
Other forms of transportation in the Arctic have seen negative impacts from the current warming, with some transportation routes and pipelines on land being disrupted by the melting of ice. Many Arctic communities rely on frozen roadways to transport supplies and to travel from area to area. The changing landscape and the unpredictability of the weather are creating new challenges in the Arctic. Researchers have documented historical and current trails created by the Inuit in the Pan Inuit Trails Atlas, finding that changes in sea ice formation and breakup have resulted in changes to the routes of trails created by the Inuit.
Adaptation
Research
The individual countries within the Arctic zone, namely Canada, Denmark (Greenland), Finland, Iceland, Norway, Russia, Sweden, and the United States (Alaska), conduct independent research through a variety of organizations and agencies, public and private, such as Russia's Arctic and Antarctic Research Institute. Countries that do not have Arctic claims, but are close neighbors, conduct Arctic research as well, such as the Chinese Arctic and Antarctic Administration (CAA). The United States's National Oceanic and Atmospheric Administration (NOAA) produces an Arctic Report Card annually, containing peer-reviewed information on recent observations of environmental conditions in the Arctic relative to historical records. International cooperative research between nations has also become increasingly important:
Arctic climate change is summarized by the Intergovernmental Panel on Climate Change (IPCC) in its series of Assessment Reports and the Arctic Climate Impact Assessment.
European Space Agency (ESA) launched CryoSat-2 on 8 April 2010. It provides satellite data on Arctic ice cover change rates.
International Arctic Buoy Program: deploys and maintains buoys that provide real-time position, pressure, temperature, and interpolated ice velocity data
International Arctic Research Center: Main participants are the United States and Japan.
International Arctic Science Committee: non-governmental organization (NGO) with diverse membership, including 23 countries from 3 continents.
'Role of the Arctic Region', in conjunction with the International Polar Year, was the focus of the second international conference on Global Change Research, held in Nynäshamn, Sweden, October 2007.
SEARCH (Study of Environmental Arctic Change): A research framework originally promoted by several US agencies; an international extension is ISAC (the International Study of Arctic Change).
The 2021 Arctic Monitoring and Assessment Programme (AMAP) report, by an international team of more than 60 experts, scientists, and indigenous knowledge keepers from Arctic communities, was prepared from 2019 to 2021. It is a follow-up to the 2017 assessment, "Snow, Water, Ice and Permafrost in the Arctic" (SWIPA). The 2021 IPCC AR6 WG1 Technical Report confirmed that "[o]bserved and projected warming" was "strongest in the Arctic". According to an 11 August 2022 article published in Nature, there have been numerous reports that the Arctic has been warming two to three times as fast as the global average since 1979, but the co-authors cautioned that the recent report of a "four-fold Arctic warming ratio" was potentially an "extremely unlikely event". The annual mean Arctic Amplification (AA) index had "reached values exceeding four" from c. 2002 through 2022, according to a July 2022 article in Geophysical Research Letters.
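As a purely illustrative aside on what an "amplification ratio" of this kind means in practice, the sketch below computes the ratio of Arctic to global linear warming trends over a trailing window. The window length, the trend-ratio definition and the synthetic data are assumptions made for this example and are not the exact methodology of the studies cited above.

```python
import numpy as np

def amplification_index(arctic_anom, global_anom, years, window=43):
    """Arctic-amplification ratio: trailing-window trend ratio (illustrative)."""
    ratios = np.full(len(years), np.nan)
    for i in range(window - 1, len(years)):
        sl = slice(i - window + 1, i + 1)
        arctic_trend = np.polyfit(years[sl], arctic_anom[sl], 1)[0]
        global_trend = np.polyfit(years[sl], global_anom[sl], 1)[0]
        ratios[i] = arctic_trend / global_trend
    return ratios

# Synthetic demo series in which the Arctic warms ~3.5x faster than the globe.
years = np.arange(1950, 2023)
rng = np.random.default_rng(0)
global_anom = 0.02 * (years - 1950) + rng.normal(0, 0.05, len(years))
arctic_anom = 0.07 * (years - 1950) + rng.normal(0, 0.30, len(years))
print(amplification_index(arctic_anom, global_anom, years)[-1])   # ~3.5
```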
The 14 December 2021 16th Arctic Report Card produced by the United States's National Oceanic and Atmospheric Administration (NOAA) and released annually, examined the "interconnected physical, ecological and human components" of the circumpolar Arctic. The report said that the 12 months between October 2020 and September 2021 were the "seventh warmest over Arctic land since the record began in 1900". The 2017 report said that the melting ice in the warming Arctic was unprecedented in the past 1500 years. NOAA's State of the Arctic Reports, starting in 2006, updates some of the records of the original 2004 and 2005 Arctic Climate Impact Assessment (ACIA) reports by the intergovernmental Arctic Council and the non-governmental International Arctic Science Committee.
A 2022 United Nations Environment Programme (UNEP) report "Spreading Like Wildfire: The Rising Threat Of Extraordinary Landscape Fires" said that smoke from wildfires around the world created a positive feedback loop that is a contributing factor to Arctic melting. The 2020 Siberian heatwave was "associated with extensive burning in the Arctic Circle". Report authors said that this extreme heat event was the first to demonstrate that it would have been "almost impossible" without anthropogenic emissions and climate change.
See also
Arctic cooperation and politics
Arctic haze
Arctic sea ice ecology and history
Atlantification of the Arctic
Atmospheric Brown Cloud
Climate of the Arctic
Climate and vegetation interactions in the Arctic
Northern Sea Route
Climate change in Antarctica
Ozone depletion and climate change
Save the Arctic
References
Works cited
Climate Change 2013 Working Group 1 website.
Further reading
External links
Arctic Change website, in near-realtime
Arctic Sea Ice News & Analysis
The Arctic ice sheet, satellite map with daily updates.
NOAA: Arctic Theme Page – A comprehensive resource focused on the Arctic
Killing the Arctic Origins: Current Events in Historical Perspective (October 2020), by John McCannon
Arctic research
Effects of climate change
Environment of the Arctic
Arctic Sea, Global warming
Regional effects of climate change
Arctic | Climate change in the Arctic | [
"Physics"
] | 4,576 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice"
] |
13,294,412 | https://en.wikipedia.org/wiki/Orientation%20of%20churches | The orientation of a building refers to the direction in which it is constructed and laid out, taking account of its planned purpose and ease of use for its occupants, its relation to the path of the sun and other aspects of its environment. Within Christian church architecture, orientation is an arrangement by which the point of main interest in the interior is towards the east. The east end is where the altar is placed, often within an apse. The façade and main entrance are accordingly at the west end.
The opposite arrangement, in which the church is entered from the east and the sanctuary is at the other end, is called occidentation.
Since the eighth century, most churches have been oriented. Hence, even in the many churches where the altar end is not actually to the east, terms such as "east end", "west door" and "north aisle" are commonly used as if the church were oriented, treating the altar end as the liturgical east.
History
The first Christians faced east when praying, likely an outgrowth of the ancient Jewish custom of praying in the direction of the Holy Temple in Jerusalem. Due to this established custom, Tertullian says some non-Christians thought they worshipped the sun. Origen says: "The fact that ... of all the quarters of the heavens, the east is the only direction we turn to when we pour out prayer, the reasons for this, I think, are not easily discovered by anyone." Later on, various Church Fathers advanced mystical reasons for the custom. One such explanation is that Christ's Second Coming was expected to be from the east: "For as the lightning comes from the east and shines as far as the west, so will be the coming of the Son of Man".
At first, the orientation of the building in which Christians met was unimportant, but after the legalization of the religion in the fourth century, customs developed in this regard. These differed in Eastern and Western Christianity.
The Apostolic Constitutions, a work of Eastern Christianity written between 375 and 380 AD, gave it as a rule that churches should have the sanctuary (with apse and sacristies) at the east end, to enable Christians to pray eastward in church as in private or in small groups. In the middle of the sanctuary was the altar, behind which was the bishop's throne, flanked by the seats of the presbyters, while the laity were on the opposite side. However, even in the East there were churches (for example, in Tyre, Lebanon) that had the entrance at the east end, and the sanctuary at the west end. During the readings all looked towards the readers, the bishop and presbyters looking westward, the people eastward. The Apostolic Constitutions, like the other documents that speak of the custom of praying towards the east, do not indicate on which side of the altar the bishop stood for "the sacrifice".
The earliest Christian churches in Rome were all built with the entrance to the east, like the Jewish temple in Jerusalem. Only in the 8th or 9th century did Rome accept the orientation that had become obligatory in the Byzantine Empire and was also generally adopted in the Frankish Empire and elsewhere in northern Europe. The original Constantinian Church of the Holy Sepulchre in Jerusalem also had the altar in the west end.
The old Roman custom of having the altar at the west end and the entrance at the east was sometimes followed as late as the 11th century even in areas under Frankish rule, as seen in Petershausen (Constance), Bamberg Cathedral, Augsburg Cathedral, Regensburg Cathedral, and Hildesheim Cathedral (all in present-day Germany).
The importance attached to orientation of churches declined after the 15th century. In his instructions on the building and arrangement of churches, Charles Borromeo, archbishop of Milan from 1560 to 1584, expressed a preference for having the apse point exactly east, but accepted that, where that is impractical, a church could be built even on a north–south axis, preferably with the façade at the southern end. He stated that the altar can also be at the west end, where "in accordance with the rite of the Church it is customary for Mass to be celebrated at the main altar by a priest facing the people".
The medieval mendicant orders generally built their churches inside towns and had to fit them into the town plans, regardless of orientation. Later, in the Spanish and Portuguese colonial empires they made no attempt to observe orientation, as is seen in San Francisco de Asis Mission Church near Taos, New Mexico. Today in the West, orientation is little observed in building churches, even by the Catholic church, and still less by Protestant denominations.
Inexactitude of orientation
Charles Borromeo stated that churches ought to be oriented exactly east, in line with the rising sun at the equinoxes, not at the solstices, but some churches seem to be oriented to sunrise on the feast day of their patron saint. Thus St. Stephen's Cathedral, Vienna, is oriented in line with sunrise on St. Stephen's Day (26 December) as it fell in the Julian calendar in 1137, when construction began. However, a survey of old English churches published in 2006 showed practically no relationship with the feast days of the saints to whom they are dedicated. The results also did not conform to the theory that compass readings could have caused the variations. Taken as a body, those churches can only be said to have been oriented approximately, but not exactly, to the geographical east.
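The underlying geometric question, where on the horizon the sun rises on a given date at a given latitude, can be estimated with a standard spherical-astronomy approximation. The sketch below is an illustration only and is not the method used in the surveys cited here: it uses a rough formula for solar declination and ignores atmospheric refraction, horizon altitude and calendar details, all of which affect real alignments.

```python
import math

def solar_declination(day_of_year):
    """Rough solar declination in degrees for a given day of the year."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def sunrise_azimuth(latitude_deg, declination_deg):
    """Azimuth of sunrise from true north (degrees), ignoring refraction."""
    phi = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(phi)))

# Vienna (about 48.2 N) near the end of December: sunrise lies well south of due east.
print(sunrise_azimuth(48.2, solar_declination(360)))   # roughly 126 degrees
```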
Another survey of a smaller number of English churches examined other possible alignments also and found that, if sunset as well as sunrise is taken into account, the saint's day hypothesis covered 43% of the cases considered, and that there was a significant correspondence also with sunrise on Easter morning of the year of foundation. The results provided no support for the compass readings hypothesis.
Yet another study of English churches found that a significant proportion of churches that showed a considerable deviation from true east were constrained by neighbouring buildings in town and perhaps by site topography in rural areas.
Similarly, a survey of a total of 32 medieval churches with reliable metadata in Lower Austria and northern Germany discovered only a few aligned in accordance with the saint's feast, with no general trend. There was no evidence of the use of compasses; and there was a preferred alignment towards true east, with variations due to town and natural topography.
A notable example of an (approximately) oriented church building that – to match the contours of its location and to avoid an area that was swampy at the time of its construction – bends slightly in the middle is Quimper Cathedral in Brittany.
The modern Coventry Cathedral also faces north–south, perpendicular to the old cathedral, which was bombed by the Luftwaffe during the Blitz. The porch over the main entrance extends over the old wall and, while not connected to the original building, makes a nod towards continuity of the structure.
See also
Ad orientem
Direction of prayer
References
Church architecture
Orientation (geometry)
Solar alignment | Orientation of churches | [
"Physics",
"Mathematics"
] | 1,428 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
13,294,827 | https://en.wikipedia.org/wiki/PowerPC%20e300 | The PowerPC e300 is a family of 32-bit PowerPC microprocessor cores developed by Freescale, primarily for use in system-on-a-chip (SoC) designs, with speeds ranging up to 800 MHz, making them well suited to embedded applications.
The e300 is a superscalar RISC core with 16/16 or 32/32 kB L1 data/instruction caches and a four-stage pipeline, with load/store, system register, branch prediction and integer units, plus an optional double-precision FPU.
The e300 core is completely backwards compatible with the G2 and PowerPC 603e cores from which it derives.
The e300 core is the CPU part of several SoC processors from Freescale:
The MPC83xx PowerQUICC II Pro family of telecom and network processors.
The MPC51xx and MPC52xx family of automotive and industrial control processors.
The MSC7120 GPON optical network processor with an integrated DSP unit.
References
E300
E300 | PowerPC e300 | [
"Technology"
] | 213 | [
"Computing stubs",
"Computer hardware stubs"
] |
13,295,107 | https://en.wikipedia.org/wiki/Transversal%20%28geometry%29 | In geometry, a transversal is a line that passes through two lines in the same plane at two distinct points. Transversals play a role in establishing whether two or more other lines in the Euclidean plane are parallel. The intersections of a transversal with two lines create various types of pairs of angles: consecutive interior angles, consecutive exterior angles, corresponding angles, and alternate angles. As a consequence of Euclid's parallel postulate, if the two lines are parallel, consecutive interior angles are supplementary, corresponding angles are equal, and alternate angles are equal.
Angles of a transversal
A transversal produces 8 angles, as shown in the graph at the above left:
4 with each of the two lines, namely α, β, γ and δ and then α1, β1, γ1 and δ1; and
4 of which are interior (between the two lines), namely α, β, γ1 and δ1 and 4 of which are exterior, namely α1, β1, γ and δ.
A transversal that cuts two parallel lines at right angles is called a perpendicular transversal. In this case, all 8 angles are right angles.
When the lines are parallel, a case that is often considered, a transversal produces several congruent angles and several supplementary angles. Some of these angle pairs have specific names and are discussed below: corresponding angles, alternate angles, and consecutive angles.
Alternate angles
Alternate angles are the four pairs of angles that:
have distinct vertex points,
lie on opposite sides of the transversal and
both angles are interior or both angles are exterior.
If the two angles of one pair are congruent (equal in measure), then the angles of each of the other pairs are also congruent.
Proposition 1.27 of Euclid's Elements, a theorem of absolute geometry (hence valid in both hyperbolic and Euclidean Geometry), proves that if the angles of a pair of alternate angles of a transversal are congruent then the two lines are parallel (non-intersecting).
It follows from Euclid's parallel postulate that if the two lines are parallel, then the angles of a pair of alternate angles of a transversal are congruent (Proposition 1.29 of Euclid's Elements).
Corresponding angles
Corresponding angles are the four pairs of angles that:
have distinct vertex points,
lie on the same side of the transversal and
one angle is interior and the other is exterior.
Two lines are parallel if and only if the two angles of any pair of corresponding angles of any transversal are congruent (equal in measure).
Proposition 1.28 of Euclid's Elements, a theorem of absolute geometry (hence valid in both hyperbolic and Euclidean Geometry), proves that if the angles of a pair of corresponding angles of a transversal are congruent then the two lines are parallel (non-intersecting).
It follows from Euclid's parallel postulate that if the two lines are parallel, then the angles of a pair of corresponding angles of a transversal are congruent (Proposition 1.29 of Euclid's Elements).
If the angles of one pair of corresponding angles are congruent, then the angles of each of the other pairs are also congruent. In the various images with parallel lines on this page, corresponding angle pairs are: α=α1, β=β1, γ=γ1 and δ=δ1.
Consecutive interior angles
Consecutive interior angles are the two pairs of angles that:
have distinct vertex points,
lie on the same side of the transversal and
are both interior.
Two lines are parallel if and only if the two angles of any pair of consecutive interior angles of any transversal are supplementary (sum to 180°).
Proposition 1.28 of Euclid's Elements, a theorem of absolute geometry (hence valid in both hyperbolic and Euclidean Geometry), proves that if the angles of a pair of consecutive interior angles are supplementary then the two lines are parallel (non-intersecting).
It follows from Euclid's parallel postulate that if the two lines are parallel, then the angles of a pair of consecutive interior angles of a transversal are supplementary (Proposition 1.29 of Euclid's Elements).
If one pair of consecutive interior angles is supplementary, the other pair is also supplementary.
Other characteristics of transversals
If three lines in general position that form a triangle are cut by a transversal, the lengths of the six resulting segments satisfy Menelaus' theorem, as the numerical sketch below illustrates.
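As a concrete check of that statement, the following sketch cuts a triangle with a transversal and verifies numerically that the product of the signed segment ratios equals -1, which is the signed form of Menelaus' theorem. The particular triangle, transversal and helper functions are invented for the illustration.

```python
import numpy as np

def signed_ratio(p, q, r):
    """Signed ratio pq/qr for collinear points (positive when q lies between p and r)."""
    u = r - p
    return np.dot(q - p, u) / np.dot(r - q, u)

def line_intersection(p1, d1, p2, d2):
    """Intersection of the lines p1 + t*d1 and p2 + s*d2."""
    M = np.column_stack((d1, -d2))
    t, _ = np.linalg.solve(M, p2 - p1)
    return p1 + t * d1

A, B, C = map(np.array, [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)])   # the triangle
p, d = np.array([0.0, -1.0]), np.array([1.0, 1.0])              # transversal y = x - 1

F = line_intersection(A, B - A, p, d)   # where the transversal meets line AB
D = line_intersection(B, C - B, p, d)   # ... line BC
E = line_intersection(C, A - C, p, d)   # ... line CA (beyond the segment here)

product = signed_ratio(A, F, B) * signed_ratio(B, D, C) * signed_ratio(C, E, A)
print(product)   # -1.0 up to round-off, as Menelaus' theorem requires
```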
Related theorems
Euclid's formulation of the parallel postulate may be stated in terms of a transversal. Specifically, if the interior angles on the same side of the transversal sum to less than two right angles, then the lines must intersect. In fact, Euclid uses the same phrase in Greek that is usually translated as "transversal".
Euclid's Proposition 27 states that if a transversal intersects two lines so that alternate interior angles are congruent, then the lines are parallel. Euclid proves this by contradiction: If the lines are not parallel then they must intersect and a triangle is formed. Then one of the alternate angles is an exterior angle equal to the other angle which is an opposite interior angle in the triangle. This contradicts Proposition 16 which states that an exterior angle of a triangle is always greater than the opposite interior angles.
Euclid's Proposition 28 extends this result in two ways. First, if a transversal intersects two lines so that corresponding angles are congruent, then the lines are parallel. Second, if a transversal intersects two lines so that interior angles on the same side of the transversal are supplementary, then the lines are parallel. These follow from the previous proposition by applying the fact that opposite angles of intersecting lines are equal (Prop. 15) and that adjacent angles on a line are supplementary (Prop. 13). As noted by Proclus, Euclid gives only three of a possible six such criteria for parallel lines.
Euclid's Proposition 29 is a converse to the previous two. First, if a transversal intersects two parallel lines, then the alternate interior angles are congruent. If not, then one is greater than the other, which implies its supplement is less than the supplement of the other angle. This implies that there are interior angles on the same side of the transversal which are less than two right angles, contradicting the fifth postulate. The proposition continues by stating that on a transversal of two parallel lines, corresponding angles are congruent and the interior angles on the same side are equal to two right angles. These statements follow in the same way that Prop. 28 follows from Prop. 27.
Euclid's proof makes essential use of the fifth postulate; however, modern treatments of geometry use Playfair's axiom instead. To prove Proposition 29 assuming Playfair's axiom, let a transversal cross two parallel lines and suppose that the alternate interior angles are not equal. Draw a third line through the point where the transversal crosses the first line, but with an angle equal to the angle the transversal makes with the second line. This produces two different lines through a point, both parallel to another line, contradicting the axiom.
In higher dimensions
In higher dimensional spaces, a line that intersects each of a set of lines in distinct points is a transversal of that set of lines. Unlike the two-dimensional (plane) case, transversals are not guaranteed to exist for sets of more than two lines.
In Euclidean 3-space, a regulus R is a set of skew lines such that through each point on each line of R there passes a transversal of R, and through each point of a transversal of R there passes a line of R. The set of transversals of a regulus R is also a regulus, called the opposite regulus of R. In this space, three mutually skew lines can always be extended to a regulus.
References
Elementary geometry | Transversal (geometry) | [
"Mathematics"
] | 1,678 | [
"Elementary mathematics",
"Elementary geometry"
] |
13,295,282 | https://en.wikipedia.org/wiki/Interference%20microscopy | Interference microscopy is a group of optical microscopy techniques based on measuring differences in the optical path between two beams of light that have been split.
Types include:
Classical interference microscopy
Differential interference contrast microscopy
Fluorescence interference contrast microscopy
Interference reflection microscopy
See also
Phase contrast microscopy
References
Microscopy | Interference microscopy | [
"Chemistry"
] | 46 | [
"Microscopy"
] |
13,295,572 | https://en.wikipedia.org/wiki/Cyberocracy | In futurology, cyberocracy describes a hypothetical form of government that rules by the effective use of information. The exact nature of a cyberocracy is largely speculative as, apart from Project Cybersyn, there have been no cybercratic governments; however, a growing number of cybercratic elements can be found in many developed nations. Cyberocracy theory is largely the work of David Ronfeldt, who published several papers on the theory. Some sources equate cyberocracy with algorithmic governance, although algorithms are not the only means of processing information.
Overview
Cyberocracy, from the roots 'cyber-' and '-cracy', signifies rule by way of information, especially when using interconnected computer networks. The concept involves information and its control as the source of power, and it is viewed as the next stage of political evolution.
The fundamental feature of a cyberocracy would be the rapid transmission of relevant information from the source of a problem to the people in a position to fix said problem, most likely via a system of interconnected computer networks and automated information-sorting software, with human decision makers only being called upon in the case of unusual problems, problem trends, or an appeal process pursued by an individual. Cyberocracy is the functional antithesis of traditional bureaucracies, which sometimes notoriously suffer from fiefdom, slowness, and a list of other unfortunate qualities. A bureaucracy forces and limits the flow of information through defined channels that connect discrete points, while a cyberocracy transmits volumes of information accessible to many different parties. In addition, bureaucracy deploys brittle practices such as programs and budgets, whereas cyberocracy is more adaptive, with its focus on management and cultural contexts. Ultimately, a cyberocracy may use administrative AIs, or even an AI as head of state, forming a machine-rule government.
According to Ronfeldt and Valda, it is still too early to determine the exact form of cyberocracy but that it could lead to new forms of the traditional systems of governance such as democracy, totalitarianism, and hybrid governments. Some noted that cyberocracy is still speculative since there is currently no existing cybercratic government, although it is acknowledged that some of its components are already adopted by governments in a number of developed countries.
While the outcome or the results of cyberocracy is still challenging to identify, there are those who cite that it will lead to new forms of governmental and political systems, particularly amid the emergence of new sensory apparatuses, networked society, and modes of networked governance.
References
Further reading
Forms of government
Information society
Information revolution
Politics and technology | Cyberocracy | [
"Technology"
] | 540 | [
"Computing and society",
"Information society"
] |
13,297,028 | https://en.wikipedia.org/wiki/People%20of%20the%20Web | People of the Web was a weekly Yahoo! News feature series that profiled the faces behind the Internet. It reported on World Wide Web content creators, particularly the ones "who are really changing the political, social and religious fabric," according to the show's host, award-winning journalist Kevin Sites.
The format included both video and text stories. Sites cited the media's focus on the business aspect of web content, and the relative lack of coverage of web personalities, as the motivation for the series.
Sites' previous project was a year-long series of stories for Yahoo! from various war zones around the world called "Kevin Sites in the Hot Zone", and he brought the same traveling video journalist style to this project. He used the same support crew from that series. Neeraj Khemlani, VP, Programming & Development for Yahoo! News & Info said they simply moved from putting a human face on war, to putting a human face on the Internet.
Gal Beckerman wrote in the Columbia Journalism Review that, "the idea of the series is so brilliant and filled with seemingly endless possibility that it’s amazing no one thought of it before."
The series premiered May 29, 2007. It posted several feature stories including profiles on:
lifecasters Justin Kan and iJustine from Justin.tv
former child star turned evangelical Christian Kirk Cameron
gay-activist blogger Mike Rogers who specializes in outing closeted gay politicians
a grandfather/grandson duo hosting an online worldwide group hug
a convicted felon who leads a police watchdog group
San Francisco-based video blogger Josh Wolf, who addressed the question, "Can bloggers be journalists?"
The web series has now ended.
References
External links
POTW web site on Yahoo!
People
Journalism in the United States | People of the Web | [
"Technology"
] | 349 | [
"Computing stubs",
"World Wide Web stubs"
] |
13,297,359 | https://en.wikipedia.org/wiki/H-infinity%20loop-shaping | H-infinity loop-shaping is a design methodology in modern control theory. It combines the traditional intuition of classical control methods, such as Bode's sensitivity integral, with H-infinity optimization techniques to achieve controllers whose stability and performance properties hold despite bounded differences between the nominal plant assumed in design and the true plant encountered in practice. Essentially, the control system designer describes the desired responsiveness and noise-suppression properties by weighting the plant transfer function in the frequency domain; the resulting 'loop-shape' is then 'robustified' through optimization. Robustification usually has little effect at high and low frequencies, but the response around unity-gain crossover is adjusted to maximise the system's stability margins. H-infinity loop-shaping can be applied to multiple-input multiple-output (MIMO) systems.
H-infinity loop-shaping can be carried out using commercially available software.
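As a rough illustration of the loop-shaping step, the sketch below uses the open-source python-control package; the package choice, the plant and the weight values are all assumptions made for this example rather than anything prescribed by the method or the references. A nominal plant is weighted so that the shaped loop has integral action at low frequency, a chosen crossover region and high-frequency roll-off; the robustification step is only indicated in a comment.

```python
import control

# Nominal plant: a critically damped second-order system (illustrative values).
G = control.tf([1.0], [1.0, 2.0, 1.0])

# Loop-shaping weights: W1 adds integral action and sets the crossover region,
# W2 rolls the loop off at high frequency to limit the response to sensor noise.
W1 = control.tf([2.0, 2.0], [1.0, 0.0])     # PI-like pre-compensator weight
W2 = control.tf([1.0], [0.01, 1.0])         # first-order low-pass post-weight

# Shaped plant whose open-loop frequency response is the desired loop shape.
Gs = W2 * G * W1

gm, pm, wcg, wcp = control.margin(Gs)
print(f"shaped loop: gain crossover ~{wcp:.2f} rad/s, phase margin ~{pm:.1f} deg")

# Robustification (not shown): synthesise a controller K for the shaped plant
# with an H-infinity routine, e.g. control.hinfsyn on a suitable generalized
# plant (which requires the optional slycot dependency); the controller that
# is finally implemented is then W1 * K * W2.
```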
H-infinity loop-shaping has been successfully deployed in industry. In 1995, R. Hyde, K. Glover and G. T. Shanks published a paper describing the successful application of the technique to a VTOL aircraft. In 2008, D. J. Auger, S. Crawshaw and S. L. Hall published another paper describing a successful application to a steerable marine radar tracker, noting that the technique had the following benefits:
Easy to apply – commercial software handles the hard math.
Easy to implement – standard transfer functions and state-space methods can be used.
Plug and play – no need for re-tuning on an installation-by-installation basis.
A closely related design methodology, developed at about the same time, was based on the theory of the gap metric. It was applied in 1993 for designing controllers to damp vibrations in large flexible structures at Wright-Patterson Air Force Base and the Jet Propulsion Laboratory.
See also
Control theory
H-infinity control
References
Further reading
Auger, D. J., Crawshaw, S., and Hall, S. L. (2008). Robust H-infinity Control of a Steerable Marine Radar Tracker. In Proceedings of the UKACC International Conference on Control 2008. Manchester: UKACC.
Chiang, R., Safonov, M. G., Balas, G., and Packard, A. (2007). Robust Control Toolbox, 3rd ed. Natick, MA: The Mathworks, Inc.
Glad, T. and Ljung, L. (2000). Control Theory: Multivariable and Nonlinear Methods. London: Taylor & Francis.
Georgiou T.T. and Smith M.C., Linear systems and robustness: a graph point of view, in Lecture Notes in Control and Information Sciences, Springer-Verlag, 1992, 183, pp. 114–121.
Georgiou T.T. and Smith M.C., Topological Approaches to Robustness, Lecture Notes in Control and Information Sciences, 185, pp. 222–241, Springer-Verlag, 1993.
Hyde, R.A., Glover, K. and Shanks, G. T. (1995). VSTOL first flight of an H-infinity control law. Computing and Control Engineering Journal, 6(1):11–16.
McFarlane, D. C. and Glover, K. (1989). Robust Controller Design Using Normalized Coprime Factor Plant Descriptions (Lecture Notes in Control and Information Sciences), 1st ed. New York: Springer.
Vinnicombe, G. (2000). Uncertainty and feedback: H-Infinity Loop-Shaping and the V-Gap Metric, 1st ed. London: Imperial College Press.
Zhou, K., Doyle, J. C. and Glover, K. (1995). Robust and Optimal Control. New York: Prentice-Hall.
Zhou, K. and Doyle, J. C. (1998). Essentials of Robust Control. New York: Prentice-Hall.
Control theory | H-infinity loop-shaping | [
"Mathematics"
] | 804 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
13,297,390 | https://en.wikipedia.org/wiki/I.play | i.play, also known as the Intelligent Play System, is an interactive playground designed in collaboration between Progressive Sports Technologies Ltd and Playdale Playgrounds.
The i.play has switches and lights which light up and make sounds, requiring participants to push them to earn points.
The development and evaluation of the i.play system is being conducted by Phil Hodgkins as part of a Ph.D. in sports technology at Loughborough University.
The world's first i.play system was opened to the public at Barrow Park, Barrow-in-Furness, UK on 20 July 2007.
References
External links
i.play site
Playdale website
Play (activity)
Loughborough University | I.play | [
"Biology"
] | 138 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
13,297,769 | https://en.wikipedia.org/wiki/Whipple%20formulae | In the theory of special functions, Whipple's transformation for Legendre functions, named after Francis John Welsh Whipple, arises from a general expression concerning associated Legendre functions. These formulae were previously presented from a viewpoint aimed at spherical harmonics; when the equations are viewed in terms of toroidal coordinates, whole new symmetries of the Legendre functions arise.
For associated Legendre functions of the first and second kind,
and
These expressions are valid for all parameters and . By shifting the complex degree and order in an appropriate fashion, we obtain Whipple formulae for general complex index interchange of general associated Legendre functions of the first and second kind. These are given by
and
Note that these formulae are well-behaved for all values of the degree and order, except for those with integer values. However, if we examine these formulae for toroidal harmonics, i.e. where the degree is half-integer, the order is integer, and the argument is positive and greater than unity one obtains
and
.
These are the Whipple formulae for toroidal harmonics. They show an important property of toroidal harmonics under index (the integers associated with the order and the degree) interchange.
External links
References
Special functions | Whipple formulae | [
"Mathematics"
] | 257 | [
"Special functions",
"Combinatorics"
] |
13,297,861 | https://en.wikipedia.org/wiki/Patch%20test%20%28finite%20elements%29 | The patch test in the finite element method is a simple indicator of the quality of a finite element, developed by Bruce Irons.
The patch test uses a partial differential equation on a domain consisting of several elements, set up so that the exact solution is known and can be reproduced, in principle, with zero error. Typically, in mechanics, the prescribed exact solution consists of displacements that vary as piecewise linear functions in space (called a constant strain solution). The elements pass the patch test if the finite element solution is the same as the exact solution.
It was long conjectured by engineers that passing the patch test is sufficient for the convergence of the finite element, that is, to ensure that the solutions from the finite element method converge to the exact solution of the partial differential equation as the finite element mesh is refined. However, this is not the case, and the patch test is neither sufficient nor necessary for convergence.
A broader definition of patch test (applicable to any numerical method, including and beyond finite elements) is any test problem having an exact solution that can, in principle, be exactly reproduced by the numerical approximation. Therefore, a finite-element simulation that uses linear shape functions has patch tests for which the exact solution must be piecewise linear, while higher-order finite elements have correspondingly higher-order patch tests.
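To make the idea concrete, here is a minimal, hedged sketch of a one-dimensional patch test for standard linear elements; it is not tied to any particular element formulation or software, and the node positions and exact field are invented for the example. The problem -u'' = 0 is solved on an irregular patch with a linear exact solution imposed at the two end nodes; the element "passes" if the computed nodal values reproduce that solution to round-off.

```python
import numpy as np

def linear_fem_patch_test(nodes, u_exact):
    """1D patch test for -u'' = 0 with linear elements and Dirichlet end values."""
    n = len(nodes)
    K = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
        K[e:e + 2, e:e + 2] += ke

    u = np.zeros(n)
    u[0], u[-1] = u_exact(nodes[0]), u_exact(nodes[-1])          # prescribed ends
    free = slice(1, n - 1)
    rhs = -K[free, [0, -1]] @ u[[0, -1]]                         # move BC terms to RHS
    u[free] = np.linalg.solve(K[free, free], rhs)
    return u

nodes = np.array([0.0, 0.13, 0.4, 0.55, 0.8, 1.0])   # irregular patch of elements
exact = lambda x: 2.0 + 3.0 * x                      # linear (constant-strain) field
u_h = linear_fem_patch_test(nodes, exact)
print(np.max(np.abs(u_h - exact(nodes))))            # ~1e-16: the element passes
```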
References
Finite element method | Patch test (finite elements) | [
"Mathematics"
] | 272 | [
"Applied mathematics",
"Applied mathematics stubs"
] |
13,298,334 | https://en.wikipedia.org/wiki/Methyl%20cyanoformate | Methyl cyanoformate is the organic compound with the formula CH3OC(O)CN. It is used as a reagent in organic synthesis as a source of the methoxycarbonyl group, in which context it is also known as Mander's reagent. When a lithium enolate is generated in diethyl ether or methyl t-butyl ether, treatment with Mander's reagent will selectively afford the C-acylation product. Thus, for enolate acylation reactions in which C- vs. O-selectivity is a concern, methyl cyanoformate is often used in place of more common acylation reagents such as methyl chloroformate.
Methyl cyanoformate is also an ingredient in Zyklon A. It has lachrymatory effects.
References
Methyl esters
Carboxylate esters
Reagents for organic chemistry
Nitriles
Fumigants
Blood agents
Lachrymatory agents | Methyl cyanoformate | [
"Chemistry"
] | 204 | [
"Chemical weapons",
"Functional groups",
"Lachrymatory agents",
"Reagents for organic chemistry",
"Nitriles",
"Blood agents"
] |
13,298,486 | https://en.wikipedia.org/wiki/Potassium%20chlorochromate | Potassium chlorochromate is an inorganic compound with the formula KCrO3Cl. It is the potassium salt of chlorochromate, [CrO3Cl]−. This water-soluble orange compound is used occasionally for the oxidation of organic compounds. It is sometimes called Péligot's salt, in recognition of its discoverer Eugène-Melchior Péligot.
Structure and synthesis
Potassium chlorochromate was originally prepared by treating potassium dichromate with hydrochloric acid. An improved route involves the reaction of chromyl chloride and potassium chromate:
K2CrO4 + CrO2Cl2 → 2KCrO3Cl
The salt consists of the tetrahedral chlorochromate anion. The average Cr=O bond length is 159 pm, and the Cr-Cl distance is 219 pm.
Reactions
Although the salt itself is air-stable, its aqueous solutions undergo hydrolysis in the presence of strong acids. With concentrated hydrochloric acid, it converts to chromyl chloride, which in turn reacts with water to form chromic acid and additional hydrochloric acid. When treated with 18-crown-6, it forms the lipophilic salt [K(18-crown-6)]CrO3Cl.
Peligot's salt can oxidize benzyl alcohol, a reaction which can be catalyzed by acid. A related salt, pyridinium chlorochromate, is more commonly used for this reaction.
Safety
Potassium chlorochromate is toxic upon ingestion, and may cause irritation, chemical burns, and even ulceration on contact with the skin or eyes. Like other hexavalent chromium compounds, it is also carcinogenic and mutagenic.
References
Oxidizing agents
Chromates
Potassium compounds | Potassium chlorochromate | [
"Chemistry"
] | 387 | [
"Inorganic compounds",
"Redox",
"Oxidizing agents",
"Salts",
"Inorganic compound stubs",
"Chromates"
] |