id | url | text | source | categories | token_count
|---|---|---|---|---|---|
30,724,348 | https://en.wikipedia.org/wiki/Kardar%E2%80%93Parisi%E2%80%93Zhang%20equation | In mathematics, the Kardar–Parisi–Zhang (KPZ) equation is a non-linear stochastic partial differential equation, introduced by Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang in 1986. It describes the temporal change of a height field $h(\vec{x},t)$ with spatial coordinate $\vec{x}$ and time coordinate $t$:
$$\frac{\partial h(\vec{x},t)}{\partial t} = \nu \nabla^2 h + \frac{\lambda}{2}\left(\nabla h\right)^2 + \eta(\vec{x},t).$$
Here, $\eta(\vec{x},t)$ is white Gaussian noise with average
$$\langle \eta(\vec{x},t)\rangle = 0$$
and second moment
$$\langle \eta(\vec{x},t)\,\eta(\vec{x}',t')\rangle = 2D\,\delta^{d}(\vec{x}-\vec{x}')\,\delta(t-t').$$
$\nu$, $\lambda$, and $D$ are parameters of the model, and $d$ is the dimension.
In one spatial dimension, the KPZ equation corresponds to a stochastic version of Burgers' equation with field $u(x,t)$ via the substitution $u = -\lambda\,\partial h/\partial x$.
Via the renormalization group, the KPZ equation is conjectured to be the field theory of many surface growth models, such as the Eden model, ballistic deposition, and the weakly asymmetric single step solid on solid process (SOS) model. A rigorous proof has been given by Bertini and Giacomin in the case of the SOS model.
KPZ universality class
Many interacting particle systems, such as the totally asymmetric simple exclusion process, lie in the KPZ universality class. This class is characterized by the following critical exponents in one spatial dimension (1 + 1 dimension): the roughness exponent $\alpha = \tfrac{1}{2}$, growth exponent $\beta = \tfrac{1}{3}$, and dynamic exponent $z = \tfrac{3}{2}$. In order to check if a growth model is within the KPZ class, one can calculate the width of the surface:
$$W(L,t) = \left\langle \frac{1}{L}\int_0^L \big(h(x,t)-\bar{h}(t)\big)^2\,dx \right\rangle^{1/2},$$
where $\bar{h}(t)$ is the mean surface height at time $t$ and $L$ is the size of the system. For models within the KPZ class, the main properties of the surface can be characterized by the Family–Vicsek scaling relation of the roughness
$$W(L,t) \approx L^{\alpha} f\!\left(t/L^{z}\right),$$
with a scaling function $f(u)$ satisfying $f(u) \sim u^{\beta}$ for $u \ll 1$ and $f(u) \sim \text{const}$ for $u \gg 1$.
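This scaling behaviour can be checked numerically. The following illustrative sketch simulates ballistic deposition, one of the models in the KPZ class mentioned above, and estimates the growth exponent $\beta$ from the early-time increase of the interface width; the lattice size, run length, and fitting window are arbitrary example choices.

```python
# Illustrative sketch: estimate the KPZ growth exponent beta from a ballistic
# deposition simulation (lattice size and run length are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)

def ballistic_deposition(L, n_monolayers):
    """Drop particles on L columns with periodic boundaries.

    A particle dropped at column i sticks at height max(h[i-1], h[i] + 1, h[i+1]).
    Returns the interface width W(t) sampled once per monolayer (L depositions).
    """
    h = np.zeros(L, dtype=np.int64)
    widths = []
    for step in range(1, n_monolayers * L + 1):
        i = rng.integers(L)
        h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
        if step % L == 0:
            widths.append(h.std())     # W(t) = sqrt(<(h - mean(h))^2>)
    return np.array(widths)

L = 1024
W = ballistic_deposition(L, 1000)      # stay in the growth regime, t << L^z
t = np.arange(1, len(W) + 1)

lo, hi = 10, 500                       # fitting window, well before saturation
beta = np.polyfit(np.log(t[lo:hi]), np.log(W[lo:hi]), 1)[0]
print(f"estimated growth exponent beta ~ {beta:.2f} (KPZ value: 1/3)")
```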
In 2014, Hairer and Quastel showed that, more generally, KPZ-like equations of the form
$$\frac{\partial h}{\partial t} = \nu \frac{\partial^2 h}{\partial x^2} + P\!\left(\frac{\partial h}{\partial x}\right) + \eta(x,t),$$
where $P$ is any even-degree polynomial, lie within the KPZ universality class.
A family of processes that are conjectured to be universal limits in the (1+1) KPZ universality class and govern the long time fluctuations are the Airy processes and the KPZ fixed point.
Solving the KPZ equation
Due to the nonlinearity in the equation and the presence of space-time white noise, solutions to the KPZ equation are known to not be smooth or regular, but rather 'fractal' or 'rough.' Even without the nonlinear term, the equation reduces to the stochastic heat equation, whose solution is not differentiable in the space variable but satisfies a Hölder condition with exponent less than 1/2. Thus, the nonlinear term is ill-defined in a classical sense.
In 2013, Martin Hairer made a breakthrough in solving the KPZ equation by extending the Cole–Hopf transformation and constructing approximations using Feynman diagrams. In 2014, he was awarded the Fields Medal for this work on the KPZ equation, together with his work on rough paths theory and regularity structures. Six different analytic self-similar solutions have been found for the (1+1) KPZ equation with different analytic noise terms.
Physical derivation
This derivation follows the references. Suppose we want to describe a surface growth by some partial differential equation. Let $h(x,t)$ represent the height of the surface at position $x$ and time $t$; these values are continuous. We expect that there would be a sort of smoothening mechanism. Then the simplest equation for the surface growth may be taken to be the diffusion equation,
$$\frac{\partial h}{\partial t} = \nu \frac{\partial^2 h}{\partial x^2}.$$
But this is a deterministic equation, implying the surface has no random fluctuations. The simplest way to include fluctuations is to add a noise term. Then we may employ the equation
$$\frac{\partial h}{\partial t} = \nu \frac{\partial^2 h}{\partial x^2} + \eta(x,t),$$
with $\eta(x,t)$ taken to be Gaussian white noise with mean zero and covariance $\langle \eta(x,t)\,\eta(x',t')\rangle = 2D\,\delta(x-x')\,\delta(t-t')$. This is known as the Edwards–Wilkinson (EW) equation or the stochastic heat equation with additive noise (SHE). Since this is a linear equation, it can be solved exactly by using Fourier analysis. But since the noise is Gaussian and the equation is linear, the fluctuations seen for this equation are still Gaussian. This means the EW equation is not enough to describe the surface growth of interest, so we need to add a nonlinear function for the growth. Therefore, the surface growth change in time has three contributions. The first models lateral growth as a nonlinear function of the form $F(\partial h/\partial x)$. The second is a relaxation, or regularization, through the diffusion term $\nu\,\partial^2 h/\partial x^2$, and the third is the white noise forcing $\eta(x,t)$. Therefore,
$$\frac{\partial h}{\partial t} = F\!\left(\frac{\partial h}{\partial x}\right) + \nu \frac{\partial^2 h}{\partial x^2} + \eta(x,t).$$
The key term $F(\partial h/\partial x)$, the deterministic part of the growth, is assumed to be a function only of the slope and to be a symmetric function. A key observation of Kardar, Parisi, and Zhang (KPZ) was that while a surface grows in a normal direction (to the surface), we are measuring the height along the height axis, which is perpendicular to the space axis, and hence there should appear a nonlinearity coming from this simple geometric effect. When the surface slope is small, the effect takes the form $\sqrt{1+(\partial h/\partial x)^2}$, but this leads to a seemingly intractable equation. To circumvent this difficulty, one can take a general $F$ and expand it as a Taylor series,
$$F(s) = F(0) + F'(0)\,s + \frac{F''(0)}{2}\,s^2 + \cdots.$$
The first term can be removed from the equation by a time shift, since if $h(x,t)$ solves the KPZ equation, then $\tilde h(x,t) = h(x,t) - F(0)\,t$ solves the same equation with $F$ replaced by $F - F(0)$.
The second term should vanish because of the symmetry of $F$, but could in any case have been removed from the equation by a constant velocity shift of coordinates, since if $h(x,t)$ solves the KPZ equation, then $\tilde h(x,t) = h(x - F'(0)\,t,\, t)$ solves the same equation with the linear term $F'(0)\,\partial h/\partial x$ removed.
Thus the quadratic term is the first nontrivial contribution, and it is the only one kept. Writing $\lambda = F''(0)$, we arrive at the KPZ equation
$$\frac{\partial h}{\partial t} = \nu \frac{\partial^2 h}{\partial x^2} + \frac{\lambda}{2}\left(\frac{\partial h}{\partial x}\right)^2 + \eta(x,t).$$
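As a concrete illustration of the equation just derived, the following sketch advances a naive Euler–Maruyama finite-difference discretization of the 1+1 dimensional KPZ equation; the parameter values for $\nu$, $\lambda$ and $D$ and the grid spacing are arbitrary example choices, and, as discussed above, such a lattice scheme is only a heuristic regularization of a mathematically delicate continuum equation.

```python
# Heuristic finite-difference sketch of the 1+1 dimensional KPZ equation
#   dh/dt = nu * d2h/dx2 + (lambda/2) * (dh/dx)^2 + eta(x, t)
# with periodic boundaries; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def kpz_step(h, dx, dt, nu=1.0, lam=1.0, D=1.0):
    lap = (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx**2        # d2h/dx2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)            # dh/dx
    # Discretized space-time white noise: std sqrt(2*D*dt/dx) per grid point per step.
    noise = rng.normal(0.0, np.sqrt(2.0 * D * dt / dx), size=h.size)
    return h + dt * (nu * lap + 0.5 * lam * grad**2) + noise

L, dx, dt = 512, 1.0, 0.01
h = np.zeros(L)                        # start from a flat interface
for _ in range(100_000):
    h = kpz_step(h, dx, dt)

print("mean height:", h.mean(), " interface width:", h.std())
```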
See also
Fokker–Planck equation
Fractal
Quantum field theory
Renormalization group
Rough path
Stochastic partial differential equation
Surface growth
Tracy–Widom distribution
Universality (dynamical systems)
Sources
Further reading
Statistical mechanics
Stochastic differential equations
Partial differential equations
Functions of space and time | Kardar–Parisi–Zhang equation | Physics | 1,197 |
13,552,928 | https://en.wikipedia.org/wiki/Water%20stop | A water stop or water station on a railroad is a place where steam trains stop to replenish water. The stopping of the train itself is also referred to as a "water stop". The term originates from the times of steam engines, when large amounts of water were essential. Such stops were also known as wood and water stops or coal and water stops, since it was practical to replenish the engine's fuel as well when adding water to the tender.
United States
During the very early days of steam locomotives, water stops were necessary every 7–10 miles (11–16 km) and consumed much travel time. With the introduction of tenders (a special car containing water and fuel), trains could run 100–150 miles (160–240 km) without a refill.
To accumulate the water, water stops employed water tanks, water towers and tank ponds. The water was initially pumped by windmills, watermills, or by hand pumps often by the train crew themselves. Later, small steam and gasoline engines were used.
As the U.S. railroad system expanded, large numbers of tank ponds were built by damming various small creeks that intersected the tracks in order to provide water for water stops. Largemouth bass were often stocked in tank ponds.
Many water stops along new railways evolved into new settlements. When a train stopped for water and was positioned by a water tower, a member of the engine crew, usually the fireman, swung the spigot arm out over the water tender and "jerked" the chain to begin watering. This gave rise to the 19th-century slang term "Jerkwater town" for towns too insignificant to have a regular train station. Some water stops grew into established settlements: for example, the town of Coalinga, California, formerly Coaling Station A, gets its name from the original coal stop at this location. On the other hand, with the replacement of steam engines by diesel locomotives, many of the then-obsolete water stops, especially in desert areas, became ghost towns.
During the days of the Wild West, isolated water stops were among the favorite ambush places for train robbers.
See also
Track pan - a water trough
Water crane
Notes
References
Rail infrastructure
Water supply
Steam locomotive technologies | Water stop | Chemistry,Engineering,Environmental_science | 448 |
46,780,448 | https://en.wikipedia.org/wiki/Penicillium%20meridianum | Penicillium meridianum is an anamorph species of the genus Penicillium.
References
meridianum
Fungi described in 1968
Fungus species | Penicillium meridianum | Biology | 32 |
35,411,073 | https://en.wikipedia.org/wiki/Raymond%20C.%20Archibald | Raymond Clare Archibald (7 October 1875 – 26 July 1955) was a prominent Canadian-American mathematician. He is known for his work as a historian of mathematics, his editorships of mathematical journals and his contributions to the teaching of mathematics.
Biography
Raymond Clare Archibald was born in South Branch, Stewiacke, Nova Scotia on 7 October 1875. He was the son of Abram Newcomb Archibald (1849–1883) and Mary Mellish Archibald (1849–1901). He was the fourth cousin twice removed of the famous Canadian-American astronomer and mathematician Simon Newcomb.
Archibald graduated in 1894 from Mount Allison College with a B.A. degree in mathematics and a teacher's certificate in violin. After teaching mathematics and violin for a year at the Mount Allison Ladies' College he went to Harvard, where he received a B.A. in 1896 and an M.A. in 1897. He then traveled to Europe, where he attended the Humboldt University of Berlin during 1898 and received a Ph.D. cum laude from the University of Strasbourg in 1900. His advisor was Karl Theodor Reye, and the title of his dissertation was The Cardioide and Some of its Related Curves.
He returned to Canada in 1900 and taught mathematics and violin at the Mount Allison Ladies' College until 1907. After a one-year appointment at Acadia University he accepted an invitation to join the mathematics department at Brown University. He stayed at Brown for the rest of his career, becoming a Professor Emeritus in 1943. While at Brown he created one of the finest mathematical libraries in the western hemisphere.
Archibald returned to Mount Allison in 1954 to curate the Mary Mellish Archibald Memorial Library, the library he had founded in 1905 to honor his mother. At his death the library contained 23,000 volumes, 2,700 records, and 70,000 songs in American and English poetry and drama.
Raymond Clare Archibald was a world-renowned historian of mathematics with a lifelong concern for the teaching of mathematics in secondary schools. At the presentation of his portrait to Brown University the head of the mathematics department, Professor Clarence Raymond Adams said of him:
"The instincts of the bibliophile were also his from early years. Possessing a passion for accurate detail, systematic by nature and blessed with a memory that was the marvel of his friends, he gradually acquired a knowledge of mathematical books and their values which has scarcely been equalled. This knowledge and an untiring energy he dedicated to the upbuilding of the mathematical library at Brown University. From modest beginnings he has developed this essential equipment of the mathematical investigator to a point where it has no superior, in completeness and in convenience for the user."
Honors
Archibald received honorary degrees from the University of Padua (LL.D., 1922), Mount Allison University (LL.D., 1923) and from Brown University (M.A. ad eundem, 1943).
Fellow, American Association for the Advancement of Science (1906)
Member, Deutsche Mathematiker-Vereinigung (1908)
Member, Edinburgh Mathematical Society (1909)
Member, Mathematical Association (England) (1910)
Member, Société Mathématique de France (1911)
Member, London Mathematical Society (1912)
Charter Member, Mathematical Association of America (1916); elected president for 1922
Fellow, American Academy of Arts and Sciences (1917)
Librarian, American Mathematical Society (1921-1941)
Member, Circolo Matematico di Palermo (1922)
Soci Fondatori, Unione Matematica Italiana (1924)
Founding Member, History of Science Society (1924)
Honorary Member, Society of Sciences, Cluj, Roumania (1929)
Honorary Foreign Fellow, Masarykova Akademie Prace, Prague, Czecho-Slovakia (1930)
Membre Effectif, Académie Internationale d'Histoire des Sciences (1931)
Honorary Foreign Member, Polish Mathematical Society (1934)
Honorary Member, New Brunswick Museum (1946)
Honorary Member, Mathematical Association (England) (1949)
Editorships
Associate editor, Bulletin of the American Mathematical Society (1913–20)
Editor-in-chief, American Mathematical Monthly (1919–21); associate editor (1918–19)
Associate editor, Revue Semestrielles des Publications Mathématiques (1923–34)
Associate editor, Isis (1924–48)
Associate editor, Scripta Mathematica (1932–49)
Founder and editor, Mathematical Tables and Other Aids to Computation (1943–49)
Co-founder and editor, Eudemes
Bibliography
Archibald's bibliography contains over 1,000 entries. He contributed to over 20 different journals, mathematical, scientific, educational and literary. The following are the books of which he is an author:
Margaret Gordon, Lady Bannerman, Carlyle's First Love, John Lane, 1910,
Euclid's Book on Divisions of Figures: (Peri diairéseon biblion): with a restoration based on Woepcke's text and on the Practica geometriae of Leonardo Pisano, Cambridge University Press, 1916,
The Training of Teachers of Mathematics for the Secondary Schools of the Countries Represented in the International Commission on the Teaching of Mathematics, U.S. Government Printing Office, 1917
Benjamin Peirce, 1809–1880. Biographical Sketch and Bibliography, Mathematical Association of America, 1925
Bibliography of Egyptian and Babylonian Mathematics, Plandome Press, 1929
History of Mathematics, Mathematical Association of America, 1931
Outline of the History of Mathematics, The Lancaster Press, 1932
Bibliography of the Life and Works of Simon Newcomb, J. Hope & Sons, 1932
A Semicentennial History of the American Mathematical Society, American Mathematical Society, 1938,
Mary Mellish Archibald Memorial Library Guide for Students and Scholars, Mount Allison University, 1935–46
Mathematical Table Makers, Scripta Mathematica, 1948
Geometrical Constructions with a Ruler, Scripta Mathematica, 1950
Historical Notes on the Education of Women at Mount Allison, 1854–1954, Mount Allison University, 1954
Famous Problems of Elementary Geometry, Dover, 1955
Biographies
Biographisch-Literarisches Handwörterbuch zur Geschichte der Exacten Wissenschaften Enthaltend Nachweisungen über Lebensverhältnisse und Leistungen von Mathematikern, Astronomen, Physikern, Chemikern, Mineralogen, Geologen usw. aller Völker und Zeiten ("Poggendorff"), 1904/22 and 1923/31
American Men of Science, 1905 through 1955
The Canadian Men and Women of the Time, 1912
Who's Who in Science, International, 1913
Who's Who in America, 1914/15 through 1954/55
Who's Who, 1922 through 1955
Encyclopædia Britannica, 1929
Who's Who in American Education, 1935/36, with portrait
The Compendium of American Genealogy, First Families of America, 1937
The Canadian Who's Who, 1937/38 through 1952/54
Who's Who in New England, 1916, 1938, 1948
The National Cyclopaedia of American Biography, 1938
Who's Who Among North American Authors, 1927/28 through 1936/40
Leaders in Education: A Biographical Directory, 1941
Directory of American Scholars. A Biographical Directory, 1942
Who's Who in the East, 1948 through 1953
World Biography, 1948 and 1954
The Author's & Writer's Who's Who, 1949
Who knows, and what, among authorities, experts, and the specially informed, 1949
The International Who is Who in Music, 1951
The New Century Cyclopedia of Names, 1954
Who Was Who. 1951–1960, 1964
Who Was Who in America. 1951–1960, 1964.
International Personal Bibliographie, 1800—1943
Enciclopedia Universal Ilustrada Europeo-Americana, Madrid, 1905—1930
Internationale Bibliographie der Zeitschriftenliteratur aus allen Gebieten des Wissens
A Bio-Bibliographical Finding List of Canadian Musicians
Isis Cumulative Bibliography
MacTutor
Harvard College Class of 1896. Fiftieth Anniversary Report, 1946
Further reading
Jim Tattersall and Shawnee McMurran, Raymond Clare Archibald: A Euterpean Historian of Mathematics, New England Math J., v. 36, n. 2, May 2004, pp. 31–47.
Cheryl White Ennals, Raymond Clare Archibald – Collector: The Legacy of a Scholar's Labor of Love, in The Book Disease: Atlantic Provinces Book Collectors, ed. Eric L. Swanick, London: The Vine Press, 1996, pp. 99–117.
References
External links
Brown University faculty
19th-century American mathematicians
20th-century American mathematicians
Canadian mathematicians
Harvard University alumni
Humboldt University of Berlin alumni
Mount Allison University alumni
Presidents of the Mathematical Association of America
American historians of mathematics
History of mathematics
Mathematical tables
1875 births
1955 deaths
Canadian expatriates in the United States
Canadian expatriates in Germany
The American Mathematical Monthly editors | Raymond C. Archibald | Mathematics | 1,827 |
58,597,571 | https://en.wikipedia.org/wiki/NGC%202936 | NGC 2936, also known as the Penguin Galaxy or the Porpoise Galaxy, is an interacting spiral galaxy located at a distance of 326 million light years, in the constellation Hydra. NGC 2936 is interacting with the elliptical galaxy NGC 2937, located just beneath it. They were both discovered by Albert Marth on March 3, 1864. To some astronomers, the galaxy looks like a penguin or a porpoise. NGC 2936, NGC 2937, and PGC 1237172 are included in the Atlas of Peculiar Galaxies as Arp 142 in the category "Galaxy triplet".
On 20 June 2013, the Hubble Space Telescope examined and photographed NGC 2936.
NGC 2936 once had a flat, spiral disk. The orbits of the galaxy's stars have been perturbed due to gravitational tidal interactions with NGC 2937. Gas from the center of NGC 2936 became compressed during the encounter with NGC 2937, which is shown as blue knots close to NGC 2937. The red dust that was inside the center of the galaxy has been mostly thrown out due to the collision. During the collision, gas coming from NGC 2936 triggered star formation.
PGC 1237172, an unrelated bluish irregular galaxy or edge-on spiral galaxy, is located just off to the side of NGC 2936. It is located 230 million light years away, making it closer to the Earth than the NGC 2936 collision, and it happens to be located next to two unrelated stars from the Milky Way.
In July 2024, NASA’s James Webb Space Telescope captured a vivid image of Arp 142, revealing intricate details of the interacting galaxies NGC 2936 and NGC 2937. The observations showcased new star formation regions within the Penguin galaxy, enhanced by Webb’s near- and mid-infrared capabilities, offering deeper insights into galactic evolution processes.
The brightest star in this galaxy is USNOA2 0900-06460021.
See also
List of NGC objects (2001–3000)
References
External links
Spiral galaxies
Hydra (constellation)
2936
Interacting galaxies
027422 | NGC 2936 | Astronomy | 424 |
74,752,544 | https://en.wikipedia.org/wiki/Jonathan%20Jeffers | Jonathan Jeffers is a mechanical engineer and Professor of Mechanical Engineering at Imperial College London. He was awarded a Research Professorship by the National Institute for Health and Care Research (NIHR), the first engineer to receive this award. His research focuses on improving surgical treatment of osteoarthritis.
Early life and education
Jonathan Jeffers studied Mechanical Engineering at Trinity College Dublin. He also has a PhD from the University of Southampton.
Career and research
His research focuses on topics such as orthopedic implants, external stabilisers for broken bones and biomechanics in hip surgery. He is also a co-investigator at the Smart Materials Hub of the UK Regenerative Medicine Platform. During the COVID-19 pandemic he worked on 3D printing FFP3 face masks through injection moulding.
He was awarded an NIHR Research Professorship in 2019 for his research on improving treatment options for osteoarthritis after early intervention orthopaedic surgery.
Jeffers is the Chief Technology Officer of OSSTEC, a company that manufactures orthopaedic implants using 3D printing. He was a co-founder of Additive Instruments, which was acquired by Smith & Nephew in 2023.
References
Living people
NIHR Research Professors
Mechanical engineers
Alumni of Trinity College Dublin
Alumni of the University of Southampton
Biomedical engineers
Academics of Imperial College London
Year of birth missing (living people) | Jonathan Jeffers | Engineering | 284 |
22,874,754 | https://en.wikipedia.org/wiki/Cold%20River%20Virgin%20Forest | Cold River Virgin Forest is a virgin hemlock–northern hardwood forest in northwestern Massachusetts, United States. Believed to be the only stand of its type in New England, it was designated a National Natural Landmark by the National Park Service in April 1980.
It is located within Mohawk Trail State Forest nine miles southeast of North Adams in Berkshire and Franklin counties. The forest features hemlocks and sugar maples exceeding 400 years in age.
See also
List of National Natural Landmarks in Massachusetts
List of Massachusetts State Parks
List of old growth forests in Massachusetts
References
Protected areas of Berkshire County, Massachusetts
Protected areas of Franklin County, Massachusetts
National Natural Landmarks in Massachusetts
Charlemont, Massachusetts
Forests of Massachusetts
Old-growth forests | Cold River Virgin Forest | Biology | 139 |
69,791,721 | https://en.wikipedia.org/wiki/Candolleomyces%20candolleanus | Candolleomyces candolleanus (formerly known as Psathyrella candolleana) is a mushroom in the family Psathyrellaceae. It is commonly found growing in small groups around stumps and tree roots on lawns and pastures in Europe and North America. In 2014, it was reported from Iraq. The coloring varies between white and golden brown.
Description
The cap is tan when young, growing to in diameter, initially conical, later becoming rounded and finally with upturned margins in maturity. The cap margin is irregular and radially asymmetrical—a defining characteristic of this species. It can retain veil fragments on the edge and center. The white stalk is tall and 3–7 mm wide. The spore print is purple-brown, while spores are smooth and elliptical, measuring 6.5–8 by 4–5 μm.
Etymology
The specific epithet candolleanus honors Swiss botanist Augustin Pyramus de Candolle.
Edibility
While it is edible and may have a good flavor, it is not recommended due to its thin flesh, alleged poor culinary value and consistency, as well as difficulty in identification.
Similar species
One similar species is Psathyrella gracilis. Some species may have darker caps when young, drying to match that of C. candolleanus.
See also
List of Psathyrella species
References
External links
Psathyrellaceae
Edible fungi
Fungi described in 1818
Fungi of Asia
Fungi of Europe
Fungi of North America
Taxa named by Elias Magnus Fries
Fungus species | Candolleomyces candolleanus | Biology | 309 |
28,551,310 | https://en.wikipedia.org/wiki/Sea%20ice%20growth%20processes | Sea ice is a complex composite composed primarily of pure ice in various states of crystallization, but including air bubbles and pockets of brine. Understanding its growth processes is important for climate modellers and remote sensing specialists, since the composition and microstructural properties of the ice affect how it reflects or absorbs sunlight.
Sea ice growth models for predicting the ice distribution and extent are also valuable for shipping. An ice growth model can be combined with remote sensing measurements in an assimilation model as a means of generating more accurate ice charts.
Overview
Several formation mechanisms of sea ice have been identified. At its earliest stages, sea ice consists of elongated, randomly oriented crystals. This is called frazil, and mixed with water in the unconsolidated state is known as grease ice. If wave and wind conditions are calm these crystals will consolidate at the surface, and by selective pressure begin to grow preferentially in the downward direction, forming nilas. In more turbulent conditions, the frazil will consolidate by mechanical action to form pancake ice, which has a more random structure. Another common formation mechanism, especially in the Antarctic where precipitation over sea ice is high, is from snow deposition: on thin ice the snow will weigh down the ice enough to cause flooding. Subsequent freezing will form ice with a much more granular structure.
One of the more interesting processes to occur within consolidated ice packs is the change in saline content. As the ice freezes, most of the salt content gets rejected and forms highly saline brine inclusions between the crystals. With decreasing temperatures in the ice sheet, the size of the brine pockets decreases while the salt content goes up. Since ice is less dense than water, increasing pressure causes some of the brine to be ejected from both the top and bottom, producing the characteristic C-shaped salinity profile of first-year ice.
Brine will also drain through vertical channels, particularly in the melt season. Thus multi-year ice will tend to have both lower salinity and lower density than first-year ice. Sea-ice density is relatively stable during winter with values close to 910 kg/m3, but may decrease to as low as 720 kg/m3 during warming, mainly due to an increase in air volume. The air volume of sea ice can be as high as 15% in summer and 4% in late autumn.
The main physical processes of sea-ice desalination are gravity drainage and the flushing of surface meltwater and melt ponds. During winter, desalination is governed mostly by gravity drainage, while flushing becomes important during summer. Gravity drainage can be triggered both by atmospheric heat and by bottom melt from oceanic heat. A typical salinity of first-year ice by the end of the winter season is 4–6, while typical salinities of multiyear ice are 2–3. Snowmelt, surface flooding, and the presence of under-ice meltwater may affect sea-ice salinity. During the melt season, the only process of ice growth is related to the formation of false bottoms.
Vertical growth
The downward growth of consolidated ice under the assumption of zero heat flux from the ocean is determined by the rate of conductive heat flux, Q*, at the ice-water interface. The ocean heat fluxes substantially vary spatially and temporally and strongly contribute to the summer sea ice melt and the absence of sea ice in some parts of the Arctic Ocean. If we also assume a linear temperature profile within the ice and snow and no effect from ice thermal inertia, we can determine the heat flux Q* from the following equation:
$$Q^* = k_i\,\frac{T_w - T_{si}}{h_i} = k_s\,\frac{T_{si} - T_s}{h_s} = \frac{T_w - T_s}{h_i/k_i + h_s/k_s},$$
where Tsi is the snow-ice interface temperature, Ts is the air-snow interface temperature, and hi and hs are the ice and snow thicknesses. The water temperature Tw is assumed to be at or near freezing (Stefan problem). We can approximate the ice and snow thermal conductivities, ki and ks, as an average over the layers. The surface heat budget defines the snow surface temperature Ts and includes four atmospheric heat fluxes: the latent, sensible, longwave and shortwave radiation fluxes. For a description of the approximate parameterizations, see determining surface flux under sea ice thickness. The balance can be solved for the surface temperature using a numerical root-finding algorithm such as bisection; the atmospheric fluxes depend on the surface temperature, for example through the equilibrium vapor pressure e that enters the latent heat flux. Shortwave radiation may increase ocean surface temperatures and corresponding ocean heat fluxes, affecting the heat balance at the ice-ocean interface. This process is a part of the ice–albedo feedback.
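A minimal sketch of that root-finding step follows; the flux parameterizations (a bulk formula for the sensible heat flux, a fixed downwelling longwave flux, and neglected latent and shortwave fluxes) and all numerical values are simplifying assumptions made for the example.

```python
# Hedged sketch: find the snow surface temperature T_s by bisection from a
# simplified surface energy balance. All flux parameterizations and constants
# here are illustrative assumptions.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_balance(T_s,
                    T_w=271.35,          # ocean temperature at freezing, K
                    h_i=1.0, k_i=2.0,    # ice thickness (m) and conductivity (W m^-1 K^-1)
                    h_s=0.2, k_s=0.3,    # snow thickness and conductivity
                    T_air=253.0, wind=5.0,
                    Q_lw_down=200.0,     # downwelling longwave, W m^-2 (assumed)
                    emissivity=0.99,
                    rho_a=1.3, c_p=1004.0, C_H=1.5e-3):
    """Net flux into the surface (W m^-2); the root is the equilibrium T_s."""
    conductive = (T_w - T_s) / (h_i / k_i + h_s / k_s)      # linear profiles in ice and snow
    sensible = rho_a * c_p * C_H * wind * (T_air - T_s)     # bulk formula (illustrative)
    lw_net = emissivity * (Q_lw_down - SIGMA * T_s**4)      # net longwave
    return conductive + sensible + lw_net                   # latent and shortwave neglected

def bisect(f, lo, hi, tol=1e-3):
    f_lo = f(lo)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        f_mid = f(mid)
        if abs(hi - lo) < tol:
            break
        if f_lo * f_mid <= 0.0:      # sign change between lo and mid
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

T_s = bisect(surface_balance, 200.0, 273.15)
print(f"equilibrium snow surface temperature: {T_s - 273.15:.1f} degC")
```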
While Cox and Weeks assume thermal equilibrium, Tonboe uses a more complex thermodynamic model based on numerical solution of the heat equation. This would be appropriate when the ice is thick or the weather conditions are changing rapidly.
The rate of ice growth can be calculated from the heat flux by the following equation:
$$\frac{dh_i}{dt} = \frac{Q^*}{\rho\,L},$$
where L is the latent heat of fusion for water and $\rho$ is the density of ice (for pure ice). For sea ice, L is the effective latent heat of sea ice and $\rho$ is the density of sea ice. These two parameters depend on sea-ice salinity, temperature, and volumetric gas fraction, as does the sea-ice thermal conductivity. The growth rate of sea ice in turn determines the saline content of the newly frozen ice. Empirical equations for determining the initial brine entrapment in sea ice have been derived by Cox and Weeks and by Nakawo and Sinha and take the form
$$S = S_0\, f(g),$$
where S is the ice salinity, S0 is the salinity of the parent water, and f is an empirical function of the ice growth rate g (expressed in cm/s).
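As a worked illustration of the growth relation above, the following sketch integrates the Stefan condition over a winter season under strong simplifying assumptions (bare ice, constant surface and ocean temperatures, zero ocean heat flux, illustrative ice properties) and checks the result against the analytic Stefan solution $h(t)^2 = h_0^2 + 2 k_i (T_w - T_s)\,t/(\rho_i L)$.

```python
# Illustrative sketch (assumed constant forcing, no snow, zero ocean heat flux):
# integrate dh/dt = Q*/(rho_i * L) with Q* = k_i * (T_w - T_s)/h and compare with
# the analytic Stefan solution. All numbers are example values.
import math

k_i = 2.0          # ice thermal conductivity, W m^-1 K^-1
rho_i = 910.0      # sea-ice density, kg m^-3
L_f = 3.0e5        # effective latent heat of sea ice, J kg^-1 (illustrative)
T_w, T_s = -1.8, -20.0     # ocean and surface temperatures, degC, held fixed

dt = 3600.0                # one-hour time step, s
h = 0.05                   # initial ice thickness, m
for _ in range(90 * 24):                  # ninety days of growth
    Q = k_i * (T_w - T_s) / h             # conductive flux through the ice, W m^-2
    h += dt * Q / (rho_i * L_f)           # Stefan condition: conducted heat freezes water

t = 90 * 24 * 3600.0
h_exact = math.sqrt(0.05**2 + 2.0 * k_i * (T_w - T_s) * t / (rho_i * L_f))
print(f"thickness after 90 days: {h:.2f} m (analytic Stefan solution: {h_exact:.2f} m)")
```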
Salt content
Brine entrapped in sea ice will always be at or near freezing, since any departure will either cause some of the water in the brine to freeze, or melt some of the surrounding ice. Thus, brine salinity is variable and can be determined based strictly on temperature—see freezing point depression. There are empirical formulas relating sea ice temperature to brine salinity.
The relative brine volume, Vb, is defined as the fraction of brine relative to the total volume. It too is highly variable; however, its value is more difficult to determine since changes in temperature may cause some of the brine to be ejected or to move within the layers, particularly in new ice. Writing equations relating the salt content of the brine, the total salt content, the brine volume, the density of the brine and the density of the ice, and solving for the brine volume, produces the following relation:
$$V_b = \frac{\rho_i\, S}{\rho_i\, S + \rho_b\,(S_b - S)},$$
where S is the sea ice salinity, Sb is the brine salinity, $\rho_i$ is the density of the ice and $\rho_b$ is the brine density. Compare with this empirical formula from Frankenstein and Garner:
$$V_b = S\left(\frac{49.185}{|T|} + 0.532\right),$$
where T is ice temperature in degrees Celsius and S is ice salinity in parts per thousand.
In new ice, the amount of brine ejected as the ice cools can be determined by assuming that the total volume stays constant and subtracting the volume increase from the brine volume. Note that this is only applicable to newly formed ice: any warming will tend to generate air pockets as the brine volume will increase more slowly than the ice volume decreases, again due to the density difference. Cox and Weeks provide the following formula determining the ratio of total ice salinity between temperatures, T1 and T2 where T2 < T1:
where c=0.8 kg m−3 is a constant. As the ice goes through constant warming and cooling cycles it becomes progressively more porous, through ejection of the brine and drainage through the resulting channels.
The figure above shows a scatter plot of salinity versus ice thickness for ice cores taken from the Weddell Sea, Antarctica, with an exponential fit overlaid, where h is the ice thickness and a and b are the fit constants.
Horizontal motion
The horizontal motion of sea ice is quite difficult to model because ice is a non-Newtonian fluid.
Sea ice will deform primarily at fracture points which in turn will form at the points of greatest stress and lowest strength, or where the ratio between the two is a maximum. Ice thickness, salinity and porosity will all affect the strength of the ice. The motion of the ice is driven primarily by ocean currents, though to a lesser extent by wind. Note that stresses will not be in the direction of the winds or currents, but rather will be shifted by Coriolis effects—see, for instance, Ekman spiral.
See also
Sea ice
Sea ice thickness
Sea ice concentration
Sea ice emissivity modelling
References
Sea ice
Climatology | Sea ice growth processes | Physics | 1,767 |
48,625,183 | https://en.wikipedia.org/wiki/Immunoglobulin%20therapy | Immunoglobulin therapy is the use of a mixture of antibodies (normal human immunoglobulin) to treat several health conditions. These conditions include primary immunodeficiency, immune thrombocytopenic purpura, chronic inflammatory demyelinating polyneuropathy, Kawasaki disease, certain cases of HIV/AIDS and measles, Guillain–Barré syndrome, and certain other infections when a more specific immunoglobulin is not available. Depending on the formulation it can be given by injection into muscle, a vein, or under the skin. The effects last a few weeks.
Common side effects include pain at the site of injection, muscle pain, and allergic reactions. Other severe side effects include kidney problems, anaphylaxis, blood clots, and red blood cell breakdown. Use is not recommended in people with some types of IgA deficiency. Use appears to be relatively safe during pregnancy. Human immunoglobulin is made from human blood plasma. It contains antibodies against many viruses.
Human immunoglobulin therapy first occurred in the 1930s and a formulation for injection into a vein was approved for medical use in the United States in 1981. It is on the World Health Organization's List of Essential Medicines. Each formulation of the product is somewhat different. A number of specific immunoglobulin formulations are also available including for hepatitis B, rabies, tetanus, varicella infection, and Rh positive blood exposure.
Medical uses
Immunoglobulin therapy is used in a variety of conditions, many of which involve decreased or abolished antibody production capabilities, which range from a complete absence of multiple types of antibodies, to IgG subclass deficiencies (usually involving IgG2 or IgG3), to other disorders in which antibodies are within a normal quantitative range, but lacking in quality – unable to respond to antigens as they normally should – resulting in an increased rate or increased severity of infections. In these situations, immunoglobulin infusions confer passive resistance to infection on their recipients by increasing the quantity/quality of IgG they possess. Immunoglobulin therapy is also used for a number of other conditions, including in many autoimmune disorders such as dermatomyositis in an attempt to decrease the severity of symptoms. Immunoglobulin therapy is also used in some treatment protocols for secondary immunodeficiencies such as human immunodeficiency virus (HIV), some autoimmune disorders (such as immune thrombocytopenia and Kawasaki disease), some neurological diseases (multifocal motor neuropathy, stiff person syndrome, multiple sclerosis and myasthenia gravis) some acute infections and some complications of organ transplantation.
Immunoglobulin therapy is especially useful in some acute infection cases such as pediatric HIV infection and is also considered the standard of treatment for some autoimmune disorders such as Guillain–Barré syndrome. The high demand, coupled with the difficulty of producing immunoglobulin in large quantities, has resulted in increasing global shortages, usage limitations and rationing of immunoglobulin.
Australia
The Australian Red Cross Blood Service developed their own guidelines for the appropriate use of immunoglobulin therapy in 1997. Immunoglobulin is funded under the National Blood Supply and indications are classified as either an established or emerging therapeutic role or conditions for which immunoglobulin use is in exceptional circumstances only.
Subcutaneous immunoglobulin access programs have been developed to facilitate hospital based programs.
Human normal immunoglobulin (human immunoglobulin G) (Cutaquig) was approved for medical use in Australia in May 2021.
Canada
The National Advisory Committee on Blood and Blood Products of Canada (NAC) and Canadian Blood Services have also developed their own separate set of guidelines for the appropriate use of immunoglobulin therapy, which strongly support the use of immunoglobulin therapy in primary immunodeficiencies and some complications of HIV, while remaining silent on the issues of sepsis, multiple sclerosis, and chronic fatigue syndrome.
European Union
Brands include HyQvia (human normal immunoglobulin), Privigen (human normal immunoglobulin (IVIg)), Hizentra (human normal immunoglobulin (SCIg)), Kiovig (human normal immunoglobulin), and Flebogamma DIF (human normal immunoglobulin).
In the EU human normal immunoglobulin (SCIg) (Hizentra) is used in people whose blood does not contain enough antibodies (proteins that help the body to fight infections and other diseases), also known as immunoglobulins. It is used to treat the following conditions:
primary immunodeficiency syndromes (PID, when people are born with an inability to produce enough antibodies);
low levels of antibodies in the blood in people with chronic lymphocytic leukaemia (a cancer of a type of white blood cell) or myeloma (a cancer of another type of white blood cell) and who have frequent infections;
low levels of antibodies in the blood in people before or after allogeneic haematopoietic stem cell transplantation (a procedure where the patient's bone marrow is cleared of cells and replaced by stem cells from a donor);
chronic inflammatory demyelinating polyneuropathy (CIDP). In this rare disease, the immune system (the body's defence system) works abnormally and destroys the protective covering over the nerves.
It is indicated for replacement therapy in adults and children in primary immunodeficiency syndromes such as:
congenital agammaglobulinaemia and hypogammaglobulinaemia (low levels of antibodies);
common variable immunodeficiency;
severe combined immunodeficiency;
immunoglobulin-G-subclass deficiencies with recurrent infections;
replacement therapy in myeloma or chronic lymphocytic leukaemia with severe secondary hypogammaglobulinaemia and recurrent infections.
Flebogamma DIF is indicated for the replacement therapy in adults, children and adolescents (0–18 years) in:
primary immunodeficiency syndromes with impaired antibody production;
hypogammaglobulinaemia (low levels of antibodies) and recurrent bacterial infections in patients with chronic lymphocytic leukaemia (a cancer of a type of white blood cell), in whom prophylactic antibiotics have failed;
hypogammaglobulinaemia (low levels of antibodies) and recurrent bacterial infections in plateau-phase-multiple-myeloma (another cancer of a type of white blood cell) patients who failed to respond to pneumococcal immunisation;
hypogammaglobulinaemia (low levels of antibodies) in patients after allogenic haematopoietic-stem-cell transplantation (HSCT) (when the patient receives stem cells from a matched donor to help restore the bone marrow);
congenital acquired immune deficiency syndrome (AIDS) with recurrent bacterial infections.
and for the immunomodulation in adults, children and adolescents (0–18 years) in:
primary immune thrombocytopenia (ITP), in patients at high risk of bleeding or prior to surgery to correct the platelet count;
Guillain–Barré syndrome, which causes multiple inflammations of the nerves in the body;
Kawasaki disease, which causes multiple inflammation of several organs in the body.
United Kingdom
The United Kingdom's National Health Service recommends the routine use of immunoglobulin for a variety of conditions including primary immunodeficiencies and a number of other conditions, but recommends against the use of immunoglobulin in sepsis (unless a specific toxin has been identified), multiple sclerosis, neonatal sepsis, and pediatric HIV/AIDS.
United States
The American Academy of Allergy, Asthma, and Immunology supports the use of immunoglobulin for primary immunodeficiencies, while noting that such usage actually accounts for a minority of usage and acknowledging that immunoglobulin supplementation can be appropriately used for a number of other conditions, including neonatal sepsis (citing a sixfold decrease in mortality), considered in cases of HIV (including pediatric HIV), considered as a second line treatment in relapsing-remitting multiple sclerosis, but recommending against its use in such conditions as chronic fatigue syndrome, PANDAS (pediatric autoimmune neuropsychiatric disorders associated with streptococcal infection) until further evidence to support its use is found (though noting that it may be useful in PANDAS patients with an autoimmune component), cystic fibrosis, and a number of other conditions.
Brands include:
Alyglo (immune globulin intravenous human-stwk)
Asceniv (immune globulin intravenous, human – slra)
Bivigam (immune globulin intravenous – human 10% liquid)
Gamunex-C, (immune globulin injection human)
Hizentra (immune globulin subcutaneous human)
Hyqvia (immune globulin 10 percent – human with recombinant human hyaluronidase)
Octagam (immune globulin intravenous, human)
Panzyga (immune globulin intravenous, human – ifas)
Xembify (immune globulin subcutaneous, human – klhw)
Yimmugo (immune globulin intravenous, human-dira)
Side effects
Although immunoglobulin is frequently used for long periods of time and is generally considered safe, immunoglobulin therapy can have severe adverse effects, both localized and systemic. Subcutaneous administration of immunoglobulin is associated with a lower risk of both systemic and localized adverse effects when compared to intravenous administration (hyaluronidase-assisted subcutaneous administration is associated with a greater frequency of adverse effects than traditional subcutaneous administration but still a lower frequency of adverse effects when compared to intravenous administration). Patients who are receiving immunoglobulin and experience adverse events are sometimes recommended to take acetaminophen and diphenhydramine before their infusions to reduce the rate of adverse effects. Additional premedication, such as prednisone or another oral steroid, may be required in some instances (especially when first getting accustomed to a new dosage).
Local side effects of immunoglobulin infusions most frequently include an injection site reaction (reddening of the skin around the injection site), itching, rash, and hives. Less serious systemic side effects to immunoglobulin infusions include an increased heart rate, hyper or hypotension, an increased body temperature, diarrhea, nausea, abdominal pain, vomiting, arthralgia or myalgia, dizziness, headache, fatigue, fever, and pain.
Serious side effects of immunoglobulin infusions in infants, children, and adults include chest discomfort or pain, myocardial infarction, tachycardia, hyponatremia, hemolysis, hemolytic anemia, thrombosis, hepatitis, anaphylaxis, backache, aseptic meningitis, acute kidney injury, hypokalemic nephropathy, pulmonary embolism, and transfusion-related acute lung injury. There is also a small chance that, even given the precautions taken in preparing immunoglobulin preparations, an immunoglobulin infusion may pass a virus to its recipient. Some immunoglobulin solutions also contain isohemagglutinins, which in rare circumstances can cause hemolysis by triggering phagocytosis.
IVIG has long been known to induce a decrease in peripheral blood neutrophil count, or neutropenia in neonates, and in patients with Idiopathic Thrombocytopenic Purpura, resolving spontaneously and without complications within 48 h. Possible pathomechanisms include apoptosis/cell death due to antineutrophil antibodies with or without neutrophil migration into a storage pool outside the blood circulation.
Immunoglobulin therapy interferes with the ability of the body to produce a normal immune response to an attenuated live-virus vaccine (like MMR) for up to a year, can result in falsely elevated blood glucose levels, and can interfere with many of the IgG-based assays often used to diagnose a patient with a particular infection.
Routes of administration
1950s – intramuscular
After immunoglobulin therapy's discovery in 1952, weekly intramuscular injections of immunoglobulin (IMIg) were the norm until intravenous formulations (IVIg) began to be introduced in the 1980s. During the mid and late 1950s, one-time IMIg injections were a common public health response to outbreaks of polio before the widespread availability of vaccines. Intramuscular injections were extremely poorly tolerated due to their extreme pain and poor efficacy – rarely could intramuscular injections alone raise plasma immunoglobulin levels enough to make a clinically meaningful difference.
1980s – intravenous
Intravenous formulations began to be approved in the 1980s, which represented a significant improvement over intramuscular injections, as they allowed for a sufficient amount of immunoglobulin to be injected to reach clinical efficacy, although they still had a fairly high rate of adverse effects (though the addition of stabilizing agents reduced this further).
1990s – subcutaneous
The first description of a subcutaneous route of administration for immunoglobulin therapy dates back to 1980, but for many years subcutaneous administration was considered to be a secondary choice, only to be considered when peripheral venous access was no longer possible or tolerable.
During the late 1980s and early 1990s, it became obvious that for at least a subset of patients the systemic adverse events associated with intravenous therapy were still not easily tolerable, and more doctors began to experiment with subcutaneous immunoglobulin administration, culminating in an ad hoc clinical trial in Sweden of 3000 subcutaneous injections administered to 25 adults (most of whom had previously experienced systemic adverse effects with IMIg or IVIg), where no infusion in the ad hoc trial resulted in a severe systemic adverse reaction, and most subcutaneous injections were able to be administered in non-hospital settings, allowing for considerably more freedom for the people involved.
In the later 1990s, large-scale trials began in Europe to test the feasibility of subcutaneous immunoglobulin administration, although it was not until 2006 that the first subcutaneous-specific preparation of immunoglobulin was approved by a major regulatory agency (Vivaglobin, which was voluntarily discontinued in 2011). A number of other brand names of subcutaneous immunoglobulin have since been approved, although some small-scale studies have indicated that a particular cohort of patients with common variable immunodeficiency (CVID) may develop intolerable side effects with subcutaneous immunoglobulin (SCIg) that they do not with intravenous immunoglobulin (IVIg).
Although intravenous was the preferred route for immunoglobulin therapy for many years, in 2006, the US Food and Drug Administration (FDA) approved the first preparation of immunoglobulin that was designed exclusively for subcutaneous use.
Mechanism of action
The precise mechanism by which immunoglobulin therapy suppresses harmful inflammation is likely multifactorial. For example, it has been reported that immunoglobulin therapy can block Fas-mediated cell death.
Perhaps a more popular theory is that the immunosuppressive effects of immunoglobulin therapy are mediated through IgG's Fc glycosylation. By binding to receptors on antigen presenting cells, IVIG can increase the expression of the inhibitory Fc receptor, FcgRIIB, and shorten the half-life of auto-reactive antibodies. The ability of immunoglobulin therapy to suppress pathogenic immune responses by this mechanism is dependent on the presence of a sialylated glycan at position CH2-84.4 of IgG. Specifically, de-sialylated preparations of immunoglobulin lose their therapeutic activity and the anti-inflammatory effects of IVIG can be recapitulated by administration of recombinant sialylated IgG1 Fc.
Sialylated-Fc-dependent mechanism was not reproduced in other experimental models suggesting that this mechanism is functional under a particular disease or experimental settings. On the other hand, several other mechanisms of action and the actual primary targets of immunoglobulin therapy have been reported. In particular, F(ab')2-dependent action of immunoglobulin to inhibit activation of human dendritic cells, induction of autophagy, induction of COX-2-dependent PGE-2 in human dendritic cells leading to expansion of regulatory T cells, inhibition of pathogenic Th17 responses, and induction of human basophil activation and IL-4 induction via anti-IgE autoantibodies. Some believe that immunoglobulin therapy may work via a multi-step model where the injected immunoglobulin first forms a type of immune complex in the patient. Once these immune complexes are formed, they can interact with Fc receptors on dendritic cells, which then mediate anti-inflammatory effects helping to reduce the severity of the autoimmune disease or inflammatory state.
Other proposed mechanisms include the possibility that donor antibodies may bind directly with the abnormal host antibodies, stimulating their removal; the possibility that IgG stimulates the host's complement system, leading to enhanced removal of all antibodies, including the harmful ones; and the ability of immunoglobulin to block the antibody receptors on immune cells (macrophages), leading to decreased damage by these cells, or regulation of macrophage phagocytosis. Indeed, it is becoming more clear that immunoglobulin can bind to a number of membrane receptors on T cells, B cells, and monocytes that are pertinent to autoreactivity and induction of tolerance to self.
A report stated that immunoglobulin application to activated T cells leads to their decreased ability to engage microglia. As a result of immunoglobulin treatment of T cells, the findings showed reduced levels of tumor necrosis factor-alpha and interleukin-10 in T cell-microglia co-culture. The results add to the understanding of how immunoglobulin may affect inflammation of the central nervous system in autoimmune inflammatory diseases.
Hyperimmune globulin
Hyperimmune globulins are a class of immunoglobulins prepared in a similar way as for normal human immunoglobulin, except that the donor has high titers of antibody against a specific organism or antigen in their plasma. Some agents against which hyperimmune globulins are available include hepatitis B, rabies, tetanus toxin, varicella-zoster, etc. Administration of hyperimmune globulin provides "passive" immunity to the patient against an agent. This is in contrast to vaccines that provide "active" immunity. However, vaccines take much longer to achieve that purpose while hyperimmune globulin provides instant "passive" short-lived immunity. Hyperimmune globulin may have serious side effects, thus usage is taken very seriously.
Hyperimmune serum and plasma contain high amounts of an antibody, as a consequence of disease convalescence or of repeated immunization. Hyperimmune plasma is used in veterinary medicine, and hyperimmune plasma derivatives are used to treat snakebite. It has been hypothesized that hyperimmune serum may be an effective therapy for persons infected with the Ebola virus.
Society and culture
Economics
In the United Kingdom a dose cost the NHS between £11.20 and £1,200.00 depending on the type and amount. In the United States, antivenoms may cost thousands of dollars per dose because of markups that occur after manufacturing.
Brand names
As biologicals, various brand names of immunoglobulin products are not necessarily interchangeable, and care must be exercised when changing between them. Brand names of intravenous immunoglobulin formulations include Flebogamma, Gamunex, Privigen, Octagam, and Gammagard, while brand names of subcutaneous formulations include Cutaquig, Cuvitru, HyQvia, Hizentra, Gamunex-C, and Gammaked.
Supply issues
The United States is one of a handful of countries that allow plasma donors to be paid, meaning that the US supplies much of the plasma-derived medicinal products (including immunoglobulin) used across the world, including more than 50% of the European Union's supply. The Council of Europe has officially endorsed the idea of not paying for plasma donations for both ethical reasons and reasons of safety, but studies have found that relying on entirely voluntary plasma donation leads to shortages of immunoglobulin and forces member countries to import immunoglobulin from countries that do compensate donors.
In Australia, blood donation is voluntary and therefore to cope with increasing demand and to reduce the shortages of locally produced immunoglobulin, several programs have been undertaken including adopting plasma for first time blood donors, better processes for donation, plasma donor centres and encouraging current blood donors to consider plasma only donation.
Research
Experimental results from a small clinical trial in humans suggested protection against the progression of Alzheimer's disease, but no such benefit was found in a subsequent phase III clinical trial. In May 2020, the US approved a phase three clinical trial on the efficacy and safety of high-concentration intravenous immune globulin therapy in severe COVID-19. Efficacy of heterologous immunoglobulin derivatives has been demonstrated in clinical trials of antivenoms for scorpion sting and for snakebite.
References
Glycoproteins
Medical treatments
Therapeutic antibodies
Transfusion medicine
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Immunoglobulin therapy | Chemistry | 4,789 |
40,354,723 | https://en.wikipedia.org/wiki/Structural%20complexity%20%28applied%20mathematics%29 | Structural complexity is a science of applied mathematics that aims to relate fundamental physical or biological aspects of a complex system with the mathematical description of the morphological complexity that the system exhibits, by establishing rigorous relations between mathematical and physical properties of such system.
Structural complexity emerges from all systems that display morphological organization. Filamentary structures, for instance, are an example of coherent structures that emerge, interact and evolve in many physical and biological systems, such as mass distribution in the Universe, vortex filaments in turbulent flows, neural networks in our brain and genetic material (such as DNA) in a cell. In general information on the degree of morphological disorder present in the system tells us something important about fundamental physical or biological processes.
Structural complexity methods are based on applications of differential geometry and topology (and in particular knot theory) to interpret physical properties of dynamical systems, such as relations between kinetic energy and tangles of vortex filaments in a turbulent flow or magnetic energy and braiding of magnetic fields in the solar corona, including aspects of topological fluid dynamics.
Literature
References
Applied mathematics
Complex systems theory | Structural complexity (applied mathematics) | Mathematics | 219 |
19,609 | https://en.wikipedia.org/wiki/Memory%20leak | In computer science, a memory leak is a type of resource leak that occurs when a computer program incorrectly manages memory allocations in a way that memory which is no longer needed is not released. A memory leak may also happen when an object is stored in memory but cannot be accessed by the running code (i.e. unreachable memory). A memory leak has symptoms similar to a number of other problems and generally can only be diagnosed by a programmer with access to the program's source code.
A related concept is the "space leak", which is when a program consumes excessive memory but does eventually release it.
Because they can exhaust available system memory as an application runs, memory leaks are often the cause of or a contributing factor to software aging.
Consequences
A memory leak reduces the performance of the computer by reducing the amount of available memory. A memory leak can cause an increase in memory usage and run time, and can negatively impact the user experience. Eventually, in the worst case, too much of the available memory may become allocated and all or part of the system or device stops working correctly, the application fails, or the system slows down vastly due to thrashing.
Memory leaks may not be serious or even detectable by normal means. In modern operating systems, normal memory used by an application is released when the application terminates. This means that a memory leak in a program that only runs for a short time may not be noticed and is rarely serious.
Much more serious leaks include those:
where a program runs for a long time and consumes added memory over time, such as background tasks on servers, and especially in embedded systems which may be left running for many years
where new memory is allocated frequently for one-time tasks, such as when rendering the frames of a computer game or animated video
where a program can request memory, such as shared memory, that is not released, even when the program terminates
where memory is very limited, such as in an embedded system or portable device, or where the program requires a very large amount of memory to begin with, leaving little margin for leaks
where a leak occurs within the operating system or memory manager
when a system device driver causes a leak
running on an operating system that does not automatically release memory on program termination.
An example of memory leak
The following example, written in pseudocode, is intended to show how a memory leak can come about, and its effects, without needing any programming knowledge. The program in this case is part of some very simple software designed to control an elevator. This part of the program is run whenever anyone inside the elevator presses the button for a floor.
When a button is pressed:
Get some memory, which will be used to remember the floor number
Put the floor number into the memory
Are we already on the target floor?
If so, we have nothing to do: finished
Otherwise:
Wait until the lift is idle
Go to the required floor
Release the memory we used to remember the floor number
The memory leak would occur if the floor number requested is the same floor that the elevator is on; the condition for releasing the memory would be skipped. Each time this case occurs, more memory is leaked.
Cases like this would not usually have any immediate effects. People do not often press the button for the floor they are already on, and in any case, the elevator might have enough spare memory that this could happen hundreds or thousands of times. However, the elevator will eventually run out of memory. This could take months or years, so it might not be discovered despite thorough testing.
The consequences would be unpleasant; at the very least, the elevator would stop responding to requests to move to another floor (such as when an attempt is made to call the elevator or when someone is inside and presses the floor buttons). If other parts of the program need memory (a part assigned to open and close the door, for example), then no one would be able to enter, and if someone happens to be inside, they will become trapped (assuming the doors cannot be opened manually).
The memory leak lasts until the system is reset. For example: if the elevator's power were turned off or in a power outage, the program would stop running. When power was turned on again, the program would restart and all the memory would be available again, but the slow process of memory leak would restart together with the program, eventually prejudicing the correct running of the system.
The leak in the above example can be corrected by bringing the "release" operation outside of the conditional:
When a button is pressed:
Get some memory, which will be used to remember the floor number
Put the floor number into the memory
Are we already on the target floor?
If not:
Wait until the lift is idle
Go to the required floor
Release the memory we used to remember the floor number
Programming issues
Memory leaks are a common error in programming, especially when using languages that have no built-in automatic garbage collection, such as C and C++. Typically, a memory leak occurs because dynamically allocated memory has become unreachable. The prevalence of memory leak bugs has led to the development of a number of debugging tools to detect unreachable memory. BoundsChecker, Deleaker, Memory Validator, IBM Rational Purify, Valgrind, Parasoft Insure++, Dr. Memory and memwatch are some of the more popular memory debuggers for C and C++ programs. "Conservative" garbage collection capabilities can be added to any programming language that lacks them as a built-in feature, and libraries for doing this are available for C and C++ programs. A conservative collector finds and reclaims most, but not all, unreachable memory.
Although the memory manager can recover unreachable memory, it cannot free memory that is still reachable and therefore potentially still useful. Modern memory managers therefore provide techniques for programmers to semantically mark memory with varying levels of usefulness, which correspond to varying levels of reachability. The memory manager does not free an object that is strongly reachable. An object is strongly reachable if it is reachable either directly by a strong reference or indirectly by a chain of strong references. (A strong reference is a reference that, unlike a weak reference, prevents an object from being garbage collected.) To prevent such leaks, the developer is responsible for cleaning up references after use, typically by setting the reference to null once it is no longer needed and, if necessary, by deregistering any event listeners that maintain strong references to the object.
In general, automatic memory management is more robust and convenient for developers, as they do not need to implement freeing routines or worry about the sequence in which cleanup is performed or be concerned about whether or not an object is still referenced. It is easier for a programmer to know when a reference is no longer needed than to know when an object is no longer referenced. However, automatic memory management can impose a performance overhead, and it does not eliminate all of the programming errors that cause memory leaks.
RAII
Resource acquisition is initialization (RAII) is an approach to the problem commonly taken in C++, D, and Ada. It involves associating scoped objects with the acquired resources, and automatically releasing the resources once the objects are out of scope. Unlike garbage collection, RAII has the advantage of knowing when objects exist and when they do not. Compare the following C and C++ examples:
/* C version */
#include <stdlib.h>

void do_some_work(int* array); /* assumed to be defined elsewhere */

void f(int n)
{
    int* array = calloc(n, sizeof(int));
    do_some_work(array);
    free(array); /* deallocation must be requested explicitly */
}
// C++ version
#include <vector>

void do_some_work(std::vector<int>& array); // assumed to be defined elsewhere

void f(int n)
{
    std::vector<int> array(n);
    do_some_work(array);
} // the vector releases its memory automatically here
The C version, as implemented in the example, requires explicit deallocation; the array is dynamically allocated (from the heap in most C implementations), and continues to exist until explicitly freed.
The C++ version requires no explicit deallocation; it will always occur automatically as soon as the object array goes out of scope, including if an exception is thrown. This avoids some of the overhead of garbage collection schemes. And because object destructors can free resources other than memory, RAII helps to prevent the leaking of input and output resources accessed through a handle, which mark-and-sweep garbage collection does not handle gracefully. These include open files, open windows, user notifications, objects in a graphics drawing library, thread synchronisation primitives such as critical sections, network connections, and connections to the Windows Registry or another database.
However, using RAII correctly is not always easy and has its own pitfalls. For instance, if one is not careful, it is possible to create dangling pointers (or references) by returning data by reference, only to have that data be deleted when its containing object goes out of scope.
D uses a combination of RAII and garbage collection, employing automatic destruction when it is clear that an object cannot be accessed outside its original scope, and garbage collection otherwise.
Reference counting and cyclic references
More modern garbage collection schemes are often based on a notion of reachability – if you do not have a usable reference to the memory in question, it can be collected. Other garbage collection schemes can be based on reference counting, where an object is responsible for keeping track of how many references are pointing to it. If the number goes down to zero, the object is expected to release itself and allow its memory to be reclaimed. The flaw with this model is that it does not cope with cyclic references, and this is why nowadays most programmers are prepared to accept the burden of the more costly mark and sweep type of systems.
The following Visual Basic code illustrates the canonical reference-counting memory leak:
Dim A, B
Set A = CreateObject("Some.Thing")
Set B = CreateObject("Some.Thing")
' At this point, the two objects each have one reference,
Set A.member = B
Set B.member = A
' Now they each have two references.
Set A = Nothing ' You could still get out of it...
Set B = Nothing ' And now you've got a memory leak!
End
In practice, this trivial example would be spotted straight away and fixed. In most real examples, the cycle of references spans more than two objects, and is more difficult to detect.
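The same kind of cycle can be reproduced in C++ with reference-counted smart pointers. The following sketch (the Node type and main function are illustrative, not taken from any particular codebase) shows two objects managed by std::shared_ptr that point at each other and are therefore never destroyed; making one of the members a std::weak_ptr instead would break the cycle.
// Illustrative C++ reference-counting cycle (hypothetical example)
#include <memory>

struct Node {
    std::shared_ptr<Node> other; // strong reference keeps the peer alive
    // Declaring this member as std::weak_ptr<Node> would break the cycle.
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->other = b; // b is now owned by both 'b' and 'a->other'
    b->other = a; // a is now owned by both 'a' and 'b->other'
    // When 'a' and 'b' go out of scope, each Node still holds one strong
    // reference to the other, so neither reference count reaches zero and
    // both Node objects are leaked.
}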
A well-known example of this kind of leak, the lapsed listener problem, came to prominence with the rise of AJAX programming techniques in web browsers. JavaScript code which associated a DOM element with an event handler, and failed to remove the reference before exiting, would leak memory (AJAX web pages keep a given DOM alive for much longer than traditional web pages, so this leak was much more apparent).
Effects
If a program has a memory leak and its memory usage is steadily increasing, there will not usually be an immediate symptom. Every physical system has a finite amount of memory, and if the memory leak is not contained (for example, by restarting the leaking program) it will eventually cause problems.
Most modern consumer desktop operating systems have both main memory which is physically housed in RAM microchips, and secondary storage such as a hard drive. Memory allocation is dynamic – each process gets as much memory as it requests. Active pages are transferred into main memory for fast access; inactive pages are pushed out to secondary storage to make room, as needed. When a single process starts consuming a large amount of memory, it usually occupies more and more of main memory, pushing other programs out to secondary storage – usually significantly slowing performance of the system. Even if the leaking program is terminated, it may take some time for other programs to swap back into main memory, and for performance to return to normal.
When all the memory on a system is exhausted (whether there is virtual memory or only main memory, such as on an embedded system), any attempt to allocate more memory will fail. This usually causes the program attempting to allocate the memory to terminate itself, or to generate a segmentation fault. Some programs are designed to recover from this situation (possibly by falling back on pre-reserved memory). The first program to experience the out-of-memory condition may or may not be the program that has the memory leak.
Some multi-tasking operating systems have special mechanisms to deal with an out-of-memory condition, such as killing processes at random (which may affect "innocent" processes), or killing the largest process in memory (which presumably is the one causing the problem). Some operating systems have a per-process memory limit, to prevent any one program from hogging all of the memory on the system. The disadvantage to this arrangement is that the operating system sometimes must be re-configured to allow proper operation of programs that legitimately require large amounts of memory, such as those dealing with graphics, video, or scientific calculations.
If the memory leak is in the kernel, the operating system itself will likely fail. Computers without sophisticated memory management, such as embedded systems, may also completely fail from a persistent memory leak.
Publicly accessible systems such as web servers or routers are prone to denial-of-service attacks if an attacker discovers a sequence of operations which can trigger a leak. Such a sequence is known as an exploit.
A "sawtooth" pattern of memory utilization may be an indicator of a memory leak within an application, particularly if the vertical drops coincide with reboots or restarts of that application. Care should be taken though because garbage collection points could also cause such a pattern and would show a healthy usage of the heap.
Other memory consumers
Note that constantly increasing memory usage is not necessarily evidence of a memory leak. Some applications will store ever increasing amounts of information in memory (e.g. as a cache). If the cache can grow so large as to cause problems, this may be a programming or design error, but is not a memory leak as the information remains nominally in use. In other cases, programs may require an unreasonably large amount of memory because the programmer has assumed memory is always sufficient for a particular task; for example, a graphics file processor might start by reading the entire contents of an image file and storing it all into memory, something that is not viable where a very large image exceeds available memory.
To put it another way, a memory leak arises from a particular kind of programming error, and without access to the program code, someone seeing symptoms can only guess that there might be a memory leak. It would be better to use terms such as "constantly increasing memory use" where no such inside knowledge exists.
A simple example in C++
The following C++ program deliberately leaks memory by losing the pointer to the allocated memory.
int main() {
int* a = new int(5);
a = nullptr;
/* The only pointer to the allocated int has been overwritten, so the
allocation can no longer be freed, but the memory is still allocated
by the system.
If the program continues to create such allocations without freeing them,
it will consume memory continuously.
Therefore, a leak occurs. */
}
See also
Buffer overflow
Memory management
Memory debugger
Plumbr is a popular memory leak detection tool for applications running on Java Virtual Machine.
nmon (short for Nigel's Monitor) is a popular system monitor tool for the AIX and Linux operating systems.
References
External links
Visual Leak Detector for Visual Studio, open source
Valgrind, open source
Deleaker for Visual Studio, proprietary
Memory Validator for Visual Studio, Delphi, Fortran, Visual Basic, proprietary
Detecting a Memory Leak (Using MFC Debugging Support)
Article "Memory Leak Detection in Embedded Systems" by Cal Erickson
WonderLeak, a high performance Windows heap and handle allocation profiler, proprietary
Software bugs
Articles with example pseudocode
Software anomalies
Memory management | Memory leak | Technology | 3,272 |
32,738,118 | https://en.wikipedia.org/wiki/Membrane%20technology | Membrane technology encompasses the scientific processes used in the construction and application of membranes. Membranes are used to facilitate the transport or rejection of substances between mediums, and the mechanical separation of gas and liquid streams. In the simplest case, filtration is achieved when the pores of the membrane are smaller than the diameter of the undesired substance, such as a harmful microorganism. Membrane technology is commonly used in industries such as water treatment, chemical and metal processing, pharmaceuticals, biotechnology, the food industry, as well as the removal of environmental pollutants.
After membrane construction, the prepared membrane needs to be characterized to determine its parameters, such as pore size, functional groups, and material properties, which are difficult to determine in advance. In this process, instruments such as the Scanning Electron Microscope, the Transmission Electron Microscope, Fourier Transform Infrared Spectroscopy, X-ray Diffraction, and Liquid–Liquid Displacement Porosimetry are utilized.
Introduction
Membrane technology covers all engineering approaches for the transport of substances between two fractions with the help of semi-permeable membranes. In general, mechanical separation processes for separating gaseous or liquid streams use membrane technology. In recent years, various methods have been used to remove environmental pollutants, such as adsorption, oxidation, and membrane separation. Environmental pollution takes many forms, including air pollution and wastewater pollution. Because more than 70% of environmental pollution is attributed to industry, industries are expected to comply with regulations such as the Air Pollution Control and Prevention Act, 1981, and to apply prevention and treatment processes before releasing their waste into the environment.
Biomass-based membrane technology is one of the most promising technologies for pollutant removal because of its low cost, high efficiency, and lack of secondary pollutants.
Typically, polysulfone, polyvinylidene fluoride, and polypropylene are used in the membrane preparation process. These membrane materials are non-renewable and non-biodegradable, which contributes to environmental pollution. Researchers are therefore trying to synthesize eco-friendly membranes; synthesizing biodegradable membranes from naturally available materials, such as biomass, is one route to membranes that can be used to remove pollutants without creating new ones.
Membrane Overview
Membrane separation processes operate without heating and therefore use less energy than conventional thermal separation processes such as distillation, sublimation or crystallization. The separation process is purely physical and both fractions (permeate and retentate) can be obtained as useful products. Cold separation using membrane technology is widely used in the food technology, biotechnology and pharmaceutical industries. Furthermore, using membranes enables separations to take place that would be impossible using thermal separation methods. For example, it is impossible to separate the constituents of azeotropic liquids or solutes which form isomorphic crystals by distillation or recrystallization but such separations can be achieved using membrane technology. Depending on the type of membrane, the selective separation of certain individual substances or substance mixtures is possible. Important technical applications include the production of drinking water by reverse osmosis. In waste water treatment, membrane technology is becoming increasingly important. Ultra/microfiltration can be very effective in removing colloids and macromolecules from wastewater. This is needed if wastewater is discharged into sensitive waters especially those designated for contact water sports and recreation.
About half of the market is in medical applications, such as artificial kidneys that remove toxic substances by hemodialysis and artificial lungs that provide a bubble-free supply of oxygen to the blood.
The importance of membrane technology is growing in the field of environmental protection (Nano-Mem-Pro IPPC Database). Even in modern energy recovery techniques, membranes are increasingly used, for example in fuel cells and in osmotic power plants.
Mass transfer
Two basic models can be distinguished for mass transfer through the membrane:
the solution-diffusion model and
the hydrodynamic model.
In real membranes, these two transport mechanisms certainly occur side by side, especially during ultra-filtration.
Solution-diffusion model
In the solution-diffusion model, transport occurs only by diffusion. The component that needs to be transported must first be dissolved in the membrane. The general approach of the solution-diffusion model is to assume that the chemical potential of the feed and permeate fluids are in equilibrium with the adjacent membrane surfaces such that appropriate expressions for the chemical potential in the fluid and membrane phases can be equated at the solution-membrane interface. This principle is more important for dense membranes without natural pores such as those used for reverse osmosis and in fuel cells. During the filtration process a boundary layer forms on the membrane. This concentration gradient is created by molecules which cannot pass through the membrane. The effect is referred to as concentration polarization and, occurring during the filtration, leads to a reduced trans-membrane flow (flux). Concentration polarization is, in principle, reversible by cleaning the membrane which results in the initial flux being almost totally restored. Using a tangential flow to the membrane (cross-flow filtration) can also minimize concentration polarization.
Hydrodynamic model
Transport through pores – in the simplest case – is done convectively. This requires the size of the pores to be smaller than the diameter of the two separate components. Membranes that function according to this principle are used mainly in micro- and ultrafiltration. They are used to separate macromolecules from solutions, colloids from a dispersion or remove bacteria. During this process, the retained particles or molecules form a pulpy mass (filter cake) on the membrane, and this blockage of the membrane hampers the filtration. This blockage can be reduced by the use of the cross-flow method (cross-flow filtration). Here, the liquid to be filtered flows along the front of the membrane and is separated by the pressure difference between the front and back of the membrane into retentate (the flowing concentrate) on the front and permeate (filtrate) on the back. The tangential flow on the front creates a shear stress that cracks the filter cake and reduces the fouling.
Membrane operations
According to the driving force of the operation, it is possible to distinguish:
Pressure-driven operations
microfiltration
ultrafiltration
nanofiltration
reverse osmosis
gas separation
Concentration driven operations
dialysis
pervaporation
forward osmosis
artificial lung
Operations in an electric potential gradient
electrodialysis
membrane electrolysis e.g. chloralkaline process
electrode ionization
electro filtration
fuel cell
Operations in a temperature gradient
membrane distillation
Membrane shapes and flow geometries
There are two main flow configurations of membrane processes: cross-flow (or tangential flow) and dead-end filtrations. In cross-flow filtration, the feed flow is tangential to the surface of the membrane, retentate is removed from the same side further downstream, whereas the permeate flow is tracked on the other side. In dead-end filtration, the direction of the fluid flow is normal to the membrane surface. Both flow geometries offer some advantages and disadvantages. Generally, dead-end filtration is used for feasibility studies on a laboratory scale. The dead-end membranes are relatively easy to fabricate, which reduces the cost of the separation process. The dead-end membrane separation process is easy to implement and the process is usually cheaper than cross-flow membrane filtration. The dead-end filtration process is usually a batch-type process, where the filtering solution is loaded (or slowly fed) into the membrane device, which then allows passage of some particles subject to the driving force. The main disadvantage of dead-end filtration is the extensive membrane fouling and concentration polarization. The fouling is usually induced faster at higher driving forces. Membrane fouling and particle retention in a feed solution also build up concentration gradients and particle backflow (concentration polarization). The tangential flow devices are more cost- and labor-intensive, but they are less susceptible to fouling due to the sweeping effects and high shear rates of the passing flow. The most commonly used synthetic membrane devices (modules) are flat sheets/plates, spiral wounds, and hollow fibers.
Flat plates are usually constructed as circular thin flat membrane surfaces to be used in dead-end geometry modules. Spiral wounds are constructed from similar flat membranes but in the form of a "pocket" containing two membrane sheets separated by a highly porous support plate. Several such pockets are then wound around a tube to create a tangential flow geometry and to reduce membrane fouling. Hollow fiber modules consist of an assembly of self-supporting fibers with dense skin separation layers, and a more open matrix helping to withstand pressure gradients and maintain structural integrity. The hollow fiber modules can contain up to 10,000 fibers ranging from 200 to 2500 μm in diameter. The main advantage of hollow fiber modules is the very large surface area within an enclosed volume, increasing the efficiency of the separation process.
The Disc tube module uses a cross-flow geometry and consists of a pressure tube and hydraulic discs, which are held by a central tension rod, and membrane cushions that lie between two discs.
Membrane performance and governing equations
The selection of synthetic membranes for a targeted separation process is usually based on a few requirements. Membranes have to provide enough mass transfer area to process large amounts of feed stream. The selected membrane has to have high selectivity (rejection) properties for certain particles; it has to resist fouling and to have high mechanical stability. It also needs to be reproducible and to have low manufacturing costs. The main modeling equation for the dead-end filtration at constant pressure drop is represented by Darcy's law:
where Vp and Q are the volume of the permeate and its volumetric flow rate respectively (proportional to same characteristics of the feed flow), μ is dynamic viscosity of permeating fluid, A is membrane area, Rm and R are the respective resistances of membrane and growing deposit of the foulants. Rm can be interpreted as a membrane resistance to the solvent (water) permeation. This resistance is a membrane intrinsic property and is expected to be fairly constant and independent of the driving force, Δp. R is related to the type of membrane foulant, its concentration in the filtering solution, and the nature of foulant-membrane interactions. Darcy's law allows for calculation of the membrane area for a targeted separation at given conditions. The solute sieving coefficient is defined by the equation:
where Cf and Cp are the solute concentrations in feed and permeate respectively. Hydraulic permeability is defined as the inverse of resistance and is represented by the equation:
where J is the permeate flux which is the volumetric flow rate per unit of membrane area. The solute sieving coefficient and hydraulic permeability allow the quick assessment of the synthetic membrane performance.
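In conventional notation, and assuming the standard textbook forms that match the variable definitions above (the expressions below are inferred from those definitions rather than quoted), these relations can be written in LaTeX as:
Q = \frac{dV_p}{dt} = \frac{\Delta p \, A}{\mu (R_m + R)} \qquad \text{(Darcy's law for dead-end filtration)}
S = \frac{C_p}{C_f} \qquad \text{(solute sieving coefficient)}
L_p = \frac{J}{\Delta p} \qquad \text{(hydraulic permeability)}
Since J = Q/A, Darcy's law implies L_p = \frac{1}{\mu (R_m + R)}, and rearranging it gives the membrane area A needed to reach a target permeate flow Q at a given pressure drop, which is how the equation is typically used for sizing.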
Membrane separation processes
Membrane separation processes have a very important role in the separation industry. Nevertheless, they were not considered technically important until the mid-1970s. Membrane separation processes differ based on separation mechanisms and size of the separated particles. The widely used membrane processes include microfiltration, ultrafiltration, nanofiltration, reverse osmosis, electrolysis, dialysis, electrodialysis, gas separation, vapor permeation, pervaporation, membrane distillation, and membrane contactors. All processes except for pervaporation involve no phase change. All processes except electrodialysis are pressure driven. Microfiltration and ultrafiltration are widely used in food and beverage processing (beer microfiltration, apple juice ultrafiltration), biotechnological applications and the pharmaceutical industry (antibiotic production, protein purification), water purification and wastewater treatment, the microelectronics industry, and others. Nanofiltration and reverse osmosis membranes are mainly used for water purification purposes. Dense membranes are utilized for gas separations (removal of CO2 from natural gas, separating N2 from air, organic vapor removal from air or a nitrogen stream) and sometimes in membrane distillation. The latter process helps in the separation of azeotropic compositions, reducing the costs of distillation processes.
Pore size and selectivity
The pore sizes of technical membranes are specified differently depending on the manufacturer. One common distinction is by nominal pore size. It describes the maximum pore size distribution and gives only vague information about the retention capacity of a membrane. The exclusion limit or "cut-off" of the membrane is usually specified in the form of NMWC (nominal molecular weight cut-off, or MWCO, molecular weight cut off, with units in Dalton). It is defined as the minimum molecular weight of a globular molecule that is retained to 90% by the membrane. The cut-off, depending on the method, can be converted to so-called D90, which is then expressed in a metric unit. In practice the MWCO of the membrane should be at least 20% lower than the molecular weight of the molecule that is to be separated.
Using track-etched mica membranes, Beck and Schultz demonstrated that hindered diffusion of molecules in pores can be described by the Renkin equation.
Filter membranes are divided into four classes according to pore size:
The form and shape of the membrane pores are highly dependent on the manufacturing process and are often difficult to specify. Therefore, for characterization, test filtrations are carried out and the pore diameter refers to the diameter of the smallest particles which could not pass through the membrane.
The rejection can be determined in various ways and provides an indirect measurement of the pore size. One possibility is the filtration of macromolecules (often dextran, polyethylene glycol or albumin), another is measurement of the cut-off by gel permeation chromatography. These methods are used mainly to measure membranes for ultrafiltration applications. Another testing method is the filtration of particles with defined size and their measurement with a particle sizer or by laser induced breakdown spectroscopy (LIBS). A vivid characterization is to measure the rejection of dextran blue or other colored molecules. The retention of bacteriophage and bacteria, the so-called "bacteria challenge test", can also provide information about the pore size.
To determine the pore diameter, physical methods such as porosimetry (mercury, liquid–liquid porosimetry and the bubble point test) are also used, but a certain form of the pores (such as cylindrical or concatenated spherical holes) is assumed. Such methods are used for membranes whose pore geometry does not match the ideal, and the result is a "nominal" pore diameter, which characterizes the membrane but does not necessarily reflect its actual filtration behavior and selectivity.
The selectivity is highly dependent on the separation process, the composition of the membrane and its electrochemical properties in addition to the pore size. With high selectivity, isotopes can be enriched (uranium enrichment) in nuclear engineering or industrial gases like nitrogen can be recovered (gas separation). Ideally, even racemics can be enriched with a suitable membrane.
When choosing membranes selectivity has priority over a high permeability, as low flows can easily be offset by increasing the filter surface with a modular structure. In gas phase filtration different deposition mechanisms are operative, so that particles having sizes below the pore size of the membrane can be retained as well.
Membrane Classification
Membranes can be classified into two categories: synthetic membranes and natural (biological) membranes. Synthetic membranes are further classified into organic and inorganic membranes; organic membranes are subclassified as polymeric membranes, and inorganic membranes as ceramic membranes.
Synthesis of Biomass Membrane
The composite biomass membrane
Green membrane or bio-membrane synthesis aims to produce environmentally friendly membranes with good overall performance. Biomass is used in the form of activated carbon nanoparticles derived from cellulose-based sources such as coconut shells, hazelnut shells, walnut shells, and agricultural wastes such as corn stalks. These additives improve surface hydrophilicity, enlarge the pores, and lower the surface roughness, so the separation and anti-fouling performance of the membranes is improved at the same time.
Fabrication of pure biomass based membrane
A biomass-based membrane is a membrane made from organic materials such as plant fibers. These membranes are often used in water filtration and wastewater treatment applications. The fabrication of a pure biomass-based membrane is a complex process that involves a number of steps. The first step is to create a slurry of the organic materials. This slurry is then cast onto a substrate, such as a glass or metal plate. The cast is then dried, and the resulting membrane is then subjected to a number of treatments, such as chemical or heat treatments, to improve its properties. One of the challenges in the fabrication of biomass-based membranes is to create a membrane with the desired properties.
Equipment and instruments used in the process
List of instruments used in membrane synthesis procedures:
Centrifuge
Casting Machine
Plane casting glass
Magnetic Stirrer
Glass ware: Beakers, measuring cylinders, flask etc.
Oven
Mortar and pestle
Membrane Characterization
After casting and synthesis of a membrane, the prepared membrane needs to be characterized to determine its parameters, such as pore size, functional groups, wettability, and surface charge. Knowing the membrane properties makes it possible to target, remove, and treat a particular pollutant. The following instruments are used for characterization:
Scanning Electron Microscope (SEM)
Transmission electron Microscope (TEM)
Fourier Transform Infrared Spectroscopy (FTIR)
Atomic force microscopy
Contact angle meter
Zeta potential (streaming potential)
X-ray Diffraction (XRD)
Liquid–Liquid Displacement Porosimetry (LLDP)
Biomass Membrane Applications
Water treatment
Water treatment is any process that improves the quality of water to make it more acceptable for a specific end-use. Membranes can be used to remove particulates from water by either size exclusion or charge separation. In size exclusion, the pores in the membrane are sized such that only particles smaller than the pores can pass through; in the tightest membranes, essentially only water molecules can pass, leaving dissolved contaminants behind.
Gas separation
Membranes are used in gas separation to remove harmful gases such as carbon dioxide (CO2), nitrogen oxides (NOx), and sulphur oxides (SOx), thereby protecting the environment. Biomass-based membranes can be more effective for gas separation than commercial membranes.
Hemodialysis
Membrane application in hemodialysis is a process of using a semipermeable membrane to remove waste products and excess fluids from the blood.
See also
Particle deposition
Synthetic membrane
Notes
References
Osada, Y.; Nakagawa, T., Membrane Science and Technology, New York: Marcel Dekker, Inc., 1992.
Zeman, Leos J.; Zydney, Andrew L., Microfiltration and Ultrafiltration: Principles and Applications, New York: Marcel Dekker, Inc., 1996.
Mulder, M., Basic Principles of Membrane Technology, Kluwer Academic Publishers, Netherlands, 1996.
Jornitz, Maik W., Sterile Filtration, Springer, Germany, 2006.
Van Reis, R.; Zydney, A., "Bioprocess membrane technology", J. Mem. Sci. 297 (2007): 16–50.
Templin, T.; Johnston, D.; Singh, V.; Tumbleson, M. E.; Belyea, R. L.; Rausch, K. D., "Membrane separation of solids from corn processing streams", Biores. Tech. 97 (2006): 1536–1545.
Ripperger, S.; Schulz, G., "Microporous membranes in biotechnical applications", Bioprocess Eng. 1 (1986): 43–49.
Thomas Melin; Robert Rautenbach, Membranverfahren, Springer, Germany, 2007.
Munir Cheryan, Handbuch Ultrafiltration, Behr, 1990.
Eberhard Staude, Membranen und Membranprozesse, VCH, 1992.
Filtration | Membrane technology | Chemistry | 4,162 |
431,369 | https://en.wikipedia.org/wiki/Sunyaev%E2%80%93Zeldovich%20effect | The Sunyaev–Zeldovich effect (named after Rashid Sunyaev and Yakov B. Zeldovich and often abbreviated as the SZ effect) is the spectral distortion of the cosmic microwave background (CMB) through inverse Compton scattering by high-energy electrons in galaxy clusters, in which the low-energy CMB photons receive an average energy boost during collision with the high-energy cluster electrons. Observed distortions of the cosmic microwave background spectrum are used to detect the disturbance of density in the universe. Using the Sunyaev–Zeldovich effect, dense clusters of galaxies have been observed.
Overview
The Sunyaev–Zeldovich effect was predicted by Rashid Sunyaev and Yakov Zeldovich to describe anisotropies in the CMB. The effect is caused by the CMB interacting with high-energy electrons. These high-energy electrons cause inverse Compton scattering of CMB photons, which distorts the radiation spectrum of the CMB. The Sunyaev–Zeldovich effect is most apparent when observing galactic clusters. Analysis of CMB data at higher angular resolution (high values of the multipole moment) requires taking the Sunyaev–Zeldovich effect into account.
The Sunyaev–Zeldovich effect can be divided into different types:
Thermal effects, where the CMB photons interact with electrons that have high energies due to their temperature
Kinematic effects, a second-order effect where the CMB photons interact with electrons that have high energies due to their bulk motion (also called the Ostriker–Vishniac effect, after Jeremiah P. Ostriker and Ethan Vishniac)
Polarization
The Sunyaev–Zeldovich effect is of major astrophysical and cosmological interest. It can help determine the value of the Hubble constant, determine the location of new galaxy clusters, and in the study of cluster structure and mass. Since the Sunyaev–Zeldovich effect is a scattering effect, its magnitude is independent of redshift, which means that clusters at high redshift can be detected just as easily as those at low redshift.
Thermal effects
The distortion of the CMB resulting from a large number of high energy electrons is known as the thermal Sunyaev–Zeldovich effect. The thermal Sunyaev–Zeldovich effect is most commonly studied in galaxy clusters. By comparing the Sunyaev–Zeldovich effect and X-ray emission data, the thermal structure of the cluster can be studied, and if the temperature profile is known, Sunyaev–Zeldovich data can be used to determine the baryonic mass of the cluster along the line of sight. Comparing Sunyaev–Zeldovich and X-ray data can also be used to determine the Hubble constant using the angular diameter distance of the cluster. These thermal distortions can also be measured in superclusters and in gases in the local group, although they are less significant and more difficult to detect. In superclusters, the effect is not strong (< 8 μK), but with precise enough equipment, measuring this distortion can give a glimpse into large-scale structure formation. Gases in the local group may also cause anisotropies in the CMB due to the thermal Sunyaev–Zeldovich effect which must be taken into account when measuring the CMB for certain angular scales.
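As a quantitative sketch in standard notation (these are the conventional non-relativistic expressions, stated here for orientation rather than quoted from this article), the amplitude of the thermal effect along a line of sight is usually written in terms of the Compton y-parameter:
y = \frac{\sigma_T}{m_e c^2} \int n_e \, k_B T_e \, \mathrm{d}l
and the resulting fractional temperature distortion is \frac{\Delta T}{T_{\mathrm{CMB}}} = y \left( x \coth\tfrac{x}{2} - 4 \right) with x = \frac{h \nu}{k_B T_{\mathrm{CMB}}}, which is a decrement at low frequencies and changes sign near 217 GHz.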
Kinematic effects
The kinematic Sunyaev–Zeldovich effect is caused when a galaxy cluster is moving relative to the Hubble flow. The kinematic Sunyaev–Zeldovich effect gives a method for calculating the peculiar velocity:
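(The following is a sketch of the standard form of this relation in conventional notation; the exact prefactor and sign convention vary between sources.)
\frac{\Delta T}{T_{\mathrm{CMB}}} \simeq -\tau \, \frac{v_{\mathrm{pec}}}{c}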
where v_pec is the peculiar velocity and τ is the optical depth. In order to use this equation, the thermal and kinematic effects need to be separated. The effect is relatively weak for most galaxy clusters. Using gravitational lensing, the peculiar velocity can be used to determine other velocity components for a specific galaxy cluster. These kinematic effects can be used to determine the Hubble constant and the behavior of clusters.
Research
Current research is focused on modelling how the effect is generated by the intracluster plasma in galaxy clusters, and on using the effect to estimate the Hubble constant and to separate different components in the angular average statistics of fluctuations in the background. Hydrodynamic structure formation simulations are being studied to gain data on thermal and kinetic effects in the theory. Observations are difficult due to the small amplitude of the effect and to confusion with experimental error and other sources of CMB temperature fluctuations. To distinguish the SZ effect due to galaxy clusters from ordinary density perturbations, both the spectral dependence and the spatial dependence of fluctuations in the cosmic microwave background are used.
A factor which facilitates high redshift cluster detection is the angular scale versus redshift relation: it changes little between redshifts of 0.3 and 2, meaning that clusters between these redshifts have similar sizes on the sky. The use of surveys of clusters detected by their Sunyaev–Zeldovich effect for the determination of cosmological parameters has been demonstrated by Barbosa et al. (1996). This might help in understanding the dynamics of dark energy in surveys (South Pole Telescope, Atacama Cosmology Telescope, Planck).
Observations
In 1984, researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detected the Sunyaev–Zeldovich effect from clusters of galaxies. Ten years later, the Ryle Telescope was used to image a cluster of galaxies in the Sunyaev–Zeldovich effect for the first time.
The Cosmic Background Explorer (COBE) satellite, launched in 1989, observed the CMB and gave more accurate data for anisotropies in the CMB, allowing for more accurate analysis of the Sunyaev–Zeldovich effect.
Instruments built specifically to study the effect include the Sunyaev–Zeldovich camera on the Atacama Pathfinder Experiment, and the Sunyaev–Zeldovich Array, which both saw first light in 2005. In 2012, the Atacama Cosmology Telescope (ACT) performed the first statistical detection of the kinematic SZ effect. In 2012 the kinematic SZ effect was detected in an individual object for the first time in MACS J0717.5+3745.
As of 2015, the South Pole Telescope (SPT) had used the Sunyaev–Zeldovich effect to discover 415 galaxy clusters. The Sunyaev–Zeldovich effect has been and will continue to be an important tool in discovering hundreds of galaxy clusters.
Recent experiments such as the OLIMPO balloon-borne telescope try to collect data in specific frequency bands and specific regions of the sky in order to pinpoint the Sunyaev–Zeldovich effect and give a more accurate map of certain regions of the sky.
See also
Sachs–Wolfe effect
Cosmic microwave background spectral distortions
Kompaneyets equation
References
Further reading
Royal Astronomical Society, Corrupted echoes from the Big Bang? RAS Press Notice PN 04/01
External links
Corrupted echoes from the Big Bang? innovations-report.com.
Sunyaev–Zel'dovich effect on arxiv.org
Physical cosmological concepts
Radio astronomy | Sunyaev–Zeldovich effect | Physics,Astronomy | 1,481 |
34,347,641 | https://en.wikipedia.org/wiki/NanoAndMore | NanoAndMore is a distributor for AFM cantilevers from NanoWorld, Nanosensors, BudgetSensors, MikroMasch, Opus and nanotools, calibration standards and other products for nanotechnology.
History
NanoAndMore was founded in Germany in 2002 and started operating in the US in 2005. In 2005, NanoWorld Holding AG from Schaffhausen, Switzerland, acquired and integrated NanoAndMore into the NanoWorld group composed of Nanotechnology companies. The world market leader in AFM probes, NanoWorld has appointed NanoAndMore as the official distributor for NanoWorld and Nanosensors products.
NanoAndMore GmbH is operating from a location in Wetzlar, Germany - serving the European market. NanoAndMore USA is serving the North and South American markets. From 2005 to 2015, NanoAndMore USA was operating from Lady's Island (South Carolina), United States. In 2015, NanoAndMore USA moved to Watsonville, California, United States. NanoAndMore Japan was founded in 2019 and is serving Japan and operating from Misato in Saitama.
Products
AFM probes and accessories distributed by NanoAndMore are used for Atomic Force Microscopy in material science, physics, biology, life sciences and in semiconductor industry.
AFM probes sold by NanoAndMore fit all common Atomic Force Microscopes (AFM) like Asylum Research, Bruker, JPK, Molecular Imaging, Nanosurf, Veeco, WiTEK, NTMDT, Novascan, etc. As an important distributor of AFM probes it is often cited as a supplier in research papers and is therefore considered an important source of products for Atomic Force Microscopy.
References
Nanotechnology companies
Technology companies of Germany
2002 establishments in Germany
2005 establishments in the United States
2019 establishments in Japan | NanoAndMore | Materials_science | 372 |
12,439,068 | https://en.wikipedia.org/wiki/Medical%20test | A medical test is a medical procedure performed to detect, diagnose, or monitor diseases, disease processes, susceptibility, or to determine a course of treatment. Medical tests such as, physical and visual exams, diagnostic imaging, genetic testing, chemical and cellular analysis, relating to clinical chemistry and molecular diagnostics, are typically performed in a medical setting.
Types of tests
By purpose
Medical tests can be classified by their purposes, including diagnosis, screening or monitoring.
Diagnostic
A diagnostic test is a procedure performed to confirm or determine the presence of disease in an individual suspected of having a disease, usually following the report of symptoms, or based on other medical test results. This includes posthumous diagnosis. Examples of such tests are:
Using nuclear medicine to examine a patient suspected of having a lymphoma.
Measuring the blood sugar in a person suspected of having diabetes mellitus after periods of increased urination.
Taking a complete blood count of an individual experiencing a high fever to check for a bacterial infection.
Monitoring electrocardiogram readings on a patient with chest pain to diagnose or determine any heart irregularities.
Screening
Screening refers to a medical test or series of tests used to detect or predict the presence of disease in at-risk individuals within a defined group such as a population, family, or workforce. Screenings may be performed to monitor disease prevalence, manage epidemiology, aid in prevention, or strictly for statistical purposes.
Examples of screenings include measuring the level of TSH in the blood of a newborn infant as part of newborn screening for congenital hypothyroidism, checking for lung cancer in non-smoking individuals who are exposed to second-hand smoke in an unregulated working environment, and Pap smear screening for prevention or early detection of cervical cancer.
Monitoring
Some medical tests are used to monitor the progress of, or response to medical treatment.
By method
Most test methods can be classified into one of the following broad groups:
Patient observations, which may be photographed or recorded
Questions asked when taking an individual's medical history
Tests performed in a physical examination
Radiologic tests, in which, for example, x-rays are used to form an image of a body target. These tests often involve administration of a contrast agent.
In vivo diagnostics which test in the body, such as:
Manometry
Administering a diagnostic agent and measuring the body's response, as in the gluten challenge test, contraction stress test, bronchial challenge test, oral food challenge, or the ACTH stimulation test.
In vitro diagnostics, which test a sample of tissue or bodily fluids, such as:
Liquid biopsy
Microbiological culturing, which determines the presence or absence of microbes in a sample from the body, and usually targeted at detecting pathogenic bacteria.
Genetic testing
Blood sugar level
Liver function testing
Calcium testing
Testing for electrolytes in the blood, such as sodium, potassium, creatinine, and urea
By sample location
In vitro tests can be classified according to the location of the sample being tested, including:
Blood tests
Urine tests, including naked eye exam of the urine
Stool tests, including naked eye exam of the feces
Sputum (phlegm), including naked eye exam of the sputum
Accuracy and precision
Accuracy of a laboratory test is its correspondence with the true value. Accuracy is maximized by calibrating laboratory equipment with reference material and by participating in external quality control programs.
Precision of a test is its reproducibility when it is repeated on the same sample. An imprecise test yields widely varying results on repeated measurement. Precision is monitored in laboratory by using control material.
Detection and quantification
Tests performed in a physical examination are usually aimed at detecting a symptom or sign, and in these cases, a test that detects a symptom or sign is designated a positive test, and a test that indicates absence of a symptom or sign is designated a negative test, as further detailed in a separate section below.
A quantification of a target substance, a cell type or another specific entity is a common output of, for example, most blood tests. This is not only answering if a target entity is present or absent, but also how much is present. In blood tests, the quantification is relatively well specified, such as given in mass concentration, while most other tests may be quantifications as well although less specified, such as a sign of being "very pale" rather than "slightly pale". Similarly, radiologic images are technically quantifications of radiologic opacity of tissues.
Especially in the taking of a medical history, there is no clear limit between a detecting or quantifying test versus rather descriptive information of an individual. For example, questions regarding the occupation or social life of an individual may be regarded as tests that can be regarded as positive or negative for the presence of various risk factors, or they may be regarded as "merely" descriptive, although the latter may be at least as clinically important.
Positive or negative
The result of a test aimed at detection of an entity may be positive or negative: this has nothing to do with a bad prognosis, but rather means that the test worked or not, and a certain parameter that was evaluated was present or not. For example, a negative screening test for breast cancer means that no sign of breast cancer could be found (which is in fact very positive for the patient).
The classification of tests into either positive or negative gives a binary classification, with resultant ability to perform bayesian probability and performance metrics of tests, including calculations of sensitivity and specificity.
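As a brief worked definition (these are the standard formulas, not specific to any one test), if binary test results are tallied against true condition status into true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP), then:
\text{sensitivity} = \frac{TP}{TP + FN} \qquad \text{specificity} = \frac{TN}{TN + FP}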
Continuous values
Tests whose results are of continuous values, such as most blood values, can be interpreted as they are, or they can be converted to binary ones by defining a cutoff value, with test results being designated as positive or negative depending on whether the resultant value is higher or lower than the cutoff.
Interpretation
In the finding of a pathognomonic sign or symptom it is almost certain that the target condition is present, and in the absence of finding a sine qua non sign or symptom it is almost certain that the target condition is absent. In reality, however, the subjective probability of the presence of a condition is never exactly 100% or 0%, so tests are rather aimed at estimating a post-test probability of a condition or other entity.
Most diagnostic tests basically use a reference group to establish performance data such as predictive values, likelihood ratios and relative risks, which are then used to interpret the post-test probability for an individual.
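For example, using the standard odds form of Bayes' theorem (a generic illustration, not a formula taken from this article), a likelihood ratio converts a pre-test probability into a post-test probability:
\text{post-test odds} = \text{pre-test odds} \times LR, \qquad \text{odds} = \frac{p}{1 - p}, \qquad LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}
With these definitions, a test with a positive likelihood ratio of 10 applied at a pre-test probability of 20% (odds 0.25) gives post-test odds of 2.5, i.e. a post-test probability of about 71%.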
In monitoring tests of an individual, the test results from previous tests on that individual may be used as a reference to interpret subsequent tests.
Risks
Some medical testing procedures have associated health risks, and even require general anesthesia, such as the mediastinoscopy. Other tests, such as the blood test or pap smear have little to no direct risks. Medical tests may also have indirect risks, such as the stress of testing, and riskier tests may be required as follow-up for a (potentially) false positive test result. Consult the health care provider (including physicians, physician assistants, and nurse practitioners) prescribing any test for further information.
Indications
Each test has its own indications and contraindications. An indication is a valid medical reason to perform the test. A contraindication is a valid medical reason not to perform the test. For example, a basic cholesterol test may be indicated (medically appropriate) for a middle-aged person. However, if the same test was performed on that person very recently, then the existence of the previous test is a contraindication for the test (a medically valid reason to not perform it).
Information bias is the cognitive bias that causes healthcare providers to order tests that produce information that they do not realistically expect or intend to use for the purpose of making a medical decision. Medical tests are indicated when the information they produce will be used. For example, a screening mammogram is not indicated (not medically appropriate) for a woman who is dying, because even if breast cancer is found, she will die before any cancer treatment could begin.
In a simplified fashion, how much a test is indicated for an individual depends largely on its net benefit for that individual. Tests are chosen when the expected benefit is greater than the expected harm. The net benefit may roughly be estimated by:
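(The following is a sketch of the estimate, assuming the multiplicative form implied by the variable definitions listed below; the exact expression is an inference rather than a quotation.)
b_n = \Lambda p \times r_i \times (b_i - h_i) - h_t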
where:
bn is the net benefit of performing a test
Λp is the absolute difference between pre- and posttest probability of conditions (such as diseases) that the test is expected to achieve. A major factor for such an absolute difference is the power of the test itself, such as can be described in terms of, for example, sensitivity and specificity or likelihood ratio. Another factor is the pre-test probability, with a lower pre-test probability resulting in a lower absolute difference, with the consequence that even very powerful tests achieve a low absolute difference for very unlikely conditions in an individual (such as rare diseases in the absence of any other indicating sign), but on the other hand, that even tests with low power can make a great difference for highly suspected conditions. The probabilities in this sense may also need to be considered in context of conditions that are not primary targets of the test, such as profile-relative probabilities in a differential diagnostic procedure.
ri is the rate of how much probability differences are expected to result in changes in interventions (such as a change from "no treatment" to "administration of low-dose medical treatment"). For example, if the only expected effect of a medical test is to make one disease more likely compared to another, but the two diseases have the same treatment (or neither can be treated), then, this factor is very low and the test is probably without value for the individual in this aspect.
bi is the benefit of changes in interventions for the individual
hi is the harm of changes in interventions for the individual, such as side effects of medical treatment
ht is the harm caused by the test itself.
Some additional factors that influence a decision whether a medical test should be performed or not included: cost of the test, availability of additional tests, potential interference with subsequent test (such as an abdominal palpation potentially inducing intestinal activity whose sounds interfere with a subsequent abdominal auscultation), time taken for the test or other practical or administrative aspects. The possible benefits of a diagnostic test may also be weighed against the costs of unnecessary tests and resulting unnecessary follow-up and possibly even unnecessary treatment of incidental findings.
In some cases, tests being performed are expected to have no benefit for the individual being tested. Instead, the results may be useful for the establishment of statistics in order to improve health care for other individuals. Patients may give informed consent to undergo medical tests that will benefit other people.
Patient expectations
In addition to considerations of the nature of medical testing noted above, other realities can lead to misconceptions and unjustified expectations among patients. These include: Different labs have different normal reference ranges; slightly different values will result from repeating a test; "normal" is defined by a spectrum along a bell curve resulting from the testing of a population, not by "rational, science-based, physiological principles"; sometimes tests are used in the hope of turning something up to give the doctor a clue as to the nature of a given condition; and imaging tests are subject to fallible human interpretation and can show "incidentalomas", most of which "are benign, will never cause symptoms, and do not require further evaluation," although clinicians are developing guidelines for deciding when to pursue diagnoses of incidentalomas.
Standard for the reporting and assessment
QUADAS-2, a revised version of the QUADAS (Quality Assessment of Diagnostic Accuracy Studies) tool, is available.
List of medical tests
See also
Blood culture
Chemical test
Gold standard (test)
Medical sign
Molecular diagnostics
Nailbed assessment
Test panel
Point-of-care testing
EU IVD Regulation
References
Further reading
Pathology | Medical test | Biology | 2,439 |
473,238 | https://en.wikipedia.org/wiki/Runtime%20library | In computer programming, a runtime library is a set of low-level routines used by a compiler to invoke some of the behaviors of a runtime environment, by inserting calls to the runtime library into the compiled executable binary. The runtime environment implements the execution model, built-in functions, and other fundamental behaviors of a programming language. During execution (run time) of that computer program, execution of those calls to the runtime library causes communication between the executable binary and the runtime environment. A runtime library often includes built-in functions for memory management or exception handling. Therefore, a runtime library is always specific to the platform and compiler.
The runtime library may implement a portion of the runtime environment's behavior, but the routines it exposes are typically only thin wrappers that package information and send it to the runtime environment or operating system. However, sometimes the term runtime library is meant to include the code of the runtime environment itself, even though much of that code cannot be directly reached via a library call.
For example, some language features that can be performed only (or are more efficient or accurate) at runtime are implemented in the runtime environment and may be invoked via the runtime library API, e.g. some logic errors, array bounds checking, dynamic type checking, exception handling, and possibly debugging functionality. For this reason, some programming bugs are not discovered until the program is tested in a "live" environment with real data, despite sophisticated compile-time checking and testing performed during development.
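For a language with a large runtime, such as Python, this division of labour is easy to observe: the checks below are performed not by a compiler but by the runtime, at the moment the offending statement executes (the snippet is only an illustration of run-time checking in general and is not tied to any particular runtime library).

```python
# Bounds and type checks carried out by the language runtime, not at compile time.
values = [1, 2, 3]

try:
    print(values[10])      # array bounds checking happens here, at run time
except IndexError as err:
    print("runtime bounds check:", err)

try:
    print("2" + 2)         # dynamic type checking happens here, at run time
except TypeError as err:
    print("runtime type check:", err)
```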
As another example, a runtime library may contain code of built-in low-level operations too complicated for their inlining during compilation, such as implementations of arithmetic operations not directly supported by the targeted CPU, or various miscellaneous compiler-specific operations and directives.
The concept of a runtime library should not be confused with an ordinary program library like that created by an application programmer or delivered by a third party, nor with a dynamic library, meaning a program library linked at run time. For example, the C programming language requires only a minimal runtime library (commonly called crt0), but defines a large standard library (called C standard library) that has to be provided by each implementation.
Examples
glibc
msvcrt
See also
Static build
Runtime environment
References
External links
What is the C runtime library? (StackExchange)
Computer libraries
Run-time systems | Runtime library | Technology | 511 |
2,686,262 | https://en.wikipedia.org/wiki/Piezophile | A piezophile (from Greek "piezo-" for pressure and "-phile" for loving) is an organism with optimal growth under high hydrostatic pressure, i.e., an organism that has its maximum rate of growth at a hydrostatic pressure equal to or above 10 MPa, when tested over all permissible temperatures. Originally, the term barophile was used for these organisms, but since the prefix "baro-" stands for weight, the term piezophile was given preference. Like all definitions of extremophiles, the definition of piezophiles is anthropocentric: humans consider moderate values for hydrostatic pressure to be those around 1 atm (= 0.1 MPa = 14.7 psi), whereas these "extreme" pressures are the normal living conditions for those organisms. Hyperpiezophiles are organisms that have their maximum growth rate above 50 MPa (= 493 atm = 7,252 psi).
Although high hydrostatic pressure has deleterious effects on organisms growing at atmospheric pressure, piezophiles, which are found only in high-pressure habitats such as the deep sea, in fact need high pressure for their optimum growth. Often their growth can continue at much higher pressures (such as 100 MPa) than can that of organisms which normally grow at low pressures.
The first obligate piezophile found was a psychrophilic bacterium, Colwellia marinimaniae strain MT-41, isolated from a decaying amphipod (Hirondellea gigas) from the bottom of the Mariana Trench. The first thermophilic piezophilic archaeon, Pyrococcus yayanosii strain CH1, was isolated from the Ashadze site, a deep-sea hydrothermal vent. Strain MT-41 has an optimal growth pressure of 70 MPa at 2 °C, and strain CH1 has an optimal growth pressure of 52 MPa at 98 °C. They are unable to grow at pressures lower than or equal to 20 MPa, and both can grow at pressures above 100 MPa. The current record for the highest hydrostatic pressure at which growth has been observed is 140 MPa, shown by Colwellia marinimaniae MTCD1. The term "obligate piezophile" refers to organisms that are unable to grow under lower hydrostatic pressures, such as 0.1 MPa. In contrast, piezotolerant organisms are those that have their maximum rate of growth at a hydrostatic pressure under 10 MPa, but that nevertheless are able to grow at lower rates under higher hydrostatic pressures.
Most of the Earth's biosphere (in terms of volume) is subject to high hydrostatic pressure, and the piezosphere comprises the deep sea (at the depth of 1,000 m and greater) plus the deep subsurface (which can extend up to 5,000 m beneath the seafloor or the continental surface). The deep sea has a mean temperature around 1 to 3 °C, and it is dominated by psychropiezophiles. In contrast, deep subsurface and hydrothermal vents in the seafloor are dominated by thermopiezophiles that prosper in temperatures above 45 °C (113 °F).
Although the study of nutrient acquisition and metabolism within the piezosphere is still in its infancy, it is understood that most of the organic matter present consists of refractory complex polymers from the euphotic zone. Both heterotrophic metabolism and autotrophic fixation are present within the piezosphere, and additional research suggests significant metabolism of iron-bearing minerals and carbon monoxide. Additional research is required to fully understand and characterize piezosphere metabolism.
Piezophilic adaptations
High pressure has several effects on biological systems. The application of pressure shifts equilibria towards the states occupying the smaller volume, changes intermolecular distances, and affects conformations. This also affects the functionality of cells. Piezophiles employ several mechanisms to adapt to these high hydrostatic pressures: they regulate gene expression according to pressure and also adapt their biomolecules to differences in pressure.
Nucleic acids
High pressure stabilizes hydrogen bonds and stacking interactions of the DNA. Thus it favours the double stranded duplex structure of the DNA. However, to carry out several processes like DNA replication, transcription and translation, the transition to single-strand structure is necessary, which becomes difficult as high pressure increases the melting temperature, Tm. Thus, these processes may face difficulties.
Cell membranes
When pressure increases, the fluidity of the cell membrane decreases, because volume restrictions change the conformation and packing of its lipids. This decreases the permeability of the cell membrane to water and other molecules. In response to fluctuations in their environment, piezophiles change their membrane structures. Piezophilic bacteria do so by varying their acyl chain length, accumulating unsaturated fatty acids, and accumulating specific polar headgroups and branched fatty acids. Piezophilic archaea synthesize archaeol- and caldarchaeol-based polar lipids and bipolar tetraether lipids, incorporate cyclopentane rings, and increase unsaturation.
Proteins
The macromolecules most affected by pressure are proteins. Just like lipids, they change their conformation and packing to accommodate changes in pressure. This affects their multimeric conformation, their stability and the structure of their catalytic sites, which changes their functionality. In pressure-intolerant species, proteins tend to compact and unfold under high pressures as overall volume is reduced. Piezophilic proteins, however, tend to have fewer and smaller void spaces overall, which mitigates pressure-induced compaction and unfolding. There are also changes in the various interactions between amino acids. In general, piezophilic proteins are very resistant to pressure.
Enzymes
Due to the functional nature of enzymes, piezophiles must maintain their activity to survive. High pressures tend to favor enzymes with higher flexibility at the cost of lower stability. Additionally, piezophilic enzymes often have high absolute (distinct from temperature or pressure) and relative catalytic activity. This allows the enzymes to maintain sufficient activity even with decreases due to temperature or pressure effects. Furthermore, some piezophilic enzymes have increasing catalytic activity with increasing pressures, though this is not a generalization for all piezophilic enzymes.
Overall effect on cells
As a result of high pressure, several functions may be lost in organisms that are pressure-intolerant. Effects can include loss of flagellar motility, enzyme function, and thus metabolism. High pressure can also lead to cell death due to modifications in the cellular structure, and can cause an imbalance in oxidation and reduction reactions, generating relatively high concentrations of reactive oxygen species (ROS). An increased number of anti-oxidation genes and proteins is found in piezophiles to combat ROS, which often cause cellular damage.
See also
Extremophile
Thermophile
Psychrophile
Archaea
Bacteria
Cell membrane
References
Aquatic ecology
Bacteria | Piezophile | Biology | 1,453 |
4,745,238 | https://en.wikipedia.org/wiki/B-Netz | B-Netz was an analog, commercial mobile radio telephone network that was operated by the Deutsche Bundespost in Germany (at first only West Germany) from 1972 until 1994. The system was also implemented in neighboring countries Austria, The Netherlands and Luxembourg. The B refers to the fact that it was the country's second public mobile telephone network, following the A-Netz.
As opposed to its predecessor, it featured direct-dialing (so that human operators were not required to connect calls). The frequency plan originally included only 38 channels (with one call possible per frequency channel), but it was upgraded to incorporate the A-Netz frequencies when that network was retired in 1980. The upgraded network had 78 channels and is sometimes referred to as the B2-Netz.
A major limitation of the system was that, in order to reach a subscriber, the caller had to know the subscriber's location, since the handset assumed the local area code of the base station serving it. Handoff was not possible, so calls were dropped when cells were switched. Roaming was possible between the implementing countries.
At its height in 1986, the network had 158 base stations and about 27,000 subscribers in Germany and 1,770 in Austria. At the end of 1988, there were 1,078 participants in West Berlin alone. The network was vastly oversubscribed and finding an available channel could prove difficult.
The connection between base station and handset was unencrypted, so eavesdropping was easy and common. In rare cases, additional devices were added by both participants to encrypt conversations (such as discussions of important politicians).
The B-Netz would eventually be superseded by the technically superior C-Netz, which was put into operation on May 1, 1985.
Technical details
Multiplexing: Frequency division multiplexing
Bandwidth per channel: 14 kHz
Channel spacing: 20 kHz
Duplexing: Frequency-division duplexing
Duplex distance: 4.6 MHz
Transmitting power
20 watts for stationary stations
10 watts for mobile stations
Frequency ranges
See also
C-Netz
References
Mobile radio telephone systems | B-Netz | Technology | 429 |
34,467,061 | https://en.wikipedia.org/wiki/Interruptible%20foldback | Interruptible foldback (IFB), also known as interrupted foldback, interruptible feedback, or interrupt for broadcast, is a monitoring and cueing system used in television, filmmaking, video production, and radio broadcast for one-way communication from the director or assistant director to on-air talent or a remote location. The names are backronyms for the Telex IFB-XXX model line. Less common names for the system include program cue interrupt (PCI) and switched talkback. IFB is often facilitated using an earpiece that on-air persons wear to get cues, feedback or direction from their control rooms. The earpiece itself may also be referred to as an IFB. Sometimes IFB is accomplished by the director talking to off-camera personnel who visually cue the on-camera talent.
The IFB is a special intercom circuit that consists of a mix-minus program feed sent to an earpiece worn by talent via a wire, telephone, or radio receiver (audio that is being "fed back" to talent) that can be interrupted and replaced by a television producer's or director's intercom microphone. On a television news program for example, a producer can talk to the news anchors, to tell them when they are live on the air and when to begin reading off the script on the teleprompter or cue cards. In live television, some news anchors are seen listening to IFBs in order to report breaking news and announcements.
In electronic news gathering (ENG), the IFB can be sent through a telephone hybrid, or some other return link in a broadcast auxiliary service. The physics and design of electronics cause time delays in signals as they travel through wire, fiber optics, or space, and when they are converted back and forth between physical sound, electronic signals and radio waves, and from analogue to digital. The latter process and other audio processing can introduce unacceptable delays or echoes into the sound. To achieve the mix-minus program for the IFB, certain audio elements that originate remotely from the mix point are eliminated from the mix that is sent back to the IFB at the remote site, to avoid those undesirable effects.
Wired or wireless in-ear monitors (IEMs) may be used to carry the IFB audio to the on-air talent.
References
Broadcast engineering
Television terminology | Interruptible foldback | Engineering | 479 |
44,044,088 | https://en.wikipedia.org/wiki/Paradox%20of%20radiation%20of%20charged%20particles%20in%20a%20gravitational%20field | The paradox of a charge in a gravitational field is an apparent physical paradox in the context of general relativity. A charged particle at rest in a gravitational field, such as on the surface of the Earth, must be supported by a force to prevent it from falling. According to the equivalence principle, it should be indistinguishable from a particle in flat spacetime being accelerated by a force. Maxwell's equations say that an accelerated charge should radiate electromagnetic waves, yet such radiation is not observed for stationary particles in gravitational fields.
One of the first to study this problem was Max Born in his 1909 paper about the consequences of a charge in a uniformly accelerated frame. Earlier concerns and possible solutions were raised by Wolfgang Pauli (1918), Max von Laue (1919), and others, but the most recognized work on the subject is the resolution of Thomas Fulton and Fritz Rohrlich in 1960.
Background
It is a standard result from Maxwell's equations of classical electrodynamics that an accelerated charge radiates. That is, it produces an electric field that falls off as 1/r in addition to its rest-frame Coulomb field. This radiation electric field has an accompanying magnetic field, and the whole oscillating electromagnetic radiation field propagates independently of the accelerated charge, carrying away momentum and energy. The energy in the radiation is provided by the work that accelerates the charge.
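The rate at which an accelerated charge radiates is given by the Larmor formula; the expression below is the standard textbook result, quoted here (in both SI and Gaussian units) only as background for the argument that follows.

\[ P = \frac{q^2 a^2}{6 \pi \varepsilon_0 c^3} \;\; \text{(SI units)} \qquad\Longleftrightarrow\qquad P = \frac{2}{3}\,\frac{q^2 a^2}{c^3} \;\; \text{(Gaussian units)}, \]

where q is the charge and a its acceleration.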
The theory of general relativity is built on the equivalence principle of gravitation and inertia. This principle states that it is impossible to distinguish through any local measurement whether one is in a gravitational field or being accelerated. An elevator out in deep space, far from any planet, could mimic a gravitational field to its occupants if it could be accelerated continuously "upward". Whether the acceleration is from motion or from gravity makes no difference in the laws of physics. One can also understand it in terms of the equivalence of so-called gravitational mass and inertial mass. The mass in Newton's law of universal gravitation (gravitational mass) is the same as the mass in Newton's second law of motion (inertial mass). They cancel out when equated, with the result discovered by Galileo Galilei in 1638, that all bodies fall at the same rate in a gravitational field, independent of their mass. A famous demonstration of this principle was performed on the Moon during the Apollo 15 mission, when a hammer and a feather were dropped at the same time and struck the surface at the same time.
Closely tied in with this equivalence is the fact that gravity vanishes in free fall. For objects falling in an elevator whose cable is cut, all gravitational forces vanish, and things begin to look like the free-floating absence of forces one sees in videos from the International Space Station. It is a linchpin of general relativity that everything must fall together in free fall. Just as with acceleration versus gravity, no experiment should be able to distinguish the effects of free fall in a gravitational field, and being out in deep space far from any forces.
Statement of the paradox
Putting together these two basic facts of general relativity and electrodynamics, we seem to encounter a paradox. For if we dropped a neutral particle and a charged particle together in a gravitational field, the charged particle should begin to radiate as it is accelerated under gravity, thereby losing energy and slowing relative to the neutral particle. Then a free-falling observer could distinguish free fall from the true absence of forces, because a charged particle in a free-falling laboratory would begin to be pulled upward relative to the neutral parts of the laboratory, even though no obvious electric fields were present.
Equivalently, we can think about a charged particle at rest in a laboratory on the surface of the Earth. In order to be at rest, it must be supported by something which exerts an upward force on it. This system is equivalent to being in outer space accelerated constantly upward at 1 g, and we know that a charged particle accelerated upward at 1 g would radiate. However, we do not see radiation from charged particles at rest in the laboratory. It would seem that we could distinguish between a gravitational field and acceleration, because an electric charge apparently only radiates when it is being accelerated through motion, but not through gravitation.
Resolution by Rohrlich
The resolution of this paradox, like the twin paradox and ladder paradox, comes through appropriate care in distinguishing frames of reference. This section follows the analysis of Fritz Rohrlich (1965), who shows that a charged particle and a neutral particle fall equally fast in a gravitational field. Likewise, a charged particle at rest in a gravitational field does not radiate in its rest frame, but it does so in the frame of a free-falling observer. The equivalence principle is preserved for charged particles.
The key is to realize that the laws of electrodynamics, Maxwell's equations, hold only within an inertial frame, that is, in a frame in which all forces act locally, and there is no net acceleration when the net local forces are zero. The frame could be free fall under gravity, or far in space away from any forces. The surface of the Earth is not an inertial frame, as it is being constantly accelerated. We know that the surface of the Earth is not an inertial frame because an object at rest there may not remain at rest—objects at rest fall to the ground when released. Gravity is a non-local fictitious “force” within the Earth's surface frame, just like centrifugal “force”. So we cannot naively formulate expectations based on Maxwell's equations in this frame. It is remarkable that we now understand the special-relativistic Maxwell equations do not hold, strictly speaking, on the surface of the Earth, even though they were discovered in electrical and magnetic experiments conducted in laboratories on the surface of the Earth. (This is similar to how the concept of mechanics in an inertial frame is not applicable to the surface of the Earth even disregarding gravity due to its rotation - cf. e.g. Foucault pendulum, yet they were originally found from considering ground experiments and intuitions.) Therefore, in this case, we cannot apply Maxwell's equations to the description of a falling charge relative to a "supported", non-inertial observer.
Maxwell's equations can be applied relative to an observer in free fall, because free-fall is an inertial frame. So the starting point of considerations is to work in the free-fall frame in a gravitational field—a "falling" observer. In the free-fall frame, Maxwell's equations have their usual, flat-spacetime form for the falling observer. In this frame, the electric and magnetic fields of the charge are simple: the falling electric field is just the Coulomb field of a charge at rest, and the magnetic field is zero. As an aside, note that we are building in the equivalence principle from the start, including the assumption that a charged particle falls equally as fast as a neutral particle.
The fields measured by an observer supported on the surface of the Earth are different. Given the electric and magnetic fields in the falling frame, we have to transform those fields into the frame of the supported observer. This manipulation is not a Lorentz transformation, because the two frames have a relative acceleration. Instead, the machinery of general relativity must be used.
In this case the gravitational field is fictitious because it can be "transformed away" by appropriate choice of coordinate system in the falling frame. Unlike the total gravitational field of the Earth, here we are assuming that spacetime is locally flat, so that the curvature tensor vanishes. Equivalently, the lines of gravitational acceleration are everywhere parallel, with no convergences measurable in the laboratory. Then the most general static, flat-space, cylindrical metric and line element can be written:
where c is the speed of light, τ is the proper time, x, y, z and t are the usual coordinates of space and time, g is the acceleration of the gravitational field, and u is an arbitrary function of the coordinate z but must approach the observed Newtonian value of 1 + gz/c². This formula is the metric for the gravitational field measured by the supported observer.
Meanwhile, the metric in the frame of the falling observer is simply the Minkowski metric, whose line element is c²dτ² = c²dt² − dx² − dy² − dz².
From these two metrics Rohrlich constructs the coordinate transformation between them:
When this coordinate transformation is applied to the electric and magnetic fields of the charge in the rest frame, it is found to be radiating. Rohrlich emphasizes that this charge remains at rest in its free-fall frame, just as a neutral particle would. Furthermore, the radiation rate for this situation is Lorentz-invariant, but it is not invariant under the coordinate transformation above because it is not a Lorentz transformation.
To see whether the supported charge should radiate, we start again in the falling frame.
As observed from the freefalling frame, the supported charge appears to be accelerated uniformly upward. The case of constant acceleration of a charge is treated by Rohrlich. He finds that a charge uniformly accelerated at rate a has a radiation rate given by the Lorentz invariant R = 2q²a²/(3c³), where q is the charge (written here in Gaussian units).
The corresponding electric and magnetic fields of an accelerated charge are also given in Rohrlich. To find the fields of the charge in the supporting frame, the fields of the uniformly accelerated charge are transformed according to the coordinate transformation previously given. When that is done, one finds no radiation in the supporting frame from a supported charge, because the magnetic field is zero in this frame. Rohrlich does note that the gravitational field slightly distorts the Coulomb field of the supported charge, but not enough to be observable. So although the Coulomb law was discovered in a supporting frame, general relativity tells us that the field of such a charge is not precisely a pure Coulomb field.
Fate of the radiation
The radiation from the supported charge viewed in the freefalling frame (or vice versa) is something of a curiosity: one might ask where it goes. David G. Boulware (1980) finds that the radiation goes into a region of spacetime inaccessible to the co-accelerating, supported observer. In effect, a uniformly accelerated observer has an event horizon, and there are regions of spacetime inaccessible to this observer. Camila de Almeida and Alberto Saa (2006) have a more accessible treatment of the event horizon of the accelerated observer.
References
Books
Physical paradoxes
General relativity
Radiation
Relativistic paradoxes | Paradox of radiation of charged particles in a gravitational field | Physics,Chemistry | 2,129 |
42,754,865 | https://en.wikipedia.org/wiki/Teri%20W.%20Odom | Teri W. Odom is an American chemist and materials scientist. She is the chair of the chemistry department, the Joan Husting Madden and William H. Madden, Jr. Professor of Chemistry, and a professor of materials science and engineering at Northwestern University. She is affiliated with the university's International Institute for Nanotechnology, Chemistry of Life Processes Institute, Northwestern Initiative for Manufacturing Science and Innovation, Interdisciplinary Biological Sciences Graduate Program, and department of applied physics.
Education
Odom attended Stanford University, where she earned a BS in chemistry, was elected to Phi Beta Kappa, and received Stanford's Marsden Memorial Prize for Chemistry Research (1996). She obtained her PhD in chemical physics from Harvard University in 2001 under the guidance of Charles M. Lieber, then conducted post-doctoral research at Harvard with George M. Whitesides from 2001 to 2002.
Career
Odom joined Northwestern University's department of chemistry in 2002 and became the department chair in 2018. In 2010, she became the founding chair of the Noble Metal Nanoparticles Gordon Research Conference. Between 2016 and 2018, she was associate director of the International Institute for Nanotechnology. Odom has worked on the editorial advisory boards of ACS Nano, Bioconjugate Chemistry, Materials Horizons, Annual Review of Physical Chemistry, Natural Sciences, Nano Futures, and Accounts of Chemical Research. Odom became an inaugural associate editor for the Royal Society of Chemistry's Chemical Science journal in 2009, a position she held until 2013. She was on the editorial advisory board of Nano Letters beginning in 2010 and became editor-in-chief in 2019. In 2013, she became a founding Executive Editor for ACS Photonics.
Research interests
Research in the Odom group focuses on controlling materials at the 100 nm scale and investigating their size- and shape-dependent properties. The group has developed parallel, multi-scale patterning tools to generate hierarchical, anisotropic, and 3D hard and soft materials with applications in imaging, sensing, wetting and cancer therapeutics. Using these nanofabrication tools, Odom has developed flat optics that can manipulate light at the nanoscale and beat the diffraction limit, as well as tunable plasmon-based lasers. Odom also conducts research into nanoparticle-cell interactions using new biological nanoconstructs that offer imaging and therapeutic functions due to their shape (gold nanostars).
Personal life
Odom's husband Brian, now a physicist and astronomer at Northwestern University, piqued her interest in science by introducing her to the double-slit experiment while they were dating. He encouraged her to pursue undergraduate summer research, an experience that inspired her to continue studying physics and chemistry.
Awards and recognition
1996-1999 - National Science Foundation Predoctoral Fellow, Harvard University
2001 - International Union of Pure and Applied Chemistry Young Chemists award for thesis
2001-2002 - National Research Service Award Postdoctoral Fellow, Harvard University
2002 - Research Corporation's Research Innovation Award
2002 - Dow Teacher-Scholar Award, inaugural recipient
2003 - American Chemical Society's Victor K. LaMer Award
2003 - David and Lucile Packard Foundation Fellow
2004 - National Science Foundation CAREER Award
2004 - MIT Technology Review Top 100 Innovators
2005 - Cottrell Scholar Award
2005 - DuPont Young Investigator Award
2005 - Alfred P. Sloan Research Fellow
2006 - ExxonMobil Solid State Chemistry Faculty Fellow
2007 - Rohm and Haas New Faculty Award
2008 - Phi Lambda Upsilon's National Fresenius Award
2008 - National Institutes of Health Director's Pioneer Award
2009 - Materials Research Society Outstanding Young Investigator Award
2010 - Institute for Defense Analyses's Defense Sciences Study Group (one year)
2011 - Radcliffe Institute for Advanced Study Fellow, Harvard University
2014 - Royal Society of Chemistry Fellow
2014 - Blavatnik Awards for Young Scientists Finalist
2014 - International Precious Metals Institute's Carol Tyler Award
2016 - Materials Research Society Fellow
2016 - Blavatnik Awards for Young Scientists Finalist
2017 - ACS Nano Lectureship Award
2017 - United States Department of Defense Vannevar Bush Faculty Fellow
2018 - American Physical Society Fellow
2018 - Research Corporation Cottrell Scholar TREE Award
2018 - Optica Senior Member
2020 - Royal Society of Chemistry's Centenary Prize
2020 - American Academy of Arts and Sciences Fellow
2020 - American Chemical Society's Award in Surface Chemistry
2022 - American Institute for Medical and Biological Engineering Fellow
2022 - American Chemical Society's Crano Memorial Lecture (Akron Section) at Malone University
2022 - American Association for the Advancement of Science Fellow
References
External links
Northwestern Chemistry Faculty bio
The Odom Research Group
Living people
21st-century American chemists
American materials scientists
Northwestern University faculty
Stanford University alumni
Harvard University alumni
American women chemists
Women materials scientists and engineers
1970s births
American women academics
Solid state chemists
21st-century American women scientists
Fellows of the American Physical Society
Nanophysicists
American nanotechnologists
Sloan Research Fellows
Radcliffe fellows
Fellows of the Royal Society of Chemistry
Fellows of the American Academy of Arts and Sciences
Fellows of the American Institute for Medical and Biological Engineering
Fellows of the American Association for the Advancement of Science | Teri W. Odom | Chemistry,Materials_science,Technology | 1,027 |
47,250,276 | https://en.wikipedia.org/wiki/Dothiorella%20longicollis | Dothiorella longicollis is an endophytic fungus that might be a canker pathogen, specifically for Adansonia gibbosa (baobab). It was isolated from said trees, as well as surrounding ones, in the Kimberley (Western Australia).
References
Further reading
Sakalidis, Monique L., Giles E. StJ Hardy, and Treena I. Burgess. "Endophytes as potential pathogens of the baobab species Adansonia gregorii: a focus on the Botryosphaeriaceae." Fungal Ecology 4.1 (2011): 1–14.
Jami, Fahimeh, et al. "Five new species of the Botryosphaeriaceae from Acacia karroo in South Africa." Cryptogamie, Mycologie 33.3 (2012): 245–266.
Pavlic, Draginja. Dissertation, University of Pretoria, 2009.
External links
MycoBank
longicollis
Fungi described in 2008
Fungus species | Dothiorella longicollis | Biology | 215 |
65,457,925 | https://en.wikipedia.org/wiki/Static%20Context%20Header%20Compression | Static Context Header Compression (SCHC) is a standard compression and fragmentation mechanism defined in the IPv6 over LPWAN working group at the IETF. It offers compression and fragmentation of IPv6/UDP/CoAP packets to allow their transmission over the Low-Power Wide-Area Networks (LPWAN).
Compression scheme tailored to LPWAN
About LPWAN
Low-Power Wide-Area Network (LPWAN) gathers the connectivity technologies tailored for Internet of Things (IoT), allowing for:
long-range communication (up to 40 km),
very low energy consumption (on the device side),
and energy efficiency (for networks).
The trade-off for achieving these features includes severe limitations on throughput and on the supported packet size. LPWANs also come with limitations on transmission modalities: in order to save battery, devices are dormant most of the time and wake up only episodically to transmit and receive data during a short time window.
As a result, LPWANs use their own specific protocols, each adapted to its particular constraints. Most importantly, they cannot carry IPv6, which was designed to allocate addresses to the billions of connected IoT devices.
IETF compression standards
In the early 2000s, the IETF produced the first wave of mature standards for compression and fragmentation:
RoHC (Robust Header Compression) in 2001,
and 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks) in 2007.
Yet, these compression schemes cannot fit the LPWAN specificities.
SCHC associates the benefits of the RoHC context, which provides high flexibility in the fields processing, and of the 6LoWPAN operations to avoid transiting fields that are known by the other side.
SCHC compression
SCHC takes advantage of the LPWAN characteristics (no routing, highly predictable traffic format and content of messages) to reduce the overhead to a few bytes and save network traffic.
The SCHC compression is based on the notion of context. A context is a set of rules that describes the communication context, meaning the header fields. It is shared and pre-provisioned in both the end-devices and the core network. The "static context" assumes that the rule description does not change during transmission. Thanks to this mechanism, IPv6/UDP headers are in most cases reduced to a small identifier.
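The rule-based idea can be sketched in a few lines of code. The rule layout, field names and numeric identifiers below are invented for illustration and do not follow the exact rule description or bit-level encoding defined in RFC 8724:

```python
# Illustrative sketch of static-context compression: header fields whose values are
# fixed by a pre-shared rule are elided and replaced by the rule identifier; only
# fields the rule cannot predict are transmitted. Rule layout and names are invented.
RULES = {
    1: {"ipv6.version": 6, "ipv6.next_header": 17, "udp.dst_port": 5683},
}

def compress(header):
    for rule_id, known in RULES.items():
        if all(header.get(field) == value for field, value in known.items()):
            residue = {f: v for f, v in header.items() if f not in known}
            return rule_id, residue        # small rule ID + unpredictable fields only
    return None, header                    # no matching rule: send uncompressed

def decompress(rule_id, residue):
    return {**RULES[rule_id], **residue} if rule_id is not None else residue

packet = {"ipv6.version": 6, "ipv6.next_header": 17,
          "udp.dst_port": 5683, "udp.src_port": 34567}
rule_id, residue = compress(packet)
assert decompress(rule_id, residue) == packet   # both sides share the same static context
```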
SCHC fragmentation
When compression is not enough, SCHC provides a fragmentation mechanism that works in 3 different ways:
No-Ack
In this mode the SCHC packet is separated into multiple fragments that are blindly sent to the receiver; if the receiver misses any fragment, it will not be able to rebuild the original packet (a minimal sketch of this mode is given after the mode descriptions below).
Ack-On-Error
In this mode the concept of "windows" is used. Windows have a predefined size, allowing the receiver to keep track of which windows, or parts of windows, have been received. When the receiver gets the last fragment from the sender, it determines which parts of the packet are missing and sends a message describing them to the sender, which then retransmits the missing parts.
Ack-Always
In Ack-Always mode the same retransmission mechanism as for Ack-On-Error is used, except that the acknowledgment and retransmission exchange takes place after each window rather than only at the end of the transmission.
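The sketch below, referred to in the No-Ack description above, shows only the simplest behaviour: the SCHC packet is cut into fixed-size pieces and reassembled in order, with no recovery if a fragment is lost. The sizes, tuple layout and function names are illustrative assumptions rather than the bit-level fragment header format of RFC 8724.

```python
# Minimal No-Ack style fragmentation and reassembly sketch (sizes and layout are
# illustrative only; real SCHC fragment headers are compact, bit-oriented fields).
def fragment(packet, tile_size):
    tiles = [packet[i:i + tile_size] for i in range(0, len(packet), tile_size)]
    # Tag each tile with its index and mark the last one so the receiver knows
    # when the packet is complete.
    return [(index, index == len(tiles) - 1, tile) for index, tile in enumerate(tiles)]

def reassemble(fragments):
    # In No-Ack mode a single missing fragment makes reassembly impossible.
    fragments = sorted(fragments)
    if [index for index, _, _ in fragments] != list(range(len(fragments))):
        raise ValueError("missing fragment: the packet cannot be rebuilt")
    return b"".join(tile for _, _, tile in fragments)

payload = b"a compressed SCHC packet larger than the layer-2 payload size"
assert reassemble(fragment(payload, 16)) == payload
```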
Standardization efforts
The Generic Framework for Static Context Header Compression and Fragmentation, RFC 8724, was published in April 2020. It describes the generic framework that can be used on all LPWAN technologies, and more generally on all Internet networks.
Additional work is dedicated to the definition of standard parameter settings and modes of operation to optimize SCHC's performance according to the implemented protocols and the underlying LPWAN technologies:
RFC 9011: SCHC over LoRaWAN
RFC 8824: SCHC for CoAP
RFC 9363: YANG Data Model for SCHC
RFC 9391: SCHC over NB-IoT
SCHC over Sigfox
SCHC over IEEE 802.15.4 networks
OAM for LPWAN using SCHC
On May 17, 2022, The LoRa Alliance (global association of companies backing the open LoRaWAN® standard for the internet of things low-power wide-area networks) announced that LoRaWAN now seamlessly supports Internet Protocol version 6 (IPv6) from end-to-end. By expanding the breadth of device-to-application solutions with IPv6, LoRaWAN's addressable IoT market is also broadened to include internet based standards required in smart electricity metering and new applications in smart buildings, industries, logistics, and homes. The Alliance released a technical specification TS 10–1.0.0 to explain how to use SCHC as an adaptation layer to enable LoRaWAN end-devices to use IPv6-based stacks over LoRaWAN and expands its certification program to include SCHC over LoRaWAN® Enabling IPv6 Solutions.
In addition, SCHC is being adopted in a joint standardization effort carried out by the DLMS User Association and the LoRa Alliance for the smart metering industries.
See also
LPWAN: Low Power Wide Area Networks
IPv6: Version 6 of the Internet Protocol
6LoWPAN: IPv6 over Low-Power Wireless Personal Area Networks
RoHC: Robust Header Compression
CoAP: Constrained Application Protocol
References
External links
IPv6 over Low Power Wide-Area Networks (LPWAN) Working group at IETF
RFC 8724 – SCHC: Generic Framework for Static Context Header Compression and Fragmentation
RFC 9011 – SCHC over LoRaWAN
RFC 8824 – SCHC for CoAP
RFC 9363 – YANG Data Model for SCHC
RFC 8376 – Low-Power Wide Area Network (LPWAN) Overview
IPv6
Wireless networking standards
Data compression
Internet protocols
Internet layer protocols | Static Context Header Compression | Technology | 1,199 |
9,710 | https://en.wikipedia.org/wiki/Elementary%20algebra | Elementary algebra, also known as high school algebra or college algebra, encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables (quantities without fixed values).
This use of variables entails use of algebraic notation and an understanding of the general rules of the operations introduced in arithmetic: addition, subtraction, multiplication, division, etc. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers.
It is typically taught to secondary school students and at introductory college level in the United States, and builds on their understanding of arithmetic. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations.
Algebraic operations
Algebraic notation
Algebraic notation describes the rules and conventions for writing mathematical expressions, as well as the terminology used for talking about parts of expressions. For example, the expression has the following components:
A coefficient is a numerical value, or letter representing a numerical constant, that multiplies a variable (the operator is omitted). A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. a, b, c) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y and z) are used to represent variables. They are usually printed in italics.
Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, 3 × x² is written as 3x², and 2 × x × y may be written 2xy.
Usually terms with the highest power (exponent) are written on the left, for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²). Likewise when the exponent (power) is one (e.g. 3x¹ is written 3x). When the exponent is zero, the result is always 1 (e.g. x⁰ is always rewritten to 1). However 0⁰, being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents.
Alternative notation
Other types of notation are used in algebraic expressions when the required formatting is not available, or cannot be implied, such as where only letters and symbols are available. As an illustration of this, while exponents are usually formatted using superscripts, e.g., x², in plain text, and in the TeX mark-up language, the caret symbol represents exponentiation, so x² is written as "x^2". This also applies to some programming languages such as Lua. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x² is written as "x**2". Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used; for example, 3x is written "3*x".
Concepts
Variables
Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons.
Variables may represent numbers whose values are not yet known. For example, if the temperature of the current day, C, is 20 degrees higher than the temperature of the previous day, P, then the problem can be described algebraically as C = P + 20.
Variables allow one to describe general problems, without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to 300 seconds. A more general (algebraic) description may state that the number of seconds, s = 60 × m, where m is the number of minutes.
Variables allow one to describe mathematical relationships between quantities that may vary. For example, the relationship between the circumference, c, and diameter, d, of a circle is described by c = πd.
Variables allow one to describe some mathematical properties. For example, a basic property of addition is commutativity which states that the order of numbers being added together does not matter. Commutativity is stated algebraically as a + b = b + a.
Simplifying expressions
Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example,
Added terms are simplified using coefficients. For example, x + x + x can be simplified as 3x (where 3 is a numerical coefficient).
Multiplied terms are simplified using exponents. For example, x × x × x is represented as x³.
Like terms are added together, for example, is written as , because the terms containing are added together, and, the terms containing are added together.
Brackets can be "multiplied out", using the distributive property. For example, can be written as which can be written as
Expressions can be factored. For example, , by dividing both terms by the common factor, can be written as
Equations
An equation states that two expressions are equal using the symbol for equality, = (the equals sign). One of the best-known equations describes Pythagoras' law relating the lengths of the sides of a right angle triangle: a² + b² = c².
This equation states that c², representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b.
An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as ); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. is true only for and . The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving.
Another type of equation is an inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are >, which represents 'greater than', and <, which represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped.
Properties of equality
By definition, equality is an equivalence relation, meaning it is reflexive (i.e. a = a), symmetric (i.e. if a = b then b = a), and transitive (i.e. if a = b and b = c then a = c). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties:
if a = b and c = d then a + c = b + d and ac = bd;
if a = b then a + c = b + c and ac = bc;
more generally, for any function f, if a = b then f(a) = f(b).
Properties of inequality
The relations less than and greater than have the property of transitivity:
If a > b and b > c then a > c;
If a > b and c > d then a + c > b + d;
If a > b and c > 0 then ac > bc;
If a > b and c < 0 then ac < bc.
By reversing the inequation, < and > can be swapped, for example:
a < b is equivalent to b > a.
Substitution
Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for in the expression makes a new expression with meaning . Substituting the terms of a statement makes a new statement. When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence, definitions can be made in symbolic terms and interpreted through substitution: if is meant as the definition of as the product of with itself, substituting for informs the reader of this statement that means . Often it's not known whether the statement is true independently of the values of the terms. And, substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement , if is substituted with , this implies , which is false, which implies that if then cannot be .
If and are integers, rationals, or real numbers, then implies or . Consider . Then, substituting for and for , we learn or . Then we can substitute again, letting and , to show that if then or . Therefore, if , then or ( or ), so implies or or .
If the original fact were stated as " implies or ", then when saying "consider ," we would have a conflict of terms when substituting. Yet the above logic is still valid to show that if then or or if, instead of letting and , one substitutes for and for (and with , substituting for and for ). This shows that substituting for the terms in a statement isn't always the same as letting the terms from the statement equal the substituted terms. In this situation it's clear that if we substitute an expression into the term of the original equation, the substituted does not refer to the in the statement " implies or ."
Solving algebraic equations
The following sections lay out examples of some of the types of algebraic equations that may be encountered.
Linear equations with one variable
Linear equations are so-called, because when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider:
Problem in words: If you double the age of a child and add 4, the resulting answer is 12. How old is the child?
Equivalent equation: 2x + 4 = 12, where x represents the child's age
To solve this kind of equation, the technique is add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable. This problem and its solution are as follows:
In words: the child is 4 years old.
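Writing the child's age as x (the choice of symbol here is for illustration), the steps are:

\[ 2x + 4 = 12 \;\Rightarrow\; 2x = 12 - 4 = 8 \;\Rightarrow\; x = \frac{8}{2} = 4. \]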
The general form of a linear equation with one variable can be written as ax + b = c.
Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by x = (c − b)/a.
Linear equations with two variables
A linear equation with two variables has many (i.e. an infinite number of) solutions. For example:
Problem in words: A father is 22 years older than his son. How old are they?
Equivalent equation: y = x + 22, where y is the father's age and x is the son's age.
That cannot be worked out by itself. If the son's age was made known, then there would no longer be two unknowns (variables). The problem then becomes a linear equation with just one variable, that can be solved as described above.
To solve a linear equation with two variables (unknowns), requires two related equations. For example, if it was also revealed that:
Problem in words
In 10 years, the father will be twice as old as his son.
Equivalent equation: y + 10 = 2 × (x + 10)
Now there are two related linear equations, each with two unknowns, which enables the production of a linear equation with just one variable, by subtracting one from the other (called the elimination method):
In other words, the son is aged 12, and since the father is 22 years older, he must be 34. In 10 years, the son will be 22, and the father will be twice his age, 44. This problem is illustrated on the associated plot of the equations.
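Carried out explicitly with the symbols introduced above, the two equations and the elimination step are:

\[ y = x + 22, \qquad y + 10 = 2(x + 10). \]

Substituting the first equation into the second (equivalently, subtracting one equation from the other) gives

\[ (x + 22) + 10 = 2x + 20 \;\Rightarrow\; x = 12, \qquad y = x + 22 = 34. \]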
For other ways to solve this kind of equations, see below, System of linear equations.
Quadratic equations
A quadratic equation is one which includes a term with an exponent of 2, for example, x², and no term with a higher exponent. The name derives from the Latin quadrus, meaning square. In general, a quadratic equation can be expressed in the form ax² + bx + c = 0, where a is not zero (if it were zero, then the equation would not be quadratic but linear). Because of this a quadratic equation must contain the term ax², which is known as the quadratic term. Hence a ≠ 0, and so we may divide by a and rearrange the equation into the standard form x² + px + q = 0,
where p = b/a and q = c/a. Solving this, by a process known as completing the square, leads to the quadratic formula
where the symbol "±" indicates that both
are solutions of the quadratic equation.
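With the standard form x² + px + q = 0 used above, completing the square gives the quadratic formula, which can equivalently be written in terms of the original coefficients a, b and c:

\[ x = -\frac{p}{2} \pm \sqrt{\frac{p^2}{4} - q} \qquad\Longleftrightarrow\qquad x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. \]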
Quadratic equations can also be solved using factorization (the reverse process of which is expansion, but for two linear terms is sometimes denoted foiling). As an example of factoring:
which is the same thing as
It follows from the zero-product property that either or are the solutions, since precisely one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example,
x² = −1 has no real number solution since no real number squared equals −1.
Sometimes a quadratic equation has a root of multiplicity 2, such as: x² + 2x + 1 = 0.
For this equation, −1 is a root of multiplicity 2. This means −1 appears twice, since the equation can be rewritten in factored form as (x + 1)(x + 1) = 0.
Complex numbers
All quadratic equations have exactly two solutions in complex numbers (but they may be equal to each other), a category that includes real numbers, imaginary numbers, and sums of real and imaginary numbers. Complex numbers first arise in the teaching of quadratic equations and the quadratic formula. For example, the quadratic equation
has solutions
Since is not any real number, both of these solutions for x are complex numbers.
Exponential and logarithmic equations
An exponential equation is one which has the form for , which has solution
when . Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if
then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3 we obtain
whence
or
A logarithmic equation is an equation of the form for , which has solution
For example, if
then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get
whence
from which we obtain
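A fully worked illustrative pair, with values assumed for the sake of example and following the same steps described above:

\[ 3 \cdot 2^x + 1 = 10 \;\Rightarrow\; 2^x = 3 \;\Rightarrow\; x = \log_2 3 = \frac{\ln 3}{\ln 2} \approx 1.585, \]

\[ 4 \log_2(x) - 2 = 6 \;\Rightarrow\; \log_2(x) = 2 \;\Rightarrow\; x = 2^2 = 4. \]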
Radical equations
A radical equation is one that includes a radical sign, which includes square roots, cube roots, , and nth roots, . Recall that an nth root can be rewritten in exponential format, so that is equivalent to . Combined with regular exponents (powers), then (the square root of cubed), can be rewritten as . So a common form of a radical equation is (equivalent to ) where and are integers. It has real solution(s):
For example, if:
then
and thus
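A worked illustration with assumed numbers, using the form described above with m = 3 and n = 2:

\[ x^{3/2} = 8 \;\Rightarrow\; x = 8^{2/3} = \left(\sqrt[3]{8}\right)^2 = 4. \]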
System of linear equations
There are different methods to solve a system of linear equations with two variables.
Elimination method
An example of solving a system of linear equations is by using the elimination method:
Multiplying the terms in the second equation by 2:
Adding the two equations together to get:
which simplifies to
Since the fact that is known, it is then possible to deduce that by either of the original two equations (by using 2 instead of ) The full solution to this problem is then
This is not the only way to solve this specific system; could have been resolved before .
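An assumed example system that reproduces the steps described above (multiply the second equation by 2, add, then substitute back):

\[ 4x + 2y = 14, \qquad 2x - y = 1. \]

Multiplying the second equation by 2 gives 4x − 2y = 2; adding it to the first eliminates y:

\[ 8x = 16 \;\Rightarrow\; x = 2, \qquad y = 2x - 1 = 3. \]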
Substitution method
Another way of solving the same system of linear equations is by substitution.
An equivalent for can be deduced by using one of the two equations. Using the second equation:
Subtracting from each side of the equation:
and multiplying by −1:
Using this value in the first equation in the original system:
Adding 2 on each side of the equation:
which simplifies to
Using this value in one of the equations, the same solution as in the previous method is obtained.
This is not the only way to solve this specific system; in this case as well, could have been solved before .
Other types of systems of linear equations
Inconsistent systems
In the above example, a solution exists. However, there are also systems of equations which do not have any solution. Such a system is called inconsistent. An obvious example is
As 0≠2, the second equation in the system has no solution. Therefore, the system has no solution.
However, not all inconsistent systems are recognized at first sight. As an example, consider the system
Multiplying by 2 both sides of the second equation, and adding it to the first one results in
which clearly has no solution.
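An assumed pair of equations of this kind makes the point concrete:

\[ 4x + 2y = 12, \qquad -2x - y = -4. \]

Multiplying the second equation by 2 and adding it to the first gives 0 = 4, which is false, so the system has no solution.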
Undetermined systems
There are also systems which have infinitely many solutions, in contrast to a system with a unique solution (meaning, a unique pair of values for and ) For example:
Isolating in the second equation:
And using this value in the first equation in the system:
The equality is true, but it does not provide a value for . Indeed, one can easily verify (by just filling in some values of ) that for any there is a solution as long as . There is an infinite number of solutions for this system.
Over- and underdetermined systems
Systems with more variables than the number of linear equations are called underdetermined. Such a system, if it has any solutions, does not have a unique one but rather an infinitude of them. An example of such a system is
When trying to solve it, one is led to express some variables as functions of the other ones if any solutions exist, but cannot express all solutions numerically because there are an infinite number of them if there are any.
A system with a higher number of equations than variables is called overdetermined. If an overdetermined system has any solutions, necessarily some equations are linear combinations of the others.
See also
History of algebra
Binary operation
Gaussian elimination
Mathematics education
Number line
Polynomial
Cancelling out
Tarski's high school algebra problem
References
Leonhard Euler, Elements of Algebra, 1770. English translation Tarquin Press, 2007, , also online digitized editions 2006, 1822.
Charles Smith, A Treatise on Algebra, in Cornell University Library Historical Math Monographs.
Redden, John. Elementary Algebra . Flat World Knowledge, 2011
External links
Algebra | Elementary algebra | Mathematics | 3,758 |
23,105,334 | https://en.wikipedia.org/wiki/Lodge%20Cottrell | Lodge Cottrell Ltd. is a supplier of environmental air pollution control equipment for the power generation industry and other industrial process applications, with over 4,500 installations worldwide. It has facilities in Birmingham, England and Houston, Texas, United States, and operates through a network of associates, partners, agents and licensees. Lodge Cottrell Ltd and Lodge Cottrell Inc. are part of KC Cottrell Co., Ltd., an air pollution control company with its headquarters in Seoul, South Korea.
The company went into administration in May 2024.
History
The Lodge Fume Deposit Company Limited was founded in Birmingham, England in 1913 by Sir Oliver Lodge, who pioneered the electrostatic precipitation technique for removing dust. In 1922, the Lodge Fume Company changed its name to Lodge-Cottrell Ltd. in honor of Frederick Gardner Cottrell's additional contributions to the development of electrostatic precipitation.
References
External links
Lodge Cottrell Ltd website
Pollution control technologies
Technology companies established in 1922
Manufacturing companies based in Birmingham, West Midlands
1922 establishments in England | Lodge Cottrell | Chemistry,Engineering | 213 |
18,008,681 | https://en.wikipedia.org/wiki/American%20Society%20for%20Bone%20and%20Mineral%20Research | The American Society for Bone and Mineral Research (ASBMR) is a professional, scientific and medical society established in 1977 to promote excellence in bone and mineral research and to facilitate the translation of that research into clinical practice. The ASBMR has a membership of nearly 4,000 physicians, basic research scientists, and clinical investigators from around the world.
Mission
The mission of the ASBMR is to promote excellence in bone and mineral research, foster integration of clinical and basic science, and facilitate the translation of that science to health care and clinical practice. The Society's broad goals include supporting the educational development of future generations of basic and clinical scientists, and disseminating new knowledge in bone and mineral metabolism.
Founding
In the 1970s, a growing number of US-based scientists began to focus their research on the understanding of basic bone biology and the disease osteoporosis. This led to the rise of a new field – bone and mineral research. In 1974, while attending the annual meeting of The Endocrine Society in Chicago, Illinois, USA, bone scientists Louis Avioli, Claude Arnaud, Norman Bell, William Peck, John Potts and Lawrence Raisz, along with Shirley Hohl, met at the Drake Hotel. The group laid the groundwork for an organization that would promote the study of bone and mineral research, support scientists involved in such research, and facilitate the discussion and exchange of new developments in the field. Three years later, in November 1977, the group's goals were realized with the official incorporation of the ASBMR as a nonprofit organization in St. Louis, Missouri, USA. The first ASBMR Annual Meeting was held June 11–12, 1979, at the Disneyland Hotel in Anaheim, California, USA, with approximately 150 people in attendance.
Growth
In 1984, ASBMR leaders established The Osteoporosis Foundation, which was later renamed the National Osteoporosis Foundation.
The Journal of Bone and Mineral Research (JBMR), the official journal of the ASBMR, was established in 1986. Initially a bi-monthly publication, it became a monthly journal in 1990. Lawrence G. Raisz served as the first Editor-in-Chief of the JBMR.
The ASBMR published the first edition of The Primer on the Metabolic Bone Diseases and Disorders of Mineral Metabolism, a textbook for medical and graduate students, in 1990. Murray J. Favus served as the Editor-in-Chief for the first six editions of the Primer.
The rise of new pharmacological treatments for osteoporosis in the 1990s, most notably the class of drugs known as bisphosphonates, gave rise to an influx of scientists and clinician-researchers into the field. This influx resulted in a dramatic increase in ASBMR membership and annual meeting attendance. During this period, the ASBMR expanded its advocacy endeavors targeted at U.S. government funding for bone research. The Society was a founding member of the National Coalition for Osteoporosis and Related Bone Diseases ("Bone Coalition") in 1991. ASBMR also became a member of the Federation of American Societies for Experimental Biology (FASEB).
Though conceived as a scientific research society, in recent years, ASBMR has made increasing public and healthcare professional awareness of bone diseases a top priority. The Society launched several educational initiatives aimed at primary care physicians to improve the detection and treatment of bone diseases, and founded the National Bone Health Alliance to serve as a resource and raise public awareness of bone diseases. It has also spearheaded task forces on numerous clinically relevant topics, including: osteonecrosis of the jaw and atypical femoral fractures. The Society has also sought to expand the study of bone to those in related fields and to those in emerging areas of the world.
Education
Annual meetings
ASBMR's annual meeting brings together leading basic, translational and clinical researchers in bone from around the world. The event is held in September or October and attracts nearly 4,000 attendees each year. The scientific program includes poster presentations, plenary lectures, workshops, networking events, ancillary meetings, and a host of other activities. Hallmarks of the ASBMR Annual Meeting include the Gerald D. Aurbach Lecture, the Louis V. Avioli Lecture, and the ASBMR/ECTS Clinical Debate. Abstracts from the meeting's poster presentations are published as supplements in the JBMR.
Topical Meetings
The ASBMR began holding topical meetings in 2002 to address specialized research topics within the bone field. Smaller in scale, topical meetings disseminate and discuss in-depth research on a specific area of scientific interest.
Publications
Journal of Bone and Mineral Research
The Journal of Bone and Mineral Research is the official journal of the ASBMR. The JBMR publishes original manuscripts, reviews, and special articles in basic and clinical science relevant to bone, muscle and mineral metabolism. Manuscripts are published on the biology and physiology of bone and muscle, relevant systems biology topics (e.g. osteoimmunology), and the pathophysiology and treatment of osteoporosis, sarcopenia and other disorders of bone and mineral metabolism. Its 2016 Journal Citation Reports impact factor was 6.3, ranking 15th of 138 journals in the Endocrinology & Metabolism category.
JBMR readers include basic scientists and physicians specializing in endocrinology, physiology, cell biology, pathology, molecular genetics, epidemiology, internal medicine, rheumatology, orthopaedics, geriatrics, dentistry, gynecology, molecular biology, nephrology and many other disciplines. The JBMR's international editorial board encourages manuscript submissions from around the world.
Primer on the Metabolic Bone Diseases and Disorders of Mineral Metabolism
The Primer on the Metabolic Bone Diseases and Disorders of Mineral Metabolism is a resource for scientists and students seeking an overview of the bone and mineral field and for clinicians who see patients with disorders of bone and mineral metabolism. The text provides valuable information on the symptoms, pathophysiology, diagnosis, and treatment of metabolic bone diseases and both common and rare disorders. Authors include internationally renowned experts in the field.
ASBMR e-News Weekly
The ASBMR e-News Weekly is a member newsletter featuring society initiatives and related information on upcoming events, conferences, membership benefits, and other important information. The newsletter keeps members abreast of ASBMR activities—from Council, Committee, task force, and program updates to the role of ASBMR within the bone and mineral field and the scientific and medical community at large. Each issue also includes the most recently published articles in JBMR and highlights current noteworthy news articles pertaining to the bone field from thousands of news sources worldwide.
Grants and awards
The Society has established numerous grant and award programs since its inception aimed at supporting the career development of its members, as well as recognizing their scientific accomplishments and contributions to the field.
See also
International Bone and Mineral Society
Orthopaedic Research Society
Australian and New Zealand Bone and Mineral Society
References
External links
ASBMR Website
Journal of Bone and Mineral Research
Primer on the Metabolic Bone Diseases and Disorders of Mineral Metabolism
International Federation of Musculoskeletal Research Societies
Non-profit organizations based in Washington, D.C.
Chemistry societies
Biology societies
Osteology | American Society for Bone and Mineral Research | Chemistry | 1,498 |
16,770,101 | https://en.wikipedia.org/wiki/Long%20non-coding%20RNA | Long non-coding RNAs (long ncRNAs, lncRNA) are a type of RNA, generally defined as transcripts of more than 200 nucleotides that are not translated into protein. This arbitrary limit distinguishes long ncRNAs from small non-coding RNAs, such as microRNAs (miRNAs), small interfering RNAs (siRNAs), Piwi-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), and other short RNAs. Given that some lncRNAs have been reported to have the potential to encode small proteins or micro-peptides, the latest definition of lncRNA is a class of transcripts of over 200 nucleotides that have no or limited coding capacity. However, John S. Mattick and colleagues have suggested changing the definition of long non-coding RNAs to transcripts of more than 500 nt, which are mostly generated by Pol II. This means that the exact definition of lncRNA is still under discussion in the field. Long intervening/intergenic noncoding RNAs (lincRNAs) are sequences of transcripts that do not overlap protein-coding genes.
Long non-coding RNAs include intergenic lincRNAs, intronic ncRNAs, and sense and antisense lncRNAs, each type showing different genomic positions in relation to genes and exons.
The definition of lncRNAs differs from that of other RNAs such as siRNAs, mRNAs, miRNAs, and snoRNAs because it is not connected to the function of the RNA. A lncRNA is any transcript that is not one of the other well-characterized RNAs and is longer than 200-500 nucleotides. Some scientists think that most lncRNAs do not have a biologically relevant function because they are transcripts of junk DNA.
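As a rough illustration of the length-based definition above, the following Python sketch classifies a transcript as lncRNA-like when it is longer than 200 nucleotides and lacks a substantial open reading frame; the 100-codon ORF threshold and the forward-strand-only scan are assumptions for illustration, not a validated identification pipeline:

```python
# Illustrative sketch only: a naive filter reflecting the length-based
# definition discussed above. The 200 nt cutoff and the 100-codon ORF
# threshold are assumptions for illustration, not a validated pipeline.
def longest_orf_codons(seq: str) -> int:
    """Approximate length (in codons) of the longest ORF on the forward strand."""
    stops = {"TAA", "TAG", "TGA"}
    best = 0
    for frame in range(3):
        for start in range(frame, len(seq) - 2, 3):
            if seq[start:start + 3] == "ATG":
                length = 0
                for pos in range(start, len(seq) - 2, 3):
                    length += 1
                    if seq[pos:pos + 3] in stops:
                        break
                best = max(best, length)
    return best

def looks_like_lncRNA(transcript: str) -> bool:
    # Longer than 200 nt and no substantial coding capacity.
    return len(transcript) > 200 and longest_orf_codons(transcript) < 100

print(looks_like_lncRNA("AT" * 150))  # True: long transcript with no real ORF
```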
Abundance
Long non-coding transcripts are found in many species. Large-scale complementary DNA (cDNA) sequencing projects such as FANTOM reveal the complexity of these transcripts in humans. The FANTOM3 project identified ~35,000 non-coding transcripts that bear many signatures of messenger RNAs, including 5' capping, splicing, and poly-adenylation, but have little or no open reading frame (ORF). This number represents a conservative lower estimate, since it omitted many singleton transcripts and non-polyadenylated transcripts (tiling array data shows more than 40% of transcripts are non-polyadenylated). Identifying ncRNAs within these cDNA libraries is challenging since it can be difficult to distinguish protein-coding transcripts from non-coding transcripts. It has been suggested through multiple studies that testis and neural tissues express the greatest amount of long non-coding RNAs of any tissue type. Using FANTOM5, 27,919 long ncRNAs have been identified in various human sources.
Quantitatively, lncRNAs demonstrate ~10-fold lower abundance than mRNAs, which is explained by higher cell-to-cell variation of expression levels of lncRNA genes in the individual cells, when compared to protein-coding genes. In general, the majority (~78%) of lncRNAs are characterized as tissue-specific, as opposed to only ~19% of mRNAs. Only 3.6% of human lncRNA genes are expressed in various biological contexts and 34% of lncRNA genes are expressed at high level (top 25% of both lncRNAs and mRNAs) in at least one biological context. In addition to higher tissue specificity, lncRNAs are characterized by higher developmental stage specificity, and cell subtype specificity in tissues such as human neocortex and other parts of the brain, regulating correct brain development and function. In 2022, a comprehensive integration of lncRNAs from existing databases revealed that there are 95,243 lncRNA genes and 323,950 transcripts in humans.
In comparison to mammals relatively few studies have focused on the prevalence of lncRNAs in plants. However an extensive study considering 37 higher plant species and six algae identified ~200,000 non-coding transcripts using an in-silico approach, which also established the associated Green Non-Coding Database (GreeNC), a repository of plant lncRNAs.
Genomic organization
In 2005 the landscape of the mammalian genome was described as numerous 'foci' of transcription that are separated by long stretches of intergenic space. While some long ncRNAs are located within the intergenic stretches, the majority are overlapping sense and antisense transcripts that often include protein-coding genes, giving rise to a complex hierarchy of overlapping isoforms. Genomic sequences within these transcriptional foci are often shared within a number of coding and non-coding transcripts in the sense and antisense directions. For example, 3012 out of 8961 cDNAs previously annotated as truncated coding sequences within FANTOM2 were later designated as genuine ncRNA variants of protein-coding cDNAs. While the abundance and conservation of these arrangements suggest they have biological relevance, the complexity of these foci frustrates easy evaluation.
The GENCODE consortium has collated and analysed a comprehensive set of human lncRNA annotations and their genomic organisation, modifications, cellular locations and tissue expression profiles. Their analysis indicates human lncRNAs show a bias toward two-exon transcripts.
Identification software
Translation
There has been considerable debate about whether lncRNAs have been misannotated and do in fact encode proteins. Several lncRNAs have been found to encode peptides with biologically significant functions. Ribosome profiling studies have suggested that anywhere from 40% to 90% of annotated lncRNAs are in fact translated, although there is disagreement about the correct method for analyzing ribosome profiling data. Additionally, it is thought that many of the peptides produced by lncRNAs may be highly unstable and without biological function.
Conservation
Unlike protein-coding genes, the sequences of long non-coding RNAs show a lower level of conservation. Initial studies into lncRNA conservation noted that as a class, they were enriched for conserved sequence elements, depleted in substitution and insertion/deletion rates and depleted in rare frequency variants, indicative of purifying selection maintaining lncRNA function. However, further investigations into vertebrate lncRNAs revealed that while lncRNAs are conserved in sequence, they are not conserved in transcription. In other words, even when the sequence of a human lncRNA is conserved in another vertebrate species, there is often no transcription of a lncRNA in the orthologous genomic region. Some argue that these observations suggest non-functionality of the majority of lncRNAs, while others argue that they may be indicative of rapid species-specific adaptive selection.
While the turnover of lncRNA transcription is much higher than initially expected, it is important to note that still, hundreds of lncRNAs are conserved at the sequence level. There have been several attempts to delineate the different categories of selection signatures seen amongst lncRNAs including: lncRNAs with strong sequence conservation across the entire length of the gene, lncRNAs in which only a portion of the transcript (e.g. 5′ end, splice sites) is conserved, and lncRNAs that are transcribed from syntenic regions of the genome but have no recognizable sequence similarity. Additionally, there have been attempts to identify conserved secondary structures in lncRNAs, though these studies have currently given way to conflicting results.
Functions
Some groups have claimed that the majority of long noncoding RNAs in mammals are likely to be functional, but other groups have claimed the opposite. This is an active area of research.
Some lncRNAs have been functionally annotated in LncRNAdb (a database of literature described lncRNAs), with the majority of these being described in humans. Over 2600 human lncRNAs with experimental evidences have been community-curated in LncRNAWiki (a wiki-based, publicly editable and open-content platform for community curation of human lncRNAs). According to the curation of functional mechanisms of lncRNAs based on the literatures, lncRNAs are extensively reported to be involved in ceRNA regulation, transcriptional regulation, and epigenetic regulation. A further large-scale sequencing study provides evidence that many transcripts thought to be lncRNAs may, in fact, be translated into proteins.
In the regulation of gene transcription
In gene-specific transcription
In eukaryotes, RNA transcription is a tightly regulated process. Noncoding RNAs act upon different aspects of this process, targeting transcriptional modulators, RNA polymerase (RNAP) II and even the DNA duplex to regulate gene expression.
NcRNAs modulate transcription by several mechanisms, including functioning themselves as co-regulators, modifying transcription factor activity, or regulating the association and activity of co-regulators. For example, the noncoding RNA Evf-2 functions as a co-activator for the homeobox transcription factor Dlx2, which plays important roles in forebrain development and neurogenesis. Sonic hedgehog induces transcription of Evf-2 from an ultra-conserved element located between the Dlx5 and Dlx6 genes during forebrain development. Evf-2 then recruits the Dlx2 transcription factor to the same ultra-conserved element whereby Dlx2 subsequently induces expression of Dlx5. The existence of other similar ultra- or highly conserved elements within the mammalian genome that are both transcribed and fulfill enhancer functions suggest Evf-2 may be illustrative of a generalised mechanism that regulates developmental genes with complex expression patterns during vertebrate growth. Indeed, the transcription and expression of similar non-coding ultraconserved elements was shown to be abnormal in human leukaemia and to contribute to apoptosis in colon cancer cells, suggesting their involvement in tumorigenesis in like fashion to protein-coding RNA.
Local ncRNAs can also recruit transcriptional programmes to regulate adjacent protein-coding gene expression.
The RNA binding protein TLS binds and inhibits the CREB binding protein and p300 histone acetyltransferase activities on a repressed gene target, cyclin D1. The recruitment of TLS to the promoter of cyclin D1 is directed by long ncRNAs expressed at low levels and tethered to 5' regulatory regions in response to DNA damage signals. Moreover, these local ncRNAs act cooperatively as ligands to modulate the activities of TLS. In the broad sense, this mechanism allows the cell to harness RNA-binding proteins, which make up one of the largest classes within the mammalian proteome, and integrate their function in transcriptional programs. Nascent long ncRNAs have been shown to increase the activity of CREB binding protein, which in turn increases the transcription of that ncRNA. A study found that a lncRNA in the antisense direction of the Apolipoprotein A1 (APOA1) regulates the transcription of APOA1 through epigenetic modifications.
Recent evidence has raised the possibility that transcription of genes that escape from X-inactivation might be mediated by expression of long non-coding RNA within the escaping chromosomal domains.
Regulating basal transcription machinery
NcRNAs also target general transcription factors required for the RNAP II transcription of all genes. These general factors include components of the initiation complex that assemble on promoters or are involved in transcription elongation. A ncRNA transcribed from an upstream minor promoter of the dihydrofolate reductase (DHFR) gene forms a stable RNA-DNA triplex within the major promoter of DHFR to prevent the binding of the transcriptional co-factor TFIIB. This novel mechanism of regulating gene expression may represent a widespread method of controlling promoter usage, as thousands of RNA-DNA triplexes exist in eukaryotic chromosomes. The U1 ncRNA can induce transcription by binding to and stimulating TFIIH to phosphorylate the C-terminal domain of RNAP II. In contrast, the ncRNA 7SK is able to repress transcription elongation by, in combination with HEXIM1/2, forming an inactive complex that prevents PTEFb from phosphorylating the C-terminal domain of RNAP II, repressing global elongation under stressful conditions. These examples, which bypass specific modes of regulation at individual promoters, provide a means of quickly effecting global changes in gene expression.
The ability to quickly mediate global changes is also apparent in the rapid expression of non-coding repetitive sequences. The short interspersed nuclear (SINE) Alu elements in humans and analogous B1 and B2 elements in mice have succeeded in becoming the most abundant mobile elements within the genomes, comprising ~10% of the human and ~6% of the mouse genome, respectively. These elements are transcribed as ncRNAs by RNAP III in response to environmental stresses such as heat shock, where they then bind to RNAP II with high affinity and prevent the formation of active pre-initiation complexes. This allows for the broad and rapid repression of gene expression in response to stress.
A dissection of the functional sequences within Alu RNA transcripts has drafted a modular structure analogous to the organization of domains in protein transcription factors. The Alu RNA contains two 'arms', each of which may bind one RNAP II molecule, as well as two regulatory domains that are responsible for RNAP II transcriptional repression in vitro. These two loosely structured domains may even be concatenated to other ncRNAs such as B1 elements to impart their repressive role. The abundance and distribution of Alu elements and similar repetitive elements throughout the mammalian genome may be partly due to these functional domains being co-opted into other long ncRNAs during evolution, with the presence of functional repeat sequence domains being a common characteristic of several known long ncRNAs including Kcnq1ot1, Xlsirt and Xist.
In addition to heat shock, the expression of SINE elements (including Alu, B1, and B2 RNAs) increases during cellular stress such as viral infection in some cancer cells where they may similarly regulate global changes to gene expression. The ability of Alu and B2 RNA to bind directly to RNAP II provides a broad mechanism to repress transcription. Nevertheless, there are specific exceptions to this global response where Alu or B2 RNAs are not found at activated promoters of genes undergoing induction, such as the heat shock genes. This additional hierarchy of regulation that exempts individual genes from the generalised repression also involves a long ncRNA, heat shock RNA-1 (HSR-1). It was argued that HSR-1 is present in mammalian cells in an inactive state, but upon stress is activated to induce the expression of heat shock genes. This activation involves a conformational alteration of HSR-1 in response to rising temperatures, permitting its interaction with the transcriptional activator HSF-1, which trimerizes and induces the expression of heat shock genes. In the broad sense, these examples illustrate a regulatory circuit nested within ncRNAs whereby Alu or B2 RNAs repress general gene expression, while other ncRNAs activate the expression of specific genes.
Transcribed by RNA polymerase III
Many of the ncRNAs that interact with general transcription factors or RNAP II itself (including 7SK, Alu and B1 and B2 RNAs) are transcribed by RNAP III, uncoupling their expression from RNAP II, which they regulate. RNAP III also transcribes other ncRNAs, such as BC2, BC200 and some microRNAs and snoRNAs, in addition to housekeeping ncRNA genes such as tRNAs, 5S rRNAs and snRNAs. The existence of an RNAP III-dependent ncRNA transcriptome that regulates its RNAP II-dependent counterpart is supported by the finding of a set of ncRNAs transcribed by RNAP III with sequence homology to protein-coding genes. This prompted the authors to posit a 'cogene/gene' functional regulatory network, showing that one of these ncRNAs, 21A, regulates the expression of its antisense partner gene, CENP-F in trans.
In post-transcriptional regulation
In addition to regulating transcription, ncRNAs also control various aspects of post-transcriptional mRNA processing. Similar to small regulatory RNAs such as microRNAs and snoRNAs, these functions often involve complementary base pairing with the target mRNA. The formation of RNA duplexes between complementary ncRNA and mRNA may mask key elements within the mRNA required to bind trans-acting factors, potentially affecting any step in post-transcriptional gene expression including pre-mRNA processing and splicing, transport, translation, and degradation.
In splicing
The splicing of mRNA can induce its translation and functionally diversify the repertoire of proteins it encodes. The Zeb2 mRNA requires the retention of a 5'UTR intron that contains an internal ribosome entry site for efficient translation. The retention of the intron depends on the expression of an antisense transcript that complements the intronic 5' splice site. Therefore, the ectopic expression of the antisense transcript represses splicing and induces translation of the Zeb2 mRNA during mesenchymal development. Likewise, the expression of an overlapping antisense Rev-ErbAa2 transcript controls the alternative splicing of the thyroid hormone receptor ErbAa2 mRNA to form two antagonistic isoforms.
In translation
NcRNA may also apply additional regulatory pressures during translation, a property particularly exploited in neurons where the dendritic or axonal translation of mRNA in response to synaptic activity contributes to changes in synaptic plasticity and the remodelling of neuronal networks. The RNAP III-transcribed BC1 and BC200 ncRNAs, which previously derived from tRNAs, are expressed in the mouse and human central nervous system, respectively. BC1 expression is induced in response to synaptic activity and synaptogenesis and is specifically targeted to dendrites in neurons. Sequence complementarity between BC1 and regions of various neuron-specific mRNAs also suggests a role for BC1 in targeted translational repression. Indeed, it was recently shown that BC1 is associated with translational repression in dendrites to control the efficiency of dopamine D2 receptor-mediated transmission in the striatum, and BC1 RNA-deleted mice exhibit behavioural changes with reduced exploration and increased anxiety.
In siRNA-directed gene regulation
In addition to masking key elements within single-stranded RNA, the formation of double-stranded RNA duplexes can also provide a substrate for the generation of endogenous siRNAs (endo-siRNAs) in Drosophila and mouse oocytes. The annealing of complementary sequences, such as antisense or repetitive regions between transcripts, forms an RNA duplex that may be processed by Dicer-2 into endo-siRNAs. Also, long ncRNAs that form extended intramolecular hairpins may be processed into siRNAs, compellingly illustrated by the esi-1 and esi-2 transcripts. Endo-siRNAs generated from these transcripts seem particularly useful in suppressing the spread of mobile transposon elements within the genome in the germline. However, the generation of endo-siRNAs from antisense transcripts or pseudogenes may also silence the expression of their functional counterparts via RISC effector complexes, acting as an important node that integrates various modes of long and short RNA regulation, as exemplified by the Xist and Tsix (see above).
In epigenetic regulation
Epigenetic modifications, including histone and DNA methylation, histone acetylation and sumoylation, affect many aspects of chromosomal biology, primarily including regulation of large numbers of genes by remodeling broad chromatin domains. While it has been known for some time that RNA is an integral component of chromatin, it is only recently that we are beginning to appreciate the means by which RNA is involved in pathways of chromatin modification. For example, Oplr16 epigenetically induces the activation of stem cell core factors by coordinating intrachromosomal looping and recruitment of DNA demethylase TET2.
In Drosophila, long ncRNAs induce the expression of the homeotic gene, Ubx, by recruiting and directing the chromatin modifying functions of the trithorax protein Ash1 to Hox regulatory elements. Similar models have been proposed in mammals, where strong epigenetic mechanisms are thought to underlie the embryonic expression profiles of the Hox genes that persist throughout human development. Indeed, the human Hox genes are associated with hundreds of ncRNAs that are sequentially expressed along both the spatial and temporal axes of human development and define chromatin domains of differential histone methylation and RNA polymerase accessibility. One ncRNA, termed HOTAIR, that originates from the HOXC locus represses transcription across 40 kb of the HOXD locus by altering chromatin trimethylation state. HOTAIR is thought to achieve this by directing the action of Polycomb chromatin remodeling complexes in trans to govern the cells' epigenetic state and subsequent gene expression. Components of the Polycomb complex, including Suz12, EZH2 and EED, contain RNA binding domains that may potentially bind HOTAIR and probably other similar ncRNAs. This example nicely illustrates a broader theme whereby ncRNAs recruit the function of a generic suite of chromatin modifying proteins to specific genomic loci, underscoring the complexity of recently published genomic maps. Indeed, the prevalence of long ncRNAs associated with protein coding genes may contribute to localised patterns of chromatin modifications that regulate gene expression during development. For example, the majority of protein-coding genes have antisense partners, including many tumour suppressor genes that are frequently silenced by epigenetic mechanisms in cancer. A recent study observed an inverse expression profile of the p15 gene and an antisense ncRNA in leukaemia. A detailed analysis showed the p15 antisense ncRNA (CDKN2BAS) was able to induce changes to heterochromatin and DNA methylation status of p15 by an unknown mechanism, thereby regulating p15 expression. Therefore, misexpression of the associated antisense ncRNAs may subsequently silence the tumour suppressor gene contributing towards cancer.
Imprinting
Many emergent themes of ncRNA-directed chromatin modification were first apparent within the phenomenon of imprinting, whereby only one allele of a gene is expressed from either the maternal or the paternal chromosome. In general, imprinted genes are clustered together on chromosomes, suggesting the imprinting mechanism acts upon local chromosome domains rather than individual genes. These clusters are also often associated with long ncRNAs whose expression is correlated with the repression of the linked protein-coding gene on the same allele. Indeed, detailed analysis has revealed a crucial role for the ncRNAs Kcnqot1 and Igf2r/Air in directing imprinting.
Almost all the genes at the Kcnq1 loci are maternally inherited, except the paternally expressed antisense ncRNA Kcnqot1. Transgenic mice with truncated Kcnq1ot fail to silence the adjacent genes, suggesting that Kcnqot1 is crucial to the imprinting of genes on the paternal chromosome. It appears that Kcnqot1 is able to direct the trimethylation of lysine 9 (H3K9me3) and 27 of histone 3 (H3K27me3) to an imprinting centre that overlaps the Kcnqot1 promoter and actually resides within a Kcnq1 sense exon. Similar to HOTAIR (see above), Eed-Ezh2 Polycomb complexes are recruited to the Kcnq1 loci paternal chromosome, possibly by Kcnqot1, where they may mediate gene silencing through repressive histone methylation. A differentially methylated imprinting centre also overlaps the promoter of a long antisense ncRNA Air that is responsible for the silencing of neighbouring genes at the Igf2r locus on the paternal chromosome. The presence of allele-specific histone methylation at the Igf2r locus suggests Air also mediates silencing via chromatin modification.
Xist and X-chromosome inactivation
The inactivation of a X-chromosome in female placental mammals is directed by one of the earliest and best characterized long ncRNAs, Xist. The expression of Xist from the future inactive X-chromosome, and its subsequent coating of the inactive X-chromosome, occurs during early embryonic stem cell differentiation. Xist expression is followed by irreversible layers of chromatin modifications that include the loss of the histone (H3K9) acetylation and H3K4 methylation that are associated with active chromatin, and the induction of repressive chromatin modifications including H4 hypoacetylation, H3K27 trimethylation, H3K9 hypermethylation and H4K20 monomethylation as well as H2AK119 monoubiquitylation. These modifications coincide with the transcriptional silencing of the X-linked genes. Xist RNA also localises the histone variant macroH2A to the inactive X–chromosome. There are additional ncRNAs that are also present at the Xist loci, including an antisense transcript Tsix, which is expressed from the future active chromosome and able to repress Xist expression by the generation of endogenous siRNA. Together these ncRNAs ensure that only one X-chromosome is active in female mammals.
Telomeric non-coding RNAs
Telomeres form the terminal region of mammalian chromosomes and are essential for stability and aging and play central roles in diseases such as cancer. Telomeres have been long considered transcriptionally inert DNA-protein complexes until it was shown in the late 2000s that telomeric repeats may be transcribed as telomeric RNAs (TelRNAs) or telomeric repeat-containing RNAs. These ncRNAs are heterogeneous in length, transcribed from several sub-telomeric loci and physically localise to telomeres. Their association with chromatin, which suggests an involvement in regulating telomere specific heterochromatin modifications, is repressed by SMG proteins that protect chromosome ends from telomere loss. In addition, TelRNAs block telomerase activity in vitro and may therefore regulate telomerase activity. Although early, these studies suggest an involvement for telomeric ncRNAs in various aspects of telomere biology.
In regulation of DNA replication timing and chromosome stability
Asynchronously replicating autosomal RNAs (ASARs) are very long (~200kb) non-coding RNAs that are non-spliced, non-polyadenylated, and are required for normal DNA replication timing and chromosome stability. Deletion of any one of the genetic loci containing ASAR6, ASAR15, or ASAR6-141 results in the same phenotype of delayed replication timing and delayed mitotic condensation (DRT/DMC) of the entire chromosome. DRT/DMC results in chromosomal segregation errors that lead to increased frequency of secondary rearrangements and an unstable chromosome. Similar to Xist, ASARs show random monoallelic expression and exist in asynchronous DNA replication domains. Although the mechanism of ASAR function is still under investigation, it is hypothesized that they work via similar mechanisms as the Xist lncRNA, but on smaller autosomal domains resulting in allele specific changes in gene expression.
Incorrect repair of DNA double-strand breaks (DSBs) leading to chromosomal rearrangements is one of the primary causes of oncogenesis. A number of lncRNAs are crucial at the different stages of the main pathways of DSB repair in eukaryotic cells: nonhomologous end joining (NHEJ) and homology-directed repair (HDR). Gene mutations or variation in expression levels of such RNAs can lead to local DNA repair defects, increasing the chromosome aberration frequency. Moreover, it was demonstrated that some RNAs could stimulate long-range chromosomal rearrangements.
In aging and disease
The discovery that long ncRNAs function in various aspects of cell biology has led to research on their role in disease. Tens of thousands of lncRNAs are potentially associated with diseases based on the multi-omics evidence. A handful of studies have implicated long ncRNAs in a variety of disease states and support an involvement and co-operation in neurological disease and cancer.
The first published report of an alteration in lncRNA abundance in aging and human neurological disease was provided by Lukiw et al. in a study using short post-mortem interval tissues from patients with Alzheimer's disease and non-Alzheimer's dementia (NAD); this early work was based on the prior identification of a primate brain-specific cytoplasmic transcript of the Alu repeat family by Watson and Sutcliffe in 1987 known as BC200 (brain, cytoplasmic, 200 nucleotide).
While many association studies have identified unusual expression of long ncRNAs in disease states, there is little understanding of their role in causing disease. Expression analyses that compare tumor cells and normal cells have revealed changes in the expression of ncRNAs in several forms of cancer. For example, in prostate tumours, PCGEM1 (one of two overexpressed ncRNAs) is correlated with increased proliferation and colony formation suggesting an involvement in regulating cell growth. PRNCR1 was found to promote tumor growth in several malignancies like prostate cancer, breast cancer, non-small cell lung cancer, oral squamous cell carcinoma and colorectal cancer. MALAT1 (also known as NEAT2) was originally identified as an abundantly expressed ncRNA that is upregulated during metastasis of early-stage non-small cell lung cancer and its overexpression is an early prognostic marker for poor patient survival rates. LncRNAs such as HEAT2 or KCNQ1OT1 have been shown to be regulated in the blood of patients with cardiovascular diseases such as heart failure or coronary artery disease and, moreover, to predict cardiovascular disease events. More recently, the highly conserved mouse homologue of MALAT1 was found to be highly expressed in hepatocellular carcinoma. Intronic antisense ncRNAs with expression correlated to the degree of tumor differentiation in prostate cancer samples have also been reported. Despite a number of long ncRNAs having aberrant expression in cancer, their function and potential role in tumourigenesis is relatively unknown. For example, the ncRNAs HIS-1 and BIC have been implicated in cancer development and growth control, but their function in normal cells is unknown. In addition to cancer, ncRNAs also exhibit aberrant expression in other disease states. Overexpression of PRINS is associated with psoriasis susceptibility, with PRINS expression being elevated in the uninvolved epidermis of psoriatic patients compared with both psoriatic lesions and healthy epidermis.
Genome-wide profiling revealed that many transcribed non-coding ultraconserved regions exhibit distinct profiles in various human cancer states. An analysis of chronic lymphocytic leukaemia, colorectal carcinoma and hepatocellular carcinoma found that all three cancers exhibited aberrant expression profiles for ultraconserved ncRNAs relative to normal cells. Further analysis of one ultraconserved ncRNA suggested it behaved like an oncogene by mitigating apoptosis and subsequently expanding the number of malignant cells in colorectal cancers. Many of these transcribed ultraconserved sites that exhibit distinct signatures in cancer are found at fragile sites and genomic regions associated with cancer. It seems likely that the aberrant expression of these ultraconserved ncRNAs within malignant processes results from important functions they fulfil in normal human development.
Recently, a number of association studies examining single nucleotide polymorphisms (SNPs) associated with disease states have been mapped to long ncRNAs. For example, SNPs that identified a susceptibility locus for myocardial infarction mapped to a long ncRNA, MIAT (myocardial infarction associated transcript). Likewise, genome-wide association studies identified a region associated with coronary artery disease that encompassed a long ncRNA, ANRIL. ANRIL is expressed in tissues and cell types affected by atherosclerosis and its altered expression is associated with a high-risk haplotype for coronary artery disease. Lately there has been increasing evidence on the role of non-coding RNAs in the development and in the categorization of heart failure.
The complexity of the transcriptome, and our evolving understanding of its structure may inform a reinterpretation of the functional basis for many natural polymorphisms associated with disease states. Many SNPs associated with certain disease conditions are found within non-coding regions and the complex networks of non-coding transcription within these regions make it particularly difficult to elucidate the functional effects of polymorphisms. For example, a SNP both within the truncated form of ZFAT and the promoter of an antisense transcript increases the expression of ZFAT not through increasing the mRNA stability, but rather by repressing the expression of the antisense transcript.
The ability of long ncRNAs to regulate associated protein-coding genes may contribute to disease if misexpression of a long ncRNA deregulates a protein coding gene with clinical significance. In a similar manner, an antisense long ncRNA that regulates the expression of the sense BACE1 gene, a crucial enzyme in Alzheimer's disease etiology, exhibits elevated expression in several regions of the brain in individuals with Alzheimer's disease. Alteration of the expression of ncRNAs may also mediate changes at an epigenetic level to affect gene expression and contribute to disease aetiology. For example, the induction of an antisense transcript by a genetic mutation led to DNA methylation and silencing of sense genes, causing β-thalassemia in a patient.
Alongside their role in mediating pathological processes, long noncoding RNAs play a role in the immune response to vaccination, as identified for both the influenza vaccine and the yellow fever vaccine.
Structure
It took over two decades after the discovery of the first human long non-coding transcripts for the functional significance of lncRNA structures to be fully recognized. Early structural studies led to the proposal of several hypotheses for classifying lncRNA architectures. One hypothesis suggests that lncRNAs may feature a compact tertiary structure, similar to ribozymes like the ribosome or self-splicing introns. Another possibility is that lncRNAs could have structured protein-binding sites arranged in a decentralized scaffold, lacking a compact core. A third hypothesis posits that lncRNAs might exhibit a largely unstructured architecture, with loosely organized protein-binding domains interspersed with long regions of disordered single-stranded RNA.
Studying the tertiary structure of lncRNAs by conventional methods such as X-ray crystallography, cryo-EM and nuclear magnetic resonance (NMR) is unfortunately still hampered by their size and conformational dynamics, and by the fact that for now we still know too little about their mechanism to reconstruct stable and functionally-active lncRNA-ribonucleoprotein complexes. However, some pioneering studies showed that lncRNAs can already be studied by low-resolution single-particle and in-solution methods, such as atomic force microscopy (AFM) and small-angle X-ray scattering (SAXS), in some cases even in complexes with small molecule modulators.
For instance, lncRNA MEG3 was shown to regulate transcription factor p53 thanks to its compact structured core. Moreover, lncRNA Braveheart (Bvht) was shown to have a well-defined, albeit flexible 3D structure that is remodeled upon binding CNBP (Cellular Nucleic-acid Binding Protein), which recognizes distal domains in the RNA. Finally, Xist, a master regulator of X chromosome inactivation, was shown to specifically bind a small molecule compound, which alters the conformation of the Xist RepA motif and displaces two known interacting protein factors (PRC2 and SPEN) from the RNA. By such a mechanism of action, the compound abrogates the initiation of X-chromosome inactivation.
See also
List of long non-coding RNA databases
NONCODE
Pinc
Sphinx (gene)
VIS1
ZNRD1-AS1
References
RNA
Non-coding RNA
Biotechnology | Long non-coding RNA | Biology | 7,769 |
50,492,220 | https://en.wikipedia.org/wiki/C10orf35 | Chromosome 10 open reading frame 35 (c10orf35) is a gene that, in humans, encodes a protein-binding, transmembrane protein. The protein contains the domain of unknown function 4605 (DUF4605), which belongs to the protein family pfam15378. This gene is located at locus 10q22.1.
General characteristics
The physical properties of the c10orf35 protein were analyzed: the molecular weight was predicted to be 13.2 kDa and the isoelectric point to be 11.5. The properties of DUF4605 were also analyzed; its molecular weight was predicted to be 6.0 kDa and its isoelectric point to be 6.9, which is significantly less basic than that of the full protein.
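Predictions of this kind can be reproduced with standard tools. The following sketch uses Biopython's ProtParam module on a short placeholder sequence (not the real c10orf35 sequence) to compute a molecular weight and isoelectric point:

```python
# Sketch of how molecular weight and isoelectric point predictions like
# those quoted above can be obtained with Biopython's ProtParam module.
# The sequence below is a short placeholder, NOT the real c10orf35 protein.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = "MKRRSSRLAGKTAVLLPATQRSGH"  # hypothetical example sequence
analysis = ProteinAnalysis(placeholder_seq)

print(f"Molecular weight: {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"Isoelectric point: {analysis.isoelectric_point():.1f}")
```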
Properties and structure
This protein includes a highly conserved region between amino acids 1 and 19 discovered by MSA. DUF 4605 is located between amino acids 62 and 92.
The secondary structure of the c10orf35 protein was predicted to consist of a β-sheet between amino acids 3 and 5, an α-helix between 61 and 72, and a transmembrane α-helix between 92 and 112.
The serine in position 62 can be phosphorylated to form a phosphoserine group.
Homology
Paralogs
The protein of interest has a similar, paralogous domain found in c4orf32, which is conserved in mammals, birds, very few reptiles, and one amphibian.
Orthologs
The protein coded for by c10orf35 has many orthologs found mainly in mammals, reptiles, few amphibians, one fish, and one invertebrate. Based on a multiple sequence alignment between representative orthologs, DUF 4605 and a second, highly conserved region were confirmed.
Clinical significance
Expression
The c10orf35 protein has expression over the 75th percentile in the brain, spinal cord, and male reproductive organs. Within the male reproductive system, the RNA is found within the testis and prostate, while the protein is expressed in the epididymis. In the brain, the RNA is found in the cerebral cortex, while the protein is expressed ubiquitously throughout the brain.
Transcription regulation
Protein interactions
The c10orf35 protein has been experimentally shown by Affinity Capture-MS to interact with 7 other proteins as shown below.
References
Uncharacterized proteins | C10orf35 | Biology | 503 |
48,429,148 | https://en.wikipedia.org/wiki/Tricholoma%20primulibrunneum | Tricholoma primulibrunneum is an agaric fungus of the genus Tricholoma. Found in Sabah, Malaysia, where it grows on humus in Agathis forest, it was described as new to science in 1994 by English mycologist E.J.H. Corner.
See also
List of Tricholoma species
References
primulibrunneum
Fungi described in 1994
Fungi of Asia
Taxa named by E. J. H. Corner
Fungus species | Tricholoma primulibrunneum | Biology | 101 |
73,216,226 | https://en.wikipedia.org/wiki/Metallaborane | In chemistry, a metalloborane is a compound that contains one or more metal atoms and one or more boron hydride units. These compounds are related conceptually and often synthetically to the boron-hydride clusters by replacement of BHn units with metal-containing fragments. Often these metal fragments are derived from metal carbonyls or cyclopentadienyl complexes. Their structures can often be rationalized by polyhedral skeletal electron pair theory. The inventory of these compounds is large, and their structures can be quite complex.
Examples
Two simple examples are a pair of clusters whose MB4 cores (M = Fe or Co) adopt structures expected for nido 5-vertex clusters. The iron compound is produced by reaction of diiron nonacarbonyl with pentaborane; it and cyclobutadieneiron tricarbonyl have similar structures.
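The vertex-versus-electron-pair bookkeeping behind such assignments can be sketched as follows. The function below is a simplified illustration of Wade–Mingos counting; deriving the skeletal electron pairs contributed by a given metal fragment (which requires isolobal arguments) is not shown:

```python
# Simplified sketch of the vertex-versus-skeletal-electron-pair bookkeeping
# used in polyhedral skeletal electron pair (Wade-Mingos) theory. Real
# counts for metal fragments require isolobal arguments not shown here.
def cluster_class(vertices: int, skeletal_electron_pairs: int) -> str:
    """Classify a cluster from its vertex count and skeletal electron pairs."""
    diff = skeletal_electron_pairs - vertices
    return {1: "closo", 2: "nido", 3: "arachno", 4: "hypho"}.get(
        diff, "outside the simple Wade rules"
    )

# A 5-vertex cage with 7 skeletal electron pairs (n + 2) is predicted to be
# nido, consistent with the nido geometry described for the MB4 cores above.
print(cluster_class(5, 7))  # nido
```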
Metallacarboranes
Even greater in scope than metalloboranes are metallacarboranes. These cages have carbon vertices, often CH, in addition to BH and M vertices. A well-developed class of metallacarboranes is prepared from dicarbollides, anions of the formula [C2B9H11]2-. These anions function as ligands for a variety of metals, often forming sandwich complexes.
Some metalloboranes are derived by the metalation of neutral carboranes. Illustrative are the six- and seven-vertex cages prepared from closo-. Reaction of this carborane with iron carbonyl sources gives closo Fe- and Fe2-containing products, according to these idealized equations:
A further example of insertion into a closo carborane is the synthesis of the yellow-orange solid closo-1,2,3-:
A closely related reaction involves the capping of an anionic nido carborane
The last reaction is worked up with acid and air.
References
Cluster chemistry | Metallaborane | Chemistry | 408 |
30,365,744 | https://en.wikipedia.org/wiki/HD%2025171 | HD 25171 is a star with an orbiting exoplanet in the southern constellation of Reticulum, the reticle. With an apparent visual magnitude of 7.79, this star is too faint to be viewed with the naked eye. However, it is readily visible through a small telescope from the southern hemisphere. Parallax measurements place it at a distance of roughly from Earth. It is drifting further away with a heliocentric radial velocity of +43 km/s.
Based upon its spectrum, this is an ordinary F-type main sequence star with a stellar classification of F8 V. It is slightly larger than the Sun, with 9% more mass and a 7% greater radius. As such, it is radiating 189% of the Sun's luminosity from its outer atmosphere at an effective temperature of 6,063 K. This gives it the yellow-white hued glow of an F-type star. It appears to be roughly the same age as the Sun, around four billion years.
A survey in 2015 ruled out the existence of any stellar companions at projected distances above 26 astronomical units.
Planetary system
The planetary companion was discovered in 2010 with the HARPS instrument, which measured the radial velocity displacement caused by the gravitational perturbation of the star by the planet. This data provided an orbital period of 1,845 days and set a lower bound on the planet's mass at 95% of the mass of Jupiter. The planetary system of HD 25171 is analogous to the Solar System in the sense that a gas giant orbits outside the frost line, far enough out that it does not destabilize orbits within the circumstellar habitable zone.
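As a rough back-of-the-envelope estimate (not taken from the source), Kepler's third law applied to the quoted 1,845-day period and the star's approximately 1.09 solar masses gives the planet's orbital distance:

```python
# Rough estimate (not from the source) of the planet's orbital distance,
# applying Kepler's third law to the quoted 1,845-day period and the
# star's ~1.09 solar masses; the planet's own mass is neglected.
period_years = 1845 / 365.25        # orbital period in years
stellar_mass_solar = 1.09           # from the stellar parameters above

# a^3 = P^2 * M  (a in AU, P in years, M in solar masses)
semi_major_axis_au = (period_years ** 2 * stellar_mass_solar) ** (1 / 3)
print(f"~{semi_major_axis_au:.1f} AU")  # roughly 3 AU, a wide orbit
```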
References
F-type main-sequence stars
Planetary systems with one confirmed planet
Reticulum
Durchmusterung objects
025171
009141 | HD 25171 | Astronomy | 375 |
78,326,909 | https://en.wikipedia.org/wiki/Africa%20Prize%20For%20Engineering%20Innovation | The Africa Prize For Engineering Innovation is an award for excellence in engineering in Sub-Saharan Africa. Eight months of training and support are set aside to help the contestants. The winner is awarded £25,000, with the second, third and fourth runners-up gaining £10,000 each.
History
The award was introduced in January 2014 by the Royal Academy of Engineering in the United Kingdom. Competitor engineers must be from Sub-Saharan Africa.
In 2024 Esther Kimani was the winner of the prize and, because it was the award's tenth year, she was awarded £50,000. She was the second winner from Kenya. Kimani had developed a method of identifying diseases in crops using image analysis.
Benefits
Sixteen competitors are selected and given the support they need, during the competition and beyond, to deliver their projects. They receive training and the opportunity to expand their professional networks. The winner receives £25,000, and the second, third and fourth places are each awarded £10,000.
Award recipients
2015 Dr. Askwar Hilonga and team of Tanzania
2016 Arthur Zang of Cameroon
2017 Godwin Benson of Nigeria
2018 Brian Gitta and his team from Uganda
2019 Neo Hutiri from South Africa
2020 Charlette N'Guessan from Ghana
2021 Noël N'guessan from Ivory Coast
2022 Norah Magero from Kenya
2023 Anatoli Kirigwajjo from Uganda
2023 Edmund Wessels from South Africa
2024 Esther Kimani, Kenya
References
International awards
Awards established in 2014
Awards of the Royal Academy of Engineering
2014 establishments in the United Kingdom | Africa Prize For Engineering Innovation | Technology | 316 |
14,511,835 | https://en.wikipedia.org/wiki/Trap-lining | In ethology and behavioral ecology, trap-lining or traplining is a feeding strategy in which an individual visits food sources in a regular, repeatable sequence, much as trappers check their lines of traps. Traplining is usually seen in species foraging for floral resources. This involves a specified route that the individual traverses repeatedly in the same order to check specific plants for flowers that hold nectar, even over long distances. Trap-lining has been described in several taxa, including bees, butterflies, tamarins, bats, rats, hummingbirds, and tropical fruit-eating mammals such as opossums, capuchins and kinkajous. The term is also used for the method by which bumblebees and hummingbirds collect nectar and, consequently, pollinate each plant they visit. The term "traplining" was originally coined by Daniel Janzen, although the concept was discussed by Charles Darwin and Nikolaas Tinbergen.
Behavioral response
In the instance of hummingbirds and bumblebees, traplining is an evolutionary response to the allocation of resources between species. Specifically, individual hummingbirds form their own specific routes in order to minimize competition and maximize nutrient availability. Some hummingbird species are territorial (e.g. the rufous hummingbird, Selasphorus rufus) and defend a specific territory, while others are trapliners (e.g. the long-billed hermit, Phaethornis longirostris) and constantly check different locations for food. Because of this, territorial hummingbirds will be more robust, while traplining hummingbirds have adaptations such as longer wings for more efficient flying. Traplining hummingbirds will move from source to source, obtaining nectar from each. Over time, one hummingbird will be the primary visitor to a particular source.
In the case of bumblebees, when competitors are removed, there is an influx to the removal area and less time is spent traplining over long distances. This demonstrates the ability to behaviorally adapt based on surrounding competition. In addition, bumblebees use traplining to distinguish between high-nectar-producing flowers and low-nectar-producing flowers by consistently recognizing and visiting those that produce higher levels. Other types of bees, such as euglossine bees (e.g. Euglossa imperialis), use traplining to forage efficiently by flying rapidly from one precise flowering plant to the next in a set circuit, even ignoring newly blooming plants which are adjacent to, but outside of, their daily route. By doing so, these euglossine bees significantly reduce the amount of time and energy spent searching for nectar each day. In general, it is seen that traplining species have higher nutritional rewards than non-traplining species.
Energy conservation
Traplining hummingbirds are known to be active proportionally to nectar production in flowers, decreasing throughout the day. Therefore, traplining hummingbirds can spend less time foraging, and obtain their energy intake from a few number of flowers. Spending less time searching for food means less energy spent flying and searching. Traplining bumblebees prioritize their routes based on travel distance and reward quantity. It is seen that the total distance of the trapline is related to the abundance of the reward (nectar) in the environment.
Spatial cognition and memory
Traplining can also be an indication of the levels of spatial cognition of species that use the technique. For example, traplining in bumblebees is an indication that bumblebees have spatial reference memory, or spatial memory, that is used to create specific routes in short term foraging. The ability to remember specific routes long-term cuts down foraging and flying time, consequently conserving energy. This theory has been tested, showing that bumblebees can remember the shortest route to the reward, even when the original path has been changed or obstructed. Additionally, bees cut down the amount of time spent revisiting sites with little or no nutritive reward. Bees with access to only short-term memory forage inefficiently.
Advantages
One of the main advantages of traplining is that the route can be taught to other members of the population quickly or over a period of hours, leading all members to a reliable food source. When the group works together on finding a particular source of food they can quickly establish where it is and get the route information transferred to all the individuals in the population. This ensures that the entire community is able to quickly find and consume the nutrients that are needed.
Traplining helps foragers that are competing for resources that replenish in a decelerating way. For example, nectar in a plant is slowly replaced over time, while acorns only occur once a year. Traplining can help plant diversity and evolution by keeping pollen with different genetics flowing from plant to plant. It is mostly pollinators that use traplining as a way to ensure they always know where the food sources they are looking for are. This means that organisms like bumblebees and hummingbirds can transfer pollen anywhere from the starting point of the route to the final food source along the path. Since the path is always the same, it greatly reduces the risk of self-pollination (iterogamy) because the pollinator won't return to the same flower on that particular foraging session.
Overall, plant species that are visited by trapliners have increased fitness and evolutionary advantages. Because of this mutualistic relationship between traplining hummingbirds and plants, traplining hummingbirds have been referred to as "legitimate pollinators", while territorial hummingbirds have been referred to as "nectar thieves". If an organism that traplines learns where a food source is once, they can always return to that food source because they can remember minute details about the location of the source. This allows them to adapt quickly if one of the major sources suddenly becomes scarce or destroyed.
Disadvantages
Serious obstacles, such as the arrangement of plant life, can hamper traplining. If the route zigzags through the understory of the tropical rainforest, some of the organisms using the route can get lost because of very subtle changes, such as a treefall gap or heavy rainfall. This could cause an individual to be separated from the entire group if it is not able to find the path back to the original route. Some food sources can be overlooked because the traplining route in use does not lead the organisms to the area where these resources are.
Since the route is very specific, the organisms following it may also miss out on opportunities to come in contact with potential mates. Male bumblebees going directly to the source of food have been observed to pass up female bumblebees as potential mates along the same path, preferring to continue foraging and bring food back to the hive. This can take away from species diversification and could possibly remove some useful traits from the gene pool.
Research
Observing traplining in the natural world has proven to be very difficult, and little is known about how and why species trapline, but the study of traplining in the natural environment does take place. In one particular study, individual bees trained on five artificial flowers of equal reward were observed traplining between those five flowers. When a new flower of higher reward was included in the group, the bees subsequently adjusted their trapline to include the higher-reward flower. The researchers hypothesized that under natural conditions it would likely be beneficial for bees to prioritize higher-reward flowers, either to beat out competition or to conserve energy.
In other field experiments, ecologists created a "competition vacuum" to observe whether or not bumblebees adjusted their feeding routes in response to intense direct competition with other bumblebees. This study showed that bees in areas of higher competition are more productive than control bees. Bumblebees opportunistically adjust their use of traplining routes in response to the activity of other competing bees. Another effective way to study the behavior of traplining species is via computer simulation and indoor flight cage experiments. Simulation models can be made to show the linkage between pollinator movement and pollen flow. Such a model considers how service by pollinators with different foraging patterns would affect the flow of pollen.
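To make the simulation idea concrete, here is a minimal, editor-written sketch rather than a model from any of the studies mentioned above; the flower coordinates, the fixed visiting order, and the number of simulated bouts are illustrative assumptions. It only compares the travel cost of repeating one fixed circuit against visiting the same flowers in a random order each bout; a fuller model would also track pollen carryover between plants.

```python
import math
import random

# Hypothetical flower positions in metres; purely illustrative.
flowers = [(0, 0), (10, 2), (18, 9), (11, 17), (2, 12)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def bout_length(order):
    """Total distance flown in one foraging bout visiting flowers in the given order."""
    return sum(dist(flowers[order[i]], flowers[order[i + 1]]) for i in range(len(order) - 1))

# A trapliner repeats the same fixed circuit on every bout.
trapline = [0, 1, 2, 3, 4]

# A non-traplining forager visits the same flowers in a random order each bout.
random.seed(1)
random_bouts = [random.sample(range(len(flowers)), len(flowers)) for _ in range(1000)]
avg_random = sum(bout_length(b) for b in random_bouts) / len(random_bouts)

print(f"trapline bout length:        {bout_length(trapline):.1f} m")
print(f"average random-order length: {avg_random:.1f} m")
```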
Indoor flight cage experiments allow for easier discrimination between test subjects and easier observation of behavior and patterns. Bees in small study environments seem to demonstrate less traplining tendency than bees studied in environments that stretch over several hectares. A larger working area increases the need for traplining techniques to further conserve energy and maximize nutrient intake, and suggests that bees most often trapline strictly in response to travel distance. The bees remember these complex flight paths by breaking them into small segments using vectors, landmarks and other environmental factors, each one pointing to the next destination.
Despite a long history of research on bee learning and navigation, most knowledge has been deduced from the behavior of foragers traveling between their nest and a single feeding location. Only recently have studies of bumblebees foraging in arrays of artificial flowers fitted with automated tracking systems started to describe the learning mechanisms behind complex route formation between multiple locations. The demonstration that all these observations can be accurately replicated by a single learning heuristic model holds considerable promise to further investigate these questions and fill a major gap in cognitive ecology.
See also
Optimal foraging theory
References
Eating behaviors
Bird behavior | Trap-lining | Biology | 1,897 |
24,377,109 | https://en.wikipedia.org/wiki/C18H26O3 | {{DISPLAYTITLE:C18H26O3}}
The molecular formula C18H26O3 (molar mass: 290.40 g/mol) may refer to:
Inocoterone acetate, a nonsteroidal antiandrogen
Octyl methoxycinnamate, a sunscreen ingredient
Oxabolone, an anabolic steroid | C18H26O3 | Chemistry | 79 |
40,682,389 | https://en.wikipedia.org/wiki/List%20of%20airborne%20wind%20energy%20organizations | This is a list of airborne wind energy or kite-energy organizations that are advancing airborne wind energy systems (AWES).
In 2011 there were over 40 organizations involved worldwide; by 2017 this number had increased to over 60.
Categories of kite-energy or airborne-wind-energy organizations that are forming the nascent industry: education, academic, non-profit, for-profit, communication, research, original kite-energy equipment manufacturer, kite-line manufacturer, industry-wide association, history, testing, forum entity, library, cooperative, consortium, group, club, school, training school.
Generation by kite-energy systems may involve pumping, electricity generators flown in the upper flying system (flygen), electric generators situated on land or sea or on board a vessel (groundgen), simple lifting of objects (lifting), pulling hulls or other objects (traction), or transportation; some systems generate energy to perform specialized tasks. Systems may be scaled from tiny to utility size.
Organizations
References
Airborne wind power
Aviation-related lists
Kites
Lists of scientific organizations | List of airborne wind energy organizations | Engineering | 218 |
62,038,875 | https://en.wikipedia.org/wiki/Oidiodendron%20cereale | Oidiodendron cereale is a species of ascomycetes fungi in the order Helotiales. This fungus is found globally in temperate climates where average summer temperatures are below 25 °C, but there have been scattered reports from tropical and subtropical environments. It is predominantly found in soil, but little is known regarding their ecological roles in nature. However, an enzymatic study from Agriculture Canada showed that O. cereale can break down a variety of plant, fungal, and animal based substrates found in soil, which may have beneficial effects for plants. On rare occasions, this fungus is found on human skin and hair. There has been one reported case of O. cereale infection in 1969, causing Neurodermitis Nuchae.
History and taxonomy
The anamorphic fungus was first described in the Belgian journal Hedwigia by Dr. F. von Thümen as Sporotrichum cerealis in 1880. Then in 1932, a Swedish mycologist, Dr. H. Robak, identified Oidiodendron nigrum while investigating fungal infections at wood pulp mills. Further investigation of the genus Oidiodendron by Dr. G. L. Barron revealed that Sporotrichum cerealis and Oidiodendron nigrum were the same organism, and the species was thereafter named Oidiodendron cereale. In 1998, Hambleton et al., using ribosomal DNA sequences, confirmed that O. cereale belongs in the genus Oidiodendron and is related to the other Oidiodendron species.
Growth and morphology
Growth
This fungus grows hyphally and its asexual reproduction cycle has been well described in the literature. Asexual reproduction occurs through its lens-shaped arthroconidia with thickened rings of cell wall material. Young colonies appear grey and turn purple-black as they mature. The conidiophores develop by dividing their branches into sections of equal length; each section then rounds off and develops into a spore. At maturity, the spores fall away from each other and the old wall remains attached to the adjacent conidia as a fringe. Conidia are dispersed by wind and arthropods, adhering electrostatically to the carrier's exoskeleton. Conidiophores are produced in all species of Oidiodendron, but their production is not obligatory.
Morphology
Oidiodendron cereale colonies appear green-grey, but dark brown to black in areas of heavy sporulation. Due to its dark colony colour, it is generally classified as a dematiaceous fungus. Its hyphae are 1-2 μm broad, irregularly curved, and some branch at the foot (treelike) while others do not. The conidia are dark grey, short-chained, clumped at the conidiophore apex, bear a ring, and measure 2.2-5.4 μm by 2.0-2.7 μm. The conidiophores are short, branched, and hyaline to lightly melanized. It is important to note that the hyaline conidiophores and lens-shaped arthroconidia with thickened rings of cell wall material make this species unique; hence, the species was initially placed outside of the genus Oidiodendron. With molecular analysis, evidence supports its placement within Oidiodendron, and its morphological distinction is significant only at the species level.
Physiology
Oidiodendron cereale is psychrotolerant and has an optimal temperature between 20-25°C. However, it also has the ability to grow at temperatures as low as 5°C. Decreased growth is observed when the temperature is below 5°C or above 25°C. Oidiodendron cereale is acidophilic with an optimal pH range of 3-5, and it does not grow in high salt conditions. Enzymatic studies have revealed that O. cereale has cellulolytic abilities. In addition, it has pectinases, gelatinases, lipases, and polyphenol oxidases that facilitate the degradation of a variety of plant, fungal, and animal substrates.
Habitat and ecology
Oidiodendron cereale is found predominantly in soil, but it can also be found in wood and peat, and on human skin and hair. In addition, there has been an isolation of this fungus in human food supplies. Due to the physiology of this species, it prefers to live in temperate climates. However, there have been reports from tropical and subtropical locations of this fungus.
Although this fungus has been identified from a plethora of locations globally and different growing environments, little is understood about their ecological roles. An association study on the mycorrhizal status of this fungus has been inconclusive. Targeted isolation studies are required to determine the ecological role of O. cereale.
Human disease
There has only been one published case of infection caused by O. cereale. In 1969, a female clerk at the Skin Department of the Helsinki University Central Hospital reported itchiness on her neck. Near the nape of the neck, there was an archetypal presentation of neurodermitis nuchae, or more commonly known as atopic dermatitis. A sample was cultured from her neck and on all three occasions, O. cereale was present. After further investigation, this fungus was found in the mycoflora of old Finnish wooden saunas, where the patient had previously visited.
References
Onygenales
Fungi described in 1962
Fungus species | Oidiodendron cereale | Biology | 1,167 |
27,874,168 | https://en.wikipedia.org/wiki/Amanita%20virosiformis | Amanita virosiformis, commonly known as the narrow-spored destroying angel, is a poisonous basidiomycete fungus, one of many in the genus Amanita. Originally described from Florida, it is found from coastal North Carolina through to eastern Texas in the southeastern United States.
See also
List of Amanita species
List of deadly fungi
References
virosiformis
Deadly fungi
Poisonous fungi
Fungi of the United States
Fungi described in 1941
Fungi without expected TNC conservation status
Fungus species | Amanita virosiformis | Biology,Environmental_science | 102 |
382,708 | https://en.wikipedia.org/wiki/Cofiniteness | In mathematics, a cofinite subset of a set is a subset whose complement in is a finite set. In other words, contains all but finitely many elements of If the complement is not finite, but is countable, then one says the set is cocountable.
These arise naturally when generalizing structures on finite sets to infinite sets, particularly on infinite products, as in the product topology or direct sum.
This use of the prefix "co-" to describe a property possessed by a set's complement is consistent with its use in other terms such as "comeagre set".
Boolean algebras
The set of all subsets of X that are either finite or cofinite forms a Boolean algebra, which means that it is closed under the operations of union, intersection, and complementation. This Boolean algebra is the finite–cofinite algebra on X.
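As an informal illustration of this closure property (an editor-supplied sketch, not part of the article; the class name FinCofin and the chosen representation are arbitrary), a finite or cofinite subset of an infinite set X can be represented by the finite collection of elements it contains or excludes, and union, intersection, and complement then stay within the family:

```python
class FinCofin:
    """A subset of an infinite set X that is either finite or cofinite.

    kind == "finite":   the set equals `elems`
    kind == "cofinite": the set equals X minus `elems`
    """

    def __init__(self, kind, elems):
        self.kind, self.elems = kind, frozenset(elems)

    def complement(self):
        other = "cofinite" if self.kind == "finite" else "finite"
        return FinCofin(other, self.elems)

    def union(self, other):
        if self.kind == "finite" and other.kind == "finite":
            return FinCofin("finite", self.elems | other.elems)
        if self.kind == "cofinite" and other.kind == "cofinite":
            return FinCofin("cofinite", self.elems & other.elems)
        fin, cof = (self, other) if self.kind == "finite" else (other, self)
        # Points missing from the union are the excluded points not covered by the finite set.
        return FinCofin("cofinite", cof.elems - fin.elems)

    def intersection(self, other):
        # De Morgan: A intersect B = complement(complement(A) union complement(B))
        return self.complement().union(other.complement()).complement()


# Every result is again finite or cofinite, so the family is closed under the Boolean operations.
a = FinCofin("finite", {1, 2, 3})   # {1, 2, 3}
b = FinCofin("cofinite", {2, 4})    # X without 2 and 4
print(a.union(b).kind, sorted(a.union(b).elems))                # cofinite [4]
print(a.intersection(b).kind, sorted(a.intersection(b).elems))  # finite [1, 3]
```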
In the other direction, a Boolean algebra A has a unique non-principal ultrafilter (that is, a maximal filter not generated by a single element of the algebra) if and only if there exists an infinite set X such that A is isomorphic to the finite–cofinite algebra on X. In this case, the non-principal ultrafilter is the set of all cofinite subsets of X.
Cofinite topology
The cofinite topology or the finite complement topology is a topology that can be defined on every set X. It has precisely the empty set and all cofinite subsets of X as open sets. As a consequence, in the cofinite topology, the only closed subsets are finite sets, or the whole of X. For this reason, the cofinite topology is also known as the finite-closed topology. Symbolically, one writes the topology as T = { A ⊆ X : A = ∅ or X ∖ A is finite }.
This topology occurs naturally in the context of the Zariski topology. Since polynomials in one variable over a field K are zero on finite sets, or on the whole of K, the Zariski topology on K (considered as the affine line) is the cofinite topology. The same is true for any irreducible algebraic curve; it is not true, for example, for XY = 0 in the plane.
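For completeness, the standard argument behind the affine-line claim can be written out; the derivation below is an editor-supplied sketch consistent with the statement above, not quoted from the article's sources.

```latex
The Zariski-closed subsets of the affine line over a field $K$ are the sets
\[
  V(S) = \{\, x \in K : f(x) = 0 \ \text{for all } f \in S \,\}, \qquad S \subseteq K[t].
\]
If $S$ contains a nonzero polynomial $f$ of degree $n$, then $V(S) \subseteq V(f)$ and $|V(f)| \le n$,
so $V(S)$ is finite; otherwise $S \subseteq \{0\}$ and $V(S) = K$.
Conversely, every finite set is closed, since
\[
  \{a_1, \dots, a_k\} = V\bigl((t - a_1) \cdots (t - a_k)\bigr).
\]
Hence the closed sets are exactly the finite sets together with $K$; that is, the Zariski topology
on the affine line is the cofinite topology.
```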
Properties
Subspaces: Every subspace topology of the cofinite topology is also a cofinite topology.
Compactness: Since every open set contains all but finitely many points of X, the space X is compact and sequentially compact.
Separation: The cofinite topology is the coarsest topology satisfying the T1 axiom; that is, it is the smallest topology for which every singleton set is closed. In fact, an arbitrary topology on X satisfies the T1 axiom if and only if it contains the cofinite topology. If X is finite then the cofinite topology is simply the discrete topology. If X is not finite then this topology is not Hausdorff (T2), regular or normal because no two nonempty open sets are disjoint (that is, it is hyperconnected).
Double-pointed cofinite topology
The double-pointed cofinite topology is the cofinite topology with every point doubled; that is, it is the topological product of the cofinite topology with the indiscrete topology on a two-element set. It is not T0 or T1, since the points of each doublet are topologically indistinguishable. It is, however, R0 since topologically distinguishable points are separated. The space is compact as the product of two compact spaces; alternatively, it is compact because each nonempty open set contains all but finitely many points.
For an example of the countable double-pointed cofinite topology, the set of integers can be given a topology such that every even number 2n is topologically indistinguishable from the following odd number 2n + 1. The closed sets are the unions of finitely many pairs {2n, 2n + 1}, or the whole set. The open sets are the complements of the closed sets; namely, each open set consists of all but a finite number of pairs, or is the empty set.
Other examples
Product topology
The product topology on a product of topological spaces ∏ X_i has basis ∏ U_i, where each U_i ⊆ X_i is open and U_i = X_i for cofinitely many i.
The analog without requiring that cofinitely many factors are the whole space is the box topology.
Direct sum
The elements of the direct sum of modules ⊕ M_i are sequences (x_i) with x_i ∈ M_i, where x_i = 0 for cofinitely many i.
The analog without requiring that cofinitely many summands are zero is the direct product.
See also
References
Basic concepts in infinite set theory
General topology | Cofiniteness | Mathematics | 910 |
7,610,388 | https://en.wikipedia.org/wiki/Master%20of%20Quantitative%20Finance | A master's degree in quantitative finance is a postgraduate degree focused on the application of mathematical methods to the solution of problems in financial economics. There are several like-titled degrees which may further focus on financial engineering, computational finance, mathematical finance, and/or financial risk management.
In general, these degrees aim to prepare students for roles as "quants" (quantitative analysts), including analysis, structuring, investing and other related roles in the financial field.
Formal master's-level training in quantitative finance has existed since 1990.
Structure
The program is usually one to one and a half years in duration, and may include a thesis component. Entrance requirements are generally multivariable calculus, linear algebra, differential equations and some exposure to computer programming (usually C++); programs emphasizing financial mathematics may require some background in measure theory.
Initially, the curriculum builds quantitative skills, and simultaneously develops the underlying finance theory:
The quantitative component draws on applied mathematics, computer science and statistical modelling, and emphasizes stochastic calculus, numerical methods and simulation techniques. Some programs also focus on econometrics / time series analysis.
The theory component usually includes a formal study of financial economics, addressing asset pricing and financial markets; some programs may also include general coverage of economics, accounting, corporate finance and portfolio management.
The components are then integrated, addressing the modelling, valuation and hedging of equity derivatives, commodity derivatives, foreign exchange derivatives, and fixed income instruments and their related credit- and interest rate derivatives.
Programs often include dedicated modules in market risk and credit risk, with some degrees offered as specialized “Masters in Financial Risk Management”;
the techniques covered are value at risk, stress testing, and "sensitivities" analysis, and in parallel, the Basel capital / liquidity requirements.
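As a minimal illustration of one of the techniques named above, the sketch below computes a one-day value at risk by historical simulation; the simulated return series, the 99% confidence level, and the portfolio value are illustrative assumptions only, not part of any particular curriculum or standard.

```python
import random

# Illustrative daily portfolio returns; in practice these would be historical observations.
random.seed(0)
returns = [random.gauss(0.0005, 0.012) for _ in range(750)]  # roughly three years of trading days

def historical_var(returns, confidence=0.99):
    """One-day value at risk by historical simulation: the loss level exceeded
    on only (1 - confidence) of the observed days, as a fraction of portfolio value."""
    losses = sorted(-r for r in returns)          # losses as positive fractions, ascending
    index = int(confidence * len(losses)) - 1     # position of the chosen percentile
    return losses[index]

portfolio_value = 1_000_000
var_fraction = historical_var(returns, confidence=0.99)
print(f"1-day 99% VaR: {var_fraction:.2%} of portfolio, i.e. about {portfolio_value * var_fraction:,.0f}")
```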
Increasingly, programs include quantitative portfolio management and portfolio optimization.
Recently, topics (or specializations) in data science and machine learning are becoming common.
The title of the degree will depend on emphasis, the major differences between programs being the curriculum's distribution between mathematical theory, quantitative techniques and financial applications. The more theoretically oriented degrees are usually termed "Master's in Mathematical Finance" or "Master's in Financial Mathematics" while those oriented toward practice are termed "Master's in Financial Engineering" (MFE or MSFE), "Master's in Computational Finance" (MCF or MSCF), or sometimes simply "Master's in Finance" (MFin). "Master's in Quantitative Finance" is the more general degree title, although "MQF" degrees are often less theoretical and more practical. The practice oriented programs are often positioned as professional degrees (and in the United States, are sometimes offered as Professional Science Master's). Programs are sometimes offered as a Master of Engineering,
or as a Master of Operations Research.
Comparison with other qualifications
The program differs from a Master of Science in Finance (MSF), and an MBA in finance, in that these degrees aim to produce finance generalists as opposed to "quants", and therefore focus on corporate finance, accounting, equity valuation and portfolio management. The treatment of any common topics—usually "derivatives", financial modeling, and risk management—will be less (or even non) technical. Entrance requirements are similarly less mathematical. Note that Master of Finance (M.Fin.) and MSc. in Finance degrees, as distinct from the MSF, may be substantially similar to the MQF.
There is some overlap with degrees in actuarial science, and both degrees are occasionally offered by the same department. Nevertheless, the programs are almost always separate and distinct. Specifically, whereas actuarial programs cover risk and uncertainty as applied to pensions, insurance and investments, quantitative finance programs are broader (although offer less depth in these areas), and prepare graduates for various of the highly numerate roles in finance and for other areas that require "quants".
There is similarly overlap with a Master of Financial Economics, although the emphasis is very different. That degree focuses on the underlying economics, and on developing and testing theoretical models, and aims to prepare graduates for research based roles and for doctoral study. The curriculum therefore emphasises coverage of financial theory, and of econometrics, while the treatment of model implementation (through mathematical modeling and programming), while important, is secondary. Entrance requirements are similarly less mathematical. Some Financial Economics degrees are substantially quantitative, and are largely akin to the MQF.
For students whose interests in finance are commercial rather than academic, a Master's in Quantitative Finance may be seen as an alternative to a PhD in finance. At the same time though, "Master's in Mathematical Finance" programs are often positioned as providing a basis for doctoral study.
History
In 1989 Cornell University's Operations Research and Information Engineering department hosted the first ever academic meeting to focus on financial engineering, which led to the development of the first research journal in the field, Mathematical Finance. The first quantitative finance master's programs in the US were offered by Illinois Institute of Technology in 1990, under Dr. Michael Ong.
The programs offered were the "Master of Science in Quantitative Finance" and "Master of Science in Financial Markets and Trading", and were combined in 2008 to become the "Master of Science in Finance, with Financial Engineering Concentration".
The NYU-Poly Financial Engineering degree was the second program of its kind,
and the first to be certified by the International Association of Financial Engineers.
Carnegie Mellon introduced its "Master of Computational Finance" program in 1994.
OGI's Computational Finance Program (1996, now discontinued) was the first such program based in a computer science department.
Other pioneering programs include those at NYU's Courant Institute, Columbia, Princeton, Cornell, UCLA, DePaul and MIT. The program in Quantitative Finance is also popular in the DACH region with renowned programs at Vienna University of Economics and Business, ETH Zurich (together with University of Zurich), and University of St. Gallen.
Subsequent growth in the number and location of programs has paralleled the growth of financial engineering—with its growing importance across all aspects of the financial services industries—and of risk management as professions.
Programs are now widely offered internationally—see links below—and in some cases are available online or via distance education
(e.g. Washington,
York,
Stevens,
USC,
NUS,
TU Kaiserslautern).
In a few cases, a quantitative-finance MBA-specialization is offered.
More recently undergraduate programs are available, both in the US
(e.g. Ball State,
James Madison,
McIntire).
and internationally
(e.g. Essex,
HKUST,
UNISA).
See also
Certificate in Quantitative Finance
Financial modeling
List of quantitative analysts
Master of Finance
Master of Financial Economics
Mathematical finance
QEM
Quantitative analyst
References
External links
Financial Engineering Core Body of Knowledge, International Association for Quantitative Finance
Listing of academic programs in financial engineering/financial mathematics, International Association for Quantitative Finance
Quantitative Finance, Master
Mathematical finance
Business qualifications | Master of Quantitative Finance | Mathematics | 1,441 |
35,231,573 | https://en.wikipedia.org/wiki/Work%20motivation | Work motivation is a person's internal disposition toward work. To further this, an incentive is the anticipated reward or aversive event available in the environment. While motivation can often be used as a tool to help predict behavior, it varies greatly among individuals and must often be combined with ability and environmental factors to actually influence behavior and performance. Results from a 2012 study, which examined age-related differences in work motivation, suggest a "shift in people's motives" rather than a general decline in motivation with age. That is, it seemed that older employees were less motivated by extrinsically related features of a job, but more by intrinsically rewarding job features. Work motivation is strongly influenced by certain cultural characteristics. Between countries with comparable levels of economic development, collectivist countries tend to have higher levels of work motivation than do countries that tend toward individualism. Similarly measured, higher levels of work motivation can be found in countries that exhibit a long versus a short-term orientation. Also, while national income is not itself a strong predictor of work motivation, indicators that describe a nation's economic strength and stability, such as life expectancy, are. Work motivation decreases as a nation's long-term economic strength increases. Currently work motivation research has explored motivation that may not be consciously driven. This method goal setting is referred to as goal priming.
It is important for organizations to understand and to structure the work environment to encourage productive behaviors and discourage those that are unproductive given work motivation's role in influencing workplace behavior and performance. Motivational systems are at the center of behavioral organization. Emmons states, “Behavior is a discrepancy-reduction process, whereby individuals act to minimize the discrepancy between their present condition and a desired standard or goal” (1999, p. 28). If we look at this from the standpoint of how leaders can motivate their followers to enhance their performance, participation in any organization involves exercising choice; a person chooses among alternatives, responding to the motivation to perform or ignore what is offered. This suggests that a follower's consideration of personal interests and the desire to expand knowledge and skill has significant motivational impact, requiring the leader to consider motivating strategies to enhance performance. There is general consensus that motivation involves three psychological processes: arousal, direction, and intensity. Arousal is what initiates action. It is fueled by a person's need or desire for something that is missing from their lives at a given moment, either totally or partially. Direction refers to the path employees take in accomplishing the goals they set for themselves. Finally, intensity is the vigor and amount of energy employees put into this goal-directed work performance. The level of intensity is based on the importance and difficulty of the goal. These psychological processes result in four outcomes. First, motivation serves to direct attention, focusing on particular issues, people, tasks, etc. It also serves to stimulate an employee to put forth effort. Next, motivation results in persistence, preventing one from deviating from the goal-seeking behavior. Finally, motivation results in task strategies, which as defined by Mitchell & Daniels, are "patterns of behavior produced to reach a particular goal".
Theories
A number of various theories attempt to describe employee motivation within the discipline of industrial and organizational psychology. At the macro level, work motivation can be categorized into two types, endogenous process (individual, cognitive) theories and exogenous cause (environmental) theories. Many theories fit simply into one type, but hybrid types such as self-determination theory attempt to account for both.
It can be helpful to further divide theories into the four broad categories of need-based, cognitive process, behavioral, and job-based.
Need-based theories
Need-based theories of motivation focus on an employee's drive to satisfy a variety of needs through their work. These needs range from basic physiological needs for survival to higher psychoemotional needs like belonging and self-actualization.
Maslow's hierarchy of needs
Abraham Maslow's Hierarchy of Needs (1943) was applied to offer an explanation of how the work environment motivates employees. In accordance with Maslow's theory, which was not specifically developed to explain behavior in the workplace, employees strive to satisfy their needs in a hierarchical order.
At the most basic level, an employee is motivated to work in order to satisfy basic physiological needs for survival, such as having enough money to purchase food. The next level of need in the hierarchy is safety, which could be interpreted to mean adequate housing or living in a safe neighborhood. The next three levels in Maslow's theory relate to intellectual and psycho-emotional needs: love and belonging, esteem (which refers to competence and mastery), and finally the highest order need, self-actualization.
Although Maslow's theory is widely known, in the workplace it has proven to be a poor predictor of employee behavior. Maslow theorized that people will not seek to satisfy a higher level need until their lower level needs are met. There has been little empirical support for the idea that employees in the workplace strive to meet their needs only in the hierarchical order prescribed by Maslow.
Building on Maslow's theory, Clayton Alderfer (1959) collapsed the levels in Maslow's theory from five to three: existence, relatedness and growth. This theory, called the ERG theory, does not propose that employees attempt to satisfy these needs in a strictly hierarchical manner. Empirical support for this theory has been mixed.
Need for achievement
Atkinson & McClelland's Need for Achievement Theory is the most relevant and applicable need-based theory in the I–O psychologist's arsenal. Unlike other need-based theories, which try to interpret every need, Need for Achievement allows the I–O psychologist to concentrate research into a tighter focus. Studies show those who have a high need for achievement prefer moderate levels of risk, seek feedback, and are likely to immerse themselves in their work. Achievement motivation can be broken down into three types:
Achievement – seeks position advancement, feedback, and sense of accomplishment
Authority – need to lead, make an impact and be heard by others
Affiliation – need for friendly social interactions and to be liked.
Because most individuals have a combination of these three types (in various proportions), an understanding of these achievement motivation characteristics can be a useful assistance to management in job placement, recruitment, etc.
The theory is referred to as Need for Achievement because these individuals are theorized to be the most effective employees and leaders in the workplace. These individuals strive to achieve their goals and advance in the organization. They tend to be dedicated to their work and strive hard to succeed. Such individuals also demonstrate a strong desire for increasing their knowledge and for feedback on their performance, often in the form of performance appraisal.
The Need for Achievement is in many ways similar to the need for mastery and self-actualization in Maslow's hierarchy of needs and growth in the ERG theory. The achievement orientation has garnered more research interest as compared to the need for affiliation or power.
Cognitive process theories
Equity theory
Equity Theory is derived from social exchange theory. It explains motivation in the workplace as a cognitive process of evaluation, whereby the employee seeks to achieve a balance between inputs or efforts in the workplace and the outcomes or rewards received or anticipated.
In particular, Equity Theory research has tested employee sentiments regarding equitable compensation. Employee inputs take the form of work volume and quality, performance, knowledge, skills, attributes and behaviors. The company-generated outcomes include rewards such as compensation, praise and advancement opportunities. The employee compares their inputs relative to outcomes; and, then, extrapolating to the social context, the employee compares their input/outcome ratio with the perceived ratios of others. If the employee perceives an inequity, the theory posits that the employee will adjust their behavior to bring things into balance.
Equity Theory has proven relevance in situations where an employee is under-compensated. If an employee perceives that they are under-compensated, they can adjust their behavior to achieve equilibrium in several different ways:
reduce input to a level they believe better matches their level of compensation
change or adjust the comparative standard to which they are comparing their situation
cognitively adjust their perception of their inputs or the outcomes received
withdraw
ask their employer for increased compensation
engage in employee theft
If the employee is able to achieve a ratio of inputs to outputs that they perceive to be equitable, then the employee will be satisfied. The employee's evaluation of input-to-output ratios and subsequent striving to achieve equilibrium is an ongoing process.
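The comparison process described above reduces to a ratio check; the following sketch is an editor's illustrative reading of that comparison (the function name, the tolerance band, and the numbers are invented), not a validated measurement instrument.

```python
def equity_perception(own_outcomes, own_inputs, ref_outcomes, ref_inputs, tolerance=0.05):
    """Compare an employee's outcome/input ratio with a referent's ratio and
    classify the perceived state, per Equity Theory's basic comparison."""
    own_ratio = own_outcomes / own_inputs
    ref_ratio = ref_outcomes / ref_inputs
    if abs(own_ratio - ref_ratio) <= tolerance * ref_ratio:
        return "equitable"
    return "under-rewarded" if own_ratio < ref_ratio else "over-rewarded"

# Hypothetical example: same weekly hours (inputs) but lower pay (outcomes) than a coworker.
print(equity_perception(own_outcomes=50_000, own_inputs=40,
                        ref_outcomes=60_000, ref_inputs=40))  # -> under-rewarded
```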
While it has been established that Equity Theory provides insight into scenarios of under-compensation, the theory has generally failed to demonstrate its usefulness in understanding scenarios of overcompensation. In this way, it could be said Equity Theory is more useful in describing factors that contribute to a lack of motivation rather than increasing motivation in the workplace. Concepts of organizational justice later expanded upon the fundamentals of Equity Theory and pointed to the importance of fairness perceptions in the workplace.
There are four fairness perceptions applied to organizational settings:
Distributive justice, or the perception of equality of an individual's outcomes
Procedural justice, or the fairness of the procedures used to determine one's outcomes
Interactional justice, or the perception that one has been treated fairly with dignity and respect
Informational Justice, or the perception that one has been given all the information one needs in order to best perform their jobs
When workplace processes are perceived as fair, the benefits to an organization can be high. In such environments, employees are more likely to comply with policies even if their personal outcome is less than optimal. When workplace policies are perceived as unfair, risks for retaliation and related behaviors such as sabotage and workplace violence can increase.
Leventhal (1980) described six criteria for creating fair procedures in an organization. He proposed that procedures and policies should be:
consistently applied to everyone in the organization
free from bias
accurate
correctable
representative of all concerns
based on prevailing ethics
Expectancy theory
According to Vroom's Expectancy Theory, an employee will work smarter and/or harder if they believe their additional efforts will lead to valued rewards. Expectancy theory explains this increased output of effort by means of the equation F = E × I × V,
where:
F (Effort or Motivational Force) = Effort the employee will expend to achieve the desired performance;
E (Expectancy) = Belief that effort will result in desired level of performance;
I (Instrumentality) = Belief that desired level of performance will result in desired outcome;
V (Valence) = Value of the outcome to the employee
Expectancy theory has been shown to have useful applications in designing a reward system. If policies are consistently, clearly and fairly implemented, then the instrumentality would be high. If the rewards are substantial enough to be meaningful to an employee, then the valence would be also considered high. A precursor to motivation is that the employee finds the reward(s) attractive. In some instances, the reward or outcome might inadvertently be unattractive, such as increased workload or demanding travel that may come with a promotion. In such an instance, the valence might be lower for individuals who feel work–life balance is important, for example.
Expectancy theory posits employee satisfaction to be an outcome of performance rather than the cause of performance. However, if a pattern is established whereby an employee understands that their performance will lead to certain desired rewards, an employee's motivation can be strengthened based on anticipation. If the employees foresee a high probability that they can successfully carry out a desired behavior, and that their behavior will lead to a valued outcome, then they will direct their efforts toward that end.
Expectancy theory has been shown to have greater validity in research in within-subject designs rather than between-subjects designs. That is, it is more useful in predicting how an employee might choose among competing choices for their time and energy, rather than predicting the choices two different employees might make.
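Using the terms defined above and the multiplicative form of the equation, a within-subject comparison of two competing uses of an employee's effort can be sketched as follows; the 0-1 rating scale and the numeric values are hypothetical, chosen only to show how the higher-force option is identified.

```python
def motivational_force(expectancy, instrumentality, valence):
    """Vroom-style motivational force F = E * I * V, with each term rated on a 0-1 scale here."""
    return expectancy * instrumentality * valence

# One employee weighing two ways to spend the afternoon (illustrative ratings).
options = {
    "finish routine report":   motivational_force(expectancy=0.9, instrumentality=0.6, valence=0.4),
    "prepare client proposal": motivational_force(expectancy=0.6, instrumentality=0.8, valence=0.9),
}
best = max(options, key=options.get)
print(best, options)  # the theory predicts effort is directed toward the higher-force option
```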
Goal-setting theory
An I–O psychologist can assist an employer in designing task-related goals for their employees that are
attainable
specific
appropriately difficult,
feedback providing
in hopes of rousing tunnel vision focus in the employees. Following S.M.A.R.T criteria is also suggested.
Studies have shown both feedback from the employer and self-efficacy (belief in one's capabilities to achieve a goal) within the employee must be present for goal-setting to be effective. However, because of the tunnel vision focus created by goal-setting theory, several studies have shown this motivational theory may not be applicable in all situations. In fact, in tasks that require creative on-the-spot improvising, goal-setting can even be counterproductive. Furthermore, because clear goal specificity is essential to a properly designed goal-setting task, multiple goals can create confusion for the employee and the result is a muted overall drive. Despite its flaws, Goal-setting Theory is arguably the most dominant theory in the field of I–O psychology, with over one thousand articles and reviews published in just over thirty years.
Locke suggested several reasons why goals are motivating: they direct attention, lead to task persistence and the development of task strategies for accomplishing the goal. In order for a goal to be motivating, the employee or work group must first accept the goal. While difficult goals can be more motivating, a goal still needs to appear achievable, which in turn will lead to greater goal acceptance. The person or group should have the necessary skills and resources to achieve the goal, or goal acceptance could be negatively impacted. Specific goals that set a performance expectation are more motivating than those that are vague. Similarly, more proximal goals have greater motivation impact than those that are very long range or distal goals.
There are three types of factors that influence goal commitment:
External- The external factors that affect goal commitment are authority, peer influence and external rewards. Complying with the dictates of an authority figure such as a boss has been shown to be an inducement to high goal commitment. Goal commitment increases when the authority figure is physically present and supportive, and with pay increases, peer pressure and external rewards.
Interactive- The factors that influence commitment here are competition and the opportunity to participate in setting goals. It has been shown to be an inducement to setting higher goals and working harder to reach them.
Internal- these come from self-administered rewards and the expectation of success. The commitment decreases when the expectation to achieve is decreased.
From: Psychology and Work Today by Schultz and Schultz.
Feedback while the employee or group is striving for the goal is seen as crucial. Feedback keeps employees on track and reinforces the importance of the goal as well as supporting the employees in adjusting their task strategies.
Goal-setting Theory has strong empirical support dating back thirty years. However, there are some boundary conditions that indicate in some situations, goal-setting can be detrimental to performance on certain types of tasks. Goals require a narrowing of one's focus, so for more complex or creative tasks, goals can actually inhibit performance because they demand cognitive resources. Similarly, when someone is learning a new task, performance-related goals can distract from the learning process. During the learning process, it may be better to focus on mastering the task than achieving a particular result. Finally, too many goals can become distracting and counterproductive, especially if they conflict with one another.
Social cognitive theory
Bandura's Social Cognitive Theory is another cognitive process theory that offers the important concept of self-efficacy for explaining employee's level of motivation relative to workplace tasks or goals. Self-efficacy is an individual's belief in their ability to achieve results in a given scenario. Empirically, studies have shown a strong correlation between self-efficacy and performance. The concept has been extended to group efficacy, which is a group's belief that it can achieve success with a given task or project.
Self-efficacy is seen to mediate important aspects of how an employee undertakes a given task, such as the level of effort and persistence. An employee with high self-efficacy is confident that effort they put forth has a high likelihood of resulting in success. In anticipation of success, an employee is willing to put forth more effort, persist longer, remain focused on the task, seek feedback and choose more effective task strategies.
The antecedents of self-efficacy may be influenced by expectations, training or past experience and requires further research. It has been shown that setting high expectations can lead to improved performance, known as the Pygmalion effect. Low expectations can lower self-efficacy and is referred to as the golem effect.
Relative to training, a mastery-oriented approach has been shown to be an effective way to bolster self-efficacy. In such an approach, the goal of training is to focus on mastering skills or tasks rather than focusing on an immediate performance-related outcome. Individuals who believe that mastery can be achieved through training and practice are more likely to develop greater self-efficacy than those who see mastery as a product of inherent talent that is largely immutable.
Major concepts of Social Cognitive Theory correlated with the effect of individual behavior change:
Self-efficacy, or an individual's confidence in accomplishing a behavior
Behavioral capability, or knowledge and skill to execute a behavior
Expectations, or anticipation of outcomes of a behavior
Expectancies, or giving values to the outcome of behavior change
Self-control, or regulating behavior or performance
Observational learning, or watching the actions and outcomes performed by others
Reinforcements, or encouraging motivations and rewards to promote behavior change
Behavioral approach to motivation
The behavioral approach to workplace motivation is known as Organizational Behavioral Modification. This approach applies the tenets of behaviorism developed by B.F. Skinner to promote employee behaviors that an employer deems beneficial and discourage those that are not.
Any stimulus that increases the likelihood of a behavior is a reinforcer. An effective use of positive reinforcement would be frequent praise while an employee is learning a new task. An employee's behavior can also be shaped during the learning process if approximations of the ideal behavior are praised or rewarded. The frequency of reinforcement is an important consideration. While frequent praise during the learning process can be beneficial, it can be hard to sustain indefinitely.
A variable-ratio schedule of reinforcement, where the frequency of reinforcement varies unpredictably, also can be highly effective if used in instances where it is ethical to do so. Providing praise on a variable-ratio schedule would be appropriate, whereas paying an employee on an unpredictable variable-ratio schedule would not be.
Compensation and other reward programs provide behavioral reinforcement, and if carefully crafted, can provide powerful incentives to employees. Behavioral principles can also be used to address undesirable behaviors in the workplace, but punishment should be used judiciously. If overused, punishment can negatively impact employee's perception of fairness in the workplace.
In general, the less time that elapses between a behavior and its consequence, the more impactful a consequence is likely to be.
Job-based theories
The job-based theories hold that the key to motivation is within an employee's job itself. Generally, these theories say that jobs can be motivating by their very design. This is a particularly useful view for organizations, because the practices set out in the theories can be implemented more practically in an organization. Ultimately, according to the job-based theories, the key to finding motivation through one's job is being able to derive satisfaction from the job content.
Motivation–hygiene theory
Herzberg's Motivation–Hygiene Theory holds that the content of a person's job is the primary source of motivation. In other words, he argued against the commonly held belief that money and other compensation is the most effective form of motivation to an employee. Instead, Herzberg posited that high levels of what he dubbed hygiene factors (pay, job security, status, working conditions, fringe benefits, job policies, and relations with co-workers) could only reduce employee dissatisfaction (not create satisfaction). Motivation factors (level of challenge, the work itself, responsibility, recognition, advancement, intrinsic interest, autonomy, and opportunities for creativity), however, could stimulate satisfaction within the employee, provided that minimum levels of the hygiene factors were reached. For an organization to take full advantage of Herzberg's theory, it must design jobs in such a way that motivators are built in, and thus are intrinsically rewarding. While the Motivation–Hygiene Theory was the first to focus on job content, it has not been strongly supported through empirical studies.
Frederick Herzberg also came up with the concept of job enrichment, which expands jobs to give employees a greater role in planning, performing, and evaluating their work, thus providing the chance to satisfy their motivator needs. Suggested ways to do this include removing some management control and providing regular and continuous feedback. Proper job enrichment, therefore, involves more than simply giving the workers extra tasks to perform. It means expanding the level of knowledge and skills needed to perform the job.
Job characteristics theory
Shortly after Herzberg's Two-factor theory, Hackman and Oldham contributed their own, more refined, job-based theory: Job Characteristics Theory (JCT). JCT attempts to define the association between core job dimensions, the critical psychological states that occur as a result of these dimensions, the personal and work outcomes, and growth-need strength. Core job dimensions are the characteristics of a person's job. The core job dimensions are linked directly to the critical psychological states. The Job Characteristics Model (JCM), as designed by Hackman and Oldham, attempts to use job design to improve employee intrinsic motivation. They show that any job can be described in terms of five key job characteristics: skill variety, task identity, task significance, autonomy, and task feedback.
According to the JCT, an organization that provides workers with sufficient levels of skill variety (using different skills and talents in performing work), task identity (contributing to a clearly identifiable larger project), and task significance (impacting the lives or work of other people) is likely to have workers who feel their work has meaning and value. Sufficiently high levels of autonomy (independence, freedom and discretion in carrying out the job) will inspire the worker to feel responsibility for the work; and sufficiently high levels of Task Feedback (receiving timely, clear, specific, detailed, actionable information about the effectiveness of their job performance) will inspire the worker to feel the organization is authentically interested in helping to foster their professional development and growth. The combined effect of these psychological states results in desired personal and work outcomes: intrinsic motivation, job satisfaction, performance quality, low absenteeism, and low turnover rate.
Lastly, the glue of this theory is the "growth-need strength" factor which ultimately determines the effectiveness of the core job dimensions on the psychological states, and likewise the effectiveness of the critical psychological states on the affective outcomes. Further analysis of Job Characteristics Theory can be found in the Work Design section below.
Hackman and Oldham created the Job Diagnostic Survey (JDS), which measures three parts of their theory.
Employees' views of the job characteristics
The level of growth needed by each employee
Employees' overall job satisfaction
The JDS is the most frequently and commonly used tool to measure job and work design. The JDS is a self-report instrument containing short, detailed phrases for the different job characteristics. An employee is asked to fill out the JDS and rate how precisely each statement describes their job.
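For illustration only, JDS-style ratings of the five core characteristics are often combined into a single motivating potential score; the weighting below follows the commonly cited Hackman–Oldham formula, which is not quoted in this article, and the ratings themselves are invented.

```python
def motivating_potential_score(skill_variety, task_identity, task_significance, autonomy, feedback):
    """Commonly cited Hackman-Oldham MPS: the three 'meaningfulness' facets are averaged,
    while autonomy and feedback enter multiplicatively (ratings on a 1-7 scale)."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# Hypothetical JDS ratings for two jobs on a 1-7 scale.
print(motivating_potential_score(6, 5, 6, autonomy=2, feedback=3))  # low autonomy/feedback -> about 34
print(motivating_potential_score(5, 5, 5, autonomy=6, feedback=6))  # a more enriched job    -> 180
```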
Self-regulation theory
A theory based in self-efficacy, self-regulation is "a theory of motivation based on the setting of goals and the receipt of accurate feedback that is monitored to enhance the likelihood of goal attainment". It is presumed that people consciously set goals for themselves that guide and direct their behavior toward the attainment of these goals. These people also engage in self-monitoring or self-evaluation. Self-evaluation can be helped along if feedback is given when a person is working on their goals, because it can align how a person feels about how they are doing to achieve a goal with what they are actually doing to achieve their goals. In short, feedback provides an "error" message so that a person who is off-track can reevaluate their goal.
This theory has been linked to Goal setting and Goal Setting Theory, which has been mentioned above.
Work engagement
A new approach to work motivation is the idea of Work Engagement or "A conception of motivation whereby individuals are physically immersed in emotionally and intellectually fulfilling work." This theory draws on many aspects of I/O Psychology. This theory proposes that motivation taps into energy where it allows a person to focus on a task. According to Schaufeli and Bakker there are three dimensions to work engagement.
Vigor- a sense of personal energy for work
Dedication- experiencing a sense of pride in one's work and challenge from it
Absorption- The capacity to be engrossed in work and to experience a sense of flow.
Work Engagement forwards the notion that individuals have the ability to contribute more to their own productivity than organizations typically allow. An example would be to allow workers to take some risks and not punish them if the risks lead to unsuccessful outcomes. "In short, work engagement can be thought of as an interaction of individuals and work. Engagement can occur when both facilitate each other, and engagement will not occur when either (or both) thwarts each other." Some critics of work engagement say that this is nothing new, just "old wine in a new bottle."
Applications of motivation
Organizational reward systems
Organizational reward systems have a significant impact on employees' level of motivation. Rewards can be either tangible or intangible. Various forms of pay, such as salary, commissions, bonuses, employee ownership programs and various types of profit or gain sharing programs, are all important tangible rewards. While fringe benefits have a positive impact on attraction and retention, their direct impact on motivation and performance is not well-defined.
Salaries play a crucial role in the tangible reward system. They are an important factor in attracting new talent to an organization as well as retaining talent. Compensating employees well is one way for an organization to reinforce an employee's value to the organization. If an organization is known for paying their employees top dollar, then they may develop a positive reputation in the job market as a result.
Through incentive compensation structures, employees can be guided to focus their attention and efforts on certain organizational goals. The goals that are reinforced through incentive pay should be carefully considered to make sure they are in alignment with the organizational objectives. If there are multiple rewards programs, it is important to consider if there might be any conflicting goals. For example, individual and team-based rewards can sometime work at cross-purposes.
Important forms of intangible rewards include praise and recognition. Intangible rewards are ones from which an employee does not derive any material gain. Such rewards have the greatest impact when they soon follow the desired behavior and are closely tied to the performance. If an organization wants to use praise or other intangible rewards effectively, praise should be offered for a high level of performance and for things that the employee has control over. Some studies have shown that praise can be as effective as tangible rewards.
Other forms of intangible rewards include status symbols, such as a corner office, and increased autonomy and freedom. Increased autonomy demonstrates trust in an employee, may decrease occupational stress, and may improve job satisfaction. A 2010 study found positive relationships between job satisfaction and life satisfaction, happiness at work, positive affect, and the absence of negative affect, which may also be interrelated with work motivation. Since it may be hard for an employee to achieve a similar level of trust in a new organization, increased autonomy may also help improve retention.
Motivation through design of work
Reward-based systems are certainly the more common practice for attempting to influence motivation within an organization, but some employers strive to design the work itself to be more conducive. There are multiple ways an organization can leverage job design principles to increase motivation. Three of the predominant approaches will be discussed here: the Humanistic Approach, the Job Characteristics Approach, and the Interdisciplinary Approach.
Humanistic Approach
The Humanistic Approach to job design was a reaction to "worker dissatisfaction over Scientific Management" and focused on providing employees with more input and an opportunity to maximize their personal achievement as referenced by Jex and Britt. Jobs should also provide intellectual stimulation, opportunities for creativity, and greater discretion over work-related activities. Two approaches used in the Humanistic Approach to job design are job rotation and job enrichment. Job rotation allows employees to switch to different jobs which allows them to learn new skills and provides them with greater variety. According to Jex and Britt, this would be most effective for simple jobs that can become mundane and boring over time. Job enrichment is focused on leveraging those aspects of jobs that are labeled motivators, such as control, intellectual challenge, and creativity. The most common form of job enrichment is vertical loading where additional tasks or discretion enhances the initial job design. While there is some evidence to support that job enrichment improves motivation, it is important to note that it is not effective for all people. Some employees are not more motivated by enriched jobs.
Job Characteristics Approach
The Job Characteristics Approach to job design is based on how core dimensions affect motivation. These dimensions include autonomy, variety, significance, feedback, and identity. The goal of job design under Job Characteristics Theory (JCT) is to utilize specific interventions in an effort to enhance these core dimensions.
Vertical Loading – Like the tactic used in the Humanistic Job Enrichment approach, this intervention is designed to enhance autonomy, task identity, task significance, and skill variety by increasing the number of tasks and providing greater levels of control over how those tasks are completed.
Task Combination – By combining tasks into larger units of work and responsibility, task identity may be improved.
Natural Work Units – A form of task combination that represents a logical body of work and responsibility that may enhance both task significance and task identity.
Establishing Client Relationships – Designs interactions between employees and customers, both internal and external, to enhance task identity, feedback, and task significance. This is accomplished by improving the visibility of beneficial effects on customers.
Feedback – By designing open feedback channels, this intervention attempts to increase the amount and value of feedback received.
The process of designing work so as to enhance individual motivation to perform the work is called job enrichment.
While the JCT approach to job design has a significant impact on job satisfaction, the effects on performance are more mixed. Much of the success of implementation of JCT practices is dependent on the organization carefully planning interventions and changes to ensure impact throughout the organization is anticipated. Many companies may have difficulty implementing JCT changes throughout the organization due to its high cost and complexity.
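To make the role of the core dimensions concrete, the sketch below computes the motivating potential score often cited alongside Job Characteristics Theory. The multiplicative weighting is the commonly attributed Hackman–Oldham form and is an assumption here, not a formula stated in this article; the ratings are hypothetical.

```python
def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Motivating potential score (MPS) sketch for Job Characteristics Theory.

    Inputs are survey-style ratings (e.g. on a 1-7 scale). Experienced
    meaningfulness averages three dimensions; autonomy and feedback multiply,
    so a job weak on either scores low no matter how meaningful it is.
    """
    meaningfulness = (skill_variety + task_identity + task_significance) / 3.0
    return meaningfulness * autonomy * feedback

# Hypothetical ratings: the second job differs only in autonomy
print(motivating_potential_score(5, 4, 6, 6, 6))  # 180.0
print(motivating_potential_score(5, 4, 6, 2, 6))  # 60.0
```

Because the terms multiply rather than add, interventions such as vertical loading that raise autonomy or feedback tend to move an index like this far more than small gains spread across the other dimensions.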
Interdisciplinary Approach
One of the most recent approaches to work design, the Interdisciplinary Approach is based on the use of careful assessment of current job design, followed by a cost/benefit analysis, and finally changes based on the area in which a job is lacking. The assessment is conducted using the Multi-method Job Design Questionnaire, which is used to determine if the job is deficient in the areas of motivational, mechanistic, biological, or perceptual motor support. Motivational improvements are aligned with the Job Characteristics theory dimensions. Mechanistic improvements are focused on improving the efficiency of the job design. Biological improvements focus on improvements to ergonomics, health conditions, and employee comfort. Finally, perceptual motor improvements focus on the nature and presentation of the information an employee must work with. If improvements are identified using the questionnaire, the company then evaluates the cost of making the improvements and determines if the potential gains in motivation and performance justify those costs. Because of the analysis and cost/benefit components of the Interdisciplinary Approach, it is often less costly for organizations and implementations can be more effective. Only changes deemed to be appropriate investments are made, thus improving motivation, productivity, and job satisfaction while controlling costs.
Other factors affecting motivation
Creativity
On the cutting edge of research pertaining to motivation in the workplace is the integration of motivation and creativity. Essentially, according to Ambrose and Kulik, the same variables that predict intrinsic motivation are associated with creativity. This is a helpful conclusion in that organizations can measure and influence both creativity and motivation simultaneously. Further, allowing employees to choose creative and challenging jobs/tasks has been shown to improve motivation. Malmelin and Virta indicate that creating new processes or procedures goes hand in hand with such jobs and tasks. In order to increase creativity, setting "creativity goals" can positively influence the process, along with allowing more autonomy (i.e., giving employees freedom to feel/be creative). Other studies have found that team support may enable more creativity in a group setting, also increasing motivation. Keeping creative employees productive and satisfied could be the key to retaining even the most difficult employees.
Groups and teams
As the workplace is changing to include more group-based systems, researching motivation within these groups is of growing importance. To date, a great amount of research has focused on the Job characteristic theory and the Goal-setting Theory. While more research is needed that draws on a broader range of motivation theories, research thus far has concluded several things: (a) semi-autonomous groups report higher levels of job scope (related to intrinsic job satisfaction), extrinsic satisfaction, and organizational commitment; and (b) developmentally mature teams have higher job motivation and innovation. Further, voluntarily formed work teams report high work motivation. Though research shows that appropriate goal-setting influences group motivation and performance, more research is needed in this area (group goals, individual goals, cohesiveness, etc.). There are inseparable mediating variables consisting of group cohesiveness, commitment, and performance. As the workplace environment calls for more and more teams to be formed, research into motivation of teams is ever-pressing. Thus far, overarching research merely suggests that individual-level and team-level sources of motivation are congruent with each other. Consequently, research should be expanded to apply more theories of motivation; look at group dynamics; and essentially conclude how groups can be most impacted to increase motivation and, consequently, performance.
Culture
Organizational cultures can be broken down into three groups: Strong, Strategically Appropriate, and Adaptive. Each has been identified with high-performing organizations and has particular implications for motivation in the workplace.
Strength
The most widely reported effect of culture on performance is that strong cultures result in high performance. The three reasons for this are goal alignment, motivation, and the resulting structure provided. Goal alignment is driven by the proposed unified voice that drives employees in the same direction. Motivation comes from the strength of values and principles in such a culture. Structure is provided by these same attributes, which obviate the need for formal controls that could stifle employees. Researchers, however, have raised questions about the direction of causality and about whether such a unifying voice genuinely drives performance.
Strategic Appropriateness
A strategically appropriate culture motivates due to the direct support for performance in the market and industry: "The better the fit, the better the performance; the poorer the fit, the poorer the performance," state Kotter & Heskett. There is an appeal to the idea that cultures are designed around the operating conditions a firm encounters, although an outstanding issue is how a culture adapts to changes in its environment.
Adaptability
Another perspective in culture literature asserts that in order for an organization to perform at a high level over a long period of time, it must be able to adapt to changes in the environment. According to Ralph Kilmann, in such a culture "there is a shared feeling of confidence: the members believe, without a doubt, that they can effectively manage whatever new problems and opportunities will come their way." In effect, the culture is infused with a high degree of self-efficacy and confidence. As with the strong culture, critics point to the fact that the theory provides nothing in the way of appropriate direction of adaptation that leads to high performance.
Competing Values Framework
Another perspective on culture and motivation comes from the work of Cameron & Quinn and the Competing Values Framework. They divide cultures into four quadrants: Clan, Adhocracy, Market, Hierarchy, with particular characteristics that directly affect employee motivation.
Clan cultures are collaborative and driven by values such as commitment, communication, and individual development. Motivation results from human development, employee engagement, and a high degree of open communication.
Adhocracy cultures are creative and innovative. Motivation in such cultures arises from finding creative solutions to problems, continually improving, and empowering agility.
Market cultures focus on value to the customer and are typically competitive and aggressive. Motivation in the market culture results from winning in the marketplace and creating external partnerships.
And finally, Hierarchy cultures value control, efficiency, and predictability. Motivation in such a culture relies on effectiveness, capability, and consistency. Effective hierarchy cultures have developed mature and capable processes which support smooth operations.
Culture has been shown to directly affect organizational performance. When viewed through the lens of accepted behaviors and ingrained values, culture also profoundly affects motivation. Whether one looks at the type of culture—strong, strategically appropriate, or adaptive—as Kotter & Heskett do, or at the style of culture—Clan, Adhocracy, Market, or Hierarchy—as Cameron & Quinn do, the connection between culture and motivation becomes clear and provides insights into how to hire, task, and motivate employees.
Personality Approach
Personality traits, predispositions, and behaviors can influence work motivation. Influences can be conceptualized in the Big Five trait theory (Barrick & Mount, 1991; John & Srivastava, 1999).
There are two types of personalities: Type A and Type B. Type A's are considered more dominant, aggressive, and work oriented. Type B's are detail focused, task oriented, and possess higher self-control. Individual perceptions may differ based on the job stressor or outcome (Day & Jreige, 2002). Work demands that reflect on personality attributes can depend on tasks, job complexity, relationships, and work stress. The personality attributes most important for a given workplace come down to understanding the organizational work behaviors, the characteristics of the jobs, and the future strategies of the company.
Personality can also influence creativity in the workforce and shape behavioral expectations.
See also
Public service motivation
References
Motivation
Industrial and organizational psychology
Workplace | Work motivation | Biology | 7,798
5,031,680 | https://en.wikipedia.org/wiki/Dating%20violence | Dating abuse or dating violence is the perpetration or threat of an act of violence by at least one member of an unmarried couple on the other member in the context of dating or courtship. It also arises when one partner tries to maintain power and control over the other through abuse or violence, for example when a relationship has broken down. This abuse or violence can take a number of forms, such as sexual assault, sexual harassment, threats, physical violence, verbal, mental, or emotional abuse, social sabotage, and stalking. In extreme cases it may manifest in date rape. It can include psychological abuse, emotional blackmail, sexual abuse, physical abuse and psychological manipulation.
Dating violence crosses all racial, age, economic and social lines. The Center for Relationship Abuse Awareness describes dating abuse as a "pattern of abusive and coercive behaviors used to maintain power and control over a former or current intimate partner."
Profiles of abuser and victim
Abuse can occur regardless of the individual's age, race, income, or other demographic traits. There are, however, many traits that abusers and victims share in common.
The Centre for Promoting Alternatives to Violence describes abusers as being obsessively jealous and possessive, overly confident, having mood swings or a history of violence or temper, seeking to isolate their partner from family, friends and colleagues, and having a tendency to blame external stressors.
Meanwhile, victims of relationship abuse share many traits as well, including: physical signs of injury, missing time at work or school, slipping performance at work or school, changes in mood or personality, increased use of drugs or alcohol, and increasing isolation from friends and family. Victims may blame themselves for any abuse that occurs or may minimize the severity of the crime. This often leads to victims choosing to stay in abusive relationships.
Strauss (2005) argues that while men inflict the greater share of injuries in domestic violence, researchers and society at large must not overlook the substantial minority of injuries inflicted by women. Additionally, Strauss notes that even relatively minor acts of physical aggression by women are a serious concern:
'Minor' assaults perpetrated by women are also a major problem, even when they do not result in injury, because they put women in danger of much more severe retaliation by men. [...] It will be argued that in order to end 'wife beating,' it is essential for women also to end what many regard as a 'harmless' pattern of slapping, kicking, or throwing something at a male partner who persists in some outrageous behavior and 'won't listen to reason.'
Similarly, Deborah Capaldi reports that a 13-year longitudinal study found that a woman's aggression towards a man was equally important as the man's tendency towards violence in predicting the likelihood of overall violence: "Since much IPV [Intimate Partner Violence] is mutual and women as well as men initiate IPV, prevention and treatment approaches should attempt to reduce women's violence as well as men's violence. Such an approach has a much higher chance of increasing women's safety." However, Capaldi's research only focused on at-risk youth, not women in general, and, therefore, may not apply to the entire population.
Characteristics
Emotional abuse
They are afraid of their date.
They are afraid of making the date angry and are unable to even disagree with the date.
Their date has publicly embarrassed and humiliated them.
Psychological abuse
The date threatens to use violence against them or against themselves (e.g., "If you leave me, I will kill myself").
Sexual abuse
The date forces their partner to have sex with them.
They are afraid to say 'no' to the date's demand for a sexual act from them.
The date does not respect them, and is only interested in gratifying their own sexual needs.
The date does not care about the consequences of the sexual act or how their partner feels about it.
Physical abuse
They were subjected to some physical attacks by their partner.
The date has held them down, pushed them, or even punched, kicked or thrown things at them.
Controlling behaviour
The date has tried to keep them from seeing friends.
They are restricted from contacting their family.
They are even forced to choose between the date and their family and friends.
The date insists on knowing where they are at all times and demands that they justify everything they do.
The date becomes furious if they speak with another person of their preferred sex.
The date expects them to ask permission before seeking health care for themselves.
The date dictates what they wear and how they appear in public.
See also
Date rape
Loveisrespect, National Teen Dating Abuse Helpline, of the National Domestic Violence Hotline
Sexual bullying
Teen dating violence
Violence against women
Violence against men
References
Further reading
External links
Canadian resources
RespectED, Provided by the Canadian Red Cross, give information to teens, parents, and teachers about abuse in dating relationships.
UK resources
The Hideout
Women's Aid
Respect
US resources
Center for Relationship Abuse Awareness
National Domestic Violence Hotline
ACADV.org - created by the Alabama Coalition Against Dating Violence, provides a Dating Bill of Rights.
Jennifer Ann.org - provides free educational materials to schools and groups and sponsors video game contests about teen dating violence from Jennifer Ann's Group.
Love Is Not Abuse.org - sponsored by Liz Claiborne, provides educational materials.
Love Is Respect.org - runs the National Teen Dating Abuse Helpline.
- offers articles and fact sheets.
The Safe Space.org - created by Break the Cycle, offers information and allows teens to submit questions.
Abuse
Violence
Intimate partner violence
Gender-related violence | Dating violence | Biology | 1,142 |
15,214,431 | https://en.wikipedia.org/wiki/STH%20%28gene%29 | Saitohin is a protein that in humans is encoded by the STH gene. This intronless gene encodes a protein of 128 amino acids in a single open reading frame. It is located in the human tau gene, in the intron between exons 9 and 10. A single-nucleotide polymorphism (Q7R) changes the glutamine residue at position 7 to arginine. The gene has been associated with susceptibility to multiple degenerative diseases; however, its exact function is still unknown.
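As a quick sanity check of the 128-amino-acid open reading frame, the three-nucleotides-per-codon arithmetic below is standard genetics rather than data from this article:

```python
protein_length_aa = 128
coding_nt = protein_length_aa * 3  # one codon per residue -> 384 nt
orf_nt = coding_nt + 3             # plus a stop codon -> 387 nt
print(coding_nt, orf_nt)
```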
References
Further reading | STH (gene) | Chemistry | 117 |
17,192,773 | https://en.wikipedia.org/wiki/Kink%20instability | A kink instability (also known as a kink oscillation or kink mode) is a current-driven plasma instability characterized by transverse displacements of a plasma column's cross-section from its center of mass without any change in the characteristics of the plasma. It typically develops in a thin plasma column carrying a strong axial current which exceeds the Kruskal–Shafranov limit and is sometimes known as the Kruskal–Shafranov (kink) instability, named after Martin David Kruskal and Vitaly Shafranov.
The kink instability was first widely explored in fusion power machines with Z-pinch configurations in the 1950s. It is one of the common magnetohydrodynamic instability modes which can develop in a pinch plasma and is sometimes referred to as the m = 1 mode. (The other is the m = 0 mode, known as the sausage instability.)
If a "kink" begins to develop in a column the magnetic forces on the inside of the kink become larger than those on the outside, which leads to growth of the perturbation. As it develops at fixed areas in the plasma, kinks belong to the class of "absolute plasma instabilities", as opposed to convective processes.
References
Plasma instabilities | Kink instability | Physics | 256 |
21,930 | https://en.wikipedia.org/wiki/Northern%20blot | The northern blot, or RNA blot, is a technique used in molecular biology research to study gene expression by detection of RNA (or isolated mRNA) in a sample.
With northern blotting it is possible to observe cellular control over structure and function by determining the particular gene expression rates during differentiation and morphogenesis, as well as in abnormal or diseased conditions. Northern blotting involves the use of electrophoresis to separate RNA samples by size, and detection with a hybridization probe complementary to part of or the entire target sequence. Strictly speaking, the term 'northern blot' refers specifically to the capillary transfer of RNA from the electrophoresis gel to the blotting membrane. However, the entire process is commonly referred to as northern blotting. The northern blot technique was developed in 1977 by James Alwine, David Kemp, and George Stark at Stanford University. Northern blotting takes its name from its similarity to the first blotting technique, the Southern blot, named for biologist Edwin Southern. The major difference is that RNA, rather than DNA, is analyzed in the northern blot.
Procedure
A general blotting procedure starts with extraction of total RNA from a homogenized tissue sample or from cells. Eukaryotic mRNA can then be isolated through the use of oligo (dT) cellulose chromatography to isolate only those RNAs with a poly(A) tail. RNA samples are then separated by gel electrophoresis. Since the gels are fragile and the probes are unable to enter the matrix, the RNA samples, now separated by size, are transferred to a nylon membrane through a capillary or vacuum blotting system. A nylon membrane with a positive charge is the most effective for use in northern blotting since the negatively charged nucleic acids have a high affinity for them. The transfer buffer used for the blotting usually contains formamide because it lowers the annealing temperature of the probe-RNA interaction, thus eliminating the need for high temperatures, which could cause RNA degradation. Once the RNA has been transferred to the membrane, it is immobilized through covalent linkage to the membrane by UV light or heat. After a probe has been labeled, it is hybridized to the RNA on the membrane. Experimental conditions that can affect the efficiency and specificity of hybridization include ionic strength, viscosity, duplex length, mismatched base pairs, and base composition. The membrane is washed to ensure that the probe has bound specifically and to prevent background signals from arising. The hybrid signals are then detected by X-ray film and can be quantified by densitometry. To create controls for comparison in a northern blot, samples not displaying the gene product of interest can be used after determination by microarrays or RT-PCR.
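Because the final signals are quantified by densitometry, a typical analysis step is to normalize each target band against a loading control before comparing lanes. The snippet below is a minimal sketch of that normalization; the lane values and the choice of an rRNA loading control are assumptions for illustration, not part of the protocol described here.

```python
def relative_expression(target_density, control_density):
    """Normalize a target band's densitometry value to a loading control
    (e.g. an rRNA band) so that lanes with unequal loading can be compared."""
    return target_density / control_density

# Hypothetical densitometry readings (arbitrary units): (target, loading control)
lanes = {"untreated": (1200.0, 3000.0), "treated": (2600.0, 2900.0)}
normalized = {name: relative_expression(t, c) for name, (t, c) in lanes.items()}
fold_change = normalized["treated"] / normalized["untreated"]
print(normalized)
print(f"fold change: {fold_change:.1f}")  # ~2.2 in this made-up example
```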
Gels
The RNA samples are most commonly separated on agarose gels containing formaldehyde as a denaturing agent for the RNA to limit secondary structure. The gels can be stained with ethidium bromide (EtBr) and viewed under UV light to observe the quality and quantity of RNA before blotting. Polyacrylamide gel electrophoresis with urea can also be used in RNA separation but it is most commonly used for fragmented RNA or microRNAs. An RNA ladder is often run alongside the samples on an electrophoresis gel to observe the size of fragments obtained but in total RNA samples the ribosomal subunits can act as size markers. Since the large ribosomal subunit is 28S (approximately 5kb) and the small ribosomal subunit is 18S (approximately 2kb) two prominent bands appear on the gel, the larger at close to twice the intensity of the smaller.
Probes
Probes for northern blotting are composed of nucleic acids with a complementary sequence to all or part of the RNA of interest. They can be DNA, RNA, or oligonucleotides with a minimum of 25 complementary bases to the target sequence. RNA probes (riboprobes) that are transcribed in vitro are able to withstand more rigorous washing steps preventing some of the background noise. Commonly cDNA is created with labelled primers for the RNA sequence of interest to act as the probe in the northern blot. The probes must be labelled either with radioactive isotopes (32P) or with chemiluminescence in which alkaline phosphatase or horseradish peroxidase (HRP) break down chemiluminescent substrates producing a detectable emission of light. The chemiluminescent labelling can occur in two ways: either the probe is attached to the enzyme, or the probe is labelled with a ligand (e.g. biotin) for which the ligand (e.g., avidin or streptavidin) is attached to the enzyme (e.g. HRP). X-ray film can detect both the radioactive and chemiluminescent signals and many researchers prefer the chemiluminescent signals because they are faster, more sensitive, and reduce the health hazards that go along with radioactive labels. The same membrane can be probed up to five times without a significant loss of the target RNA.
Applications
Northern blotting allows one to observe a particular gene's expression pattern between tissues, organs, developmental stages, environmental stress levels, pathogen infection, and over the course of treatment. The technique has been used to show overexpression of oncogenes and downregulation of tumor-suppressor genes in cancerous cells when compared to 'normal' tissue, as well as the gene expression in the rejection of transplanted organs. If an upregulated gene is observed by an abundance of mRNA on the northern blot the sample can then be sequenced to determine if the gene is known to researchers or if it is a novel finding. The expression patterns obtained under given conditions can provide insight into the function of that gene. Since the RNA is first separated by size, if only one probe type is used variance in the level of each band on the membrane can provide insight into the size of the product, suggesting alternative splice products of the same gene or repetitive sequence motifs. The variance in size of a gene product can also indicate deletions or errors in transcript processing. By altering the probe target used along the known sequence it is possible to determine which region of the RNA is missing.
Advantages and disadvantages
Analysis of gene expression can be done by several different methods including RT-PCR, RNase protection assays, microarrays, RNA-Seq, serial analysis of gene expression (SAGE), as well as northern blotting. Microarrays are quite commonly used and are usually consistent with data obtained from northern blots; however, at times northern blotting is able to detect small changes in gene expression that microarrays cannot. The advantage that microarrays have over northern blots is that thousands of genes can be visualized at a time, while northern blotting is usually looking at one or a small number of genes.
A problem in northern blotting is often sample degradation by RNases (both endogenous to the sample and through environmental contamination), which can be avoided by proper sterilization of glassware and the use of RNase inhibitors such as DEPC (diethylpyrocarbonate). The chemicals used in most northern blots can be a risk to the researcher, since formaldehyde, radioactive material, ethidium bromide, DEPC, and UV light are all harmful under certain exposures. Compared to RT-PCR, northern blotting has a low sensitivity, but it also has a high specificity, which is important to reduce false positive results.
The advantages of using northern blotting include the detection of RNA size, the observation of alternate splice products, the use of probes with partial homology, the quality and quantity of RNA can be measured on the gel prior to blotting, and the membranes can be stored and reprobed for years after blotting.
For northern blotting for the detection of acetylcholinesterase mRNA, a nonradioactive technique was compared with a radioactive technique and found to be as sensitive as the radioactive one, while requiring no protection against radiation and being less time-consuming.
Reverse northern blot
Researchers occasionally use a variant of the procedure known as the reverse northern blot. In this procedure, the substrate nucleic acid (that is affixed to the membrane) is a collection of isolated DNA fragments, and the probe is RNA extracted from a tissue and radioactively labelled.
The use of DNA microarrays, which came into widespread use in the late 1990s and early 2000s, is more akin to the reverse procedure, in that they involve the use of isolated DNA fragments affixed to a substrate, and hybridization with a probe made from cellular RNA. Thus the reverse procedure, though originally uncommon, enabled northern analysis to evolve into gene expression profiling, in which many (possibly all) of the genes in an organism may have their expression monitored.
See also
Western blot
Eastern blot
Northwestern blot
Far-eastern blot
Far-western blot
Differential display
References
External links
OpenWetWare
Molecular biology techniques | Northern blot | Chemistry,Biology | 1,918 |
15,903,307 | https://en.wikipedia.org/wiki/OGLE-TR-123 | OGLE-TR-123 is a binary stellar system containing one of the smallest main-sequence stars whose radius has been measured. It was discovered when the Optical Gravitational Lensing Experiment (OGLE) survey observed the smaller star eclipsing the larger primary. The orbital period is approximately 1.80 days.
OGLE-TR-123B
The smaller star, OGLE-TR-123B, is estimated to have a radius of around 0.13 solar radii and a mass of around 0.085 solar masses, or approximately 90 times Jupiter's mass. This is close to the lowest possible mass, estimated to be around 0.07–0.08 solar masses, for a hydrogen-fusing star.
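As a quick check of the solar-to-Jupiter mass conversion quoted above (the mass ratio of roughly 1048 is a standard astronomical constant assumed here, not a value from this article):

```python
SOLAR_TO_JUPITER = 1047.6          # approximate M_sun / M_jup (assumed constant)

mass_in_solar = 0.085              # quoted mass of OGLE-TR-123B
mass_in_jupiter = mass_in_solar * SOLAR_TO_JUPITER
print(f"{mass_in_jupiter:.0f} Jupiter masses")  # ~89, i.e. roughly 90 M_jup
```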
See also
OGLE-TR-122
EBLM J0555-57
References
Carina (constellation)
Eclipsing binaries
Carinae, V816 | OGLE-TR-123 | Astronomy | 219
664 | https://en.wikipedia.org/wiki/Astronaut | An astronaut (from the Ancient Greek ἄστρον (astron), meaning 'star', and ναύτης (nautēs), meaning 'sailor') is a person trained, equipped, and deployed by a human spaceflight program to serve as a commander or crew member aboard a spacecraft. Although generally reserved for professional space travelers, the term is sometimes applied to anyone who travels into space, including scientists, politicians, journalists, and tourists.
"Astronaut" technically applies to all human space travelers regardless of nationality. However, astronauts fielded by Russia or the Soviet Union are typically known instead as cosmonauts (from the Russian "kosmos" (космос), meaning "space", also borrowed from Greek ). Comparatively recent developments in crewed spaceflight made by China have led to the rise of the term taikonaut (from the Mandarin "tàikōng" (), meaning "space"), although its use is somewhat informal and its origin is unclear. In China, the People's Liberation Army Astronaut Corps astronauts and their foreign counterparts are all officially called hángtiānyuán (, meaning "heaven navigator" or literally "heaven-sailing staff").
Since 1961, 600 astronauts have flown in space. Until 2002, astronauts were sponsored and trained exclusively by governments, either by the military or by civilian space agencies. With the suborbital flight of the privately funded SpaceShipOne in 2004, a new category of astronaut was created: the commercial astronaut.
Definition
The criteria for what constitutes human spaceflight vary, with some focus on the point where the atmosphere becomes so thin that centrifugal force, rather than aerodynamic force, carries a significant portion of the weight of the flight object. The Fédération Aéronautique Internationale (FAI) Sporting Code for astronautics recognizes only flights that exceed the Kármán line, at an altitude of 100 kilometres (62 mi). In the United States, professional, military, and commercial astronauts who travel above an altitude of 50 miles (80 km) are awarded astronaut wings.
To date, 552 people from 36 countries have reached an altitude of 100 km or more, of whom 549 reached low Earth orbit or beyond.
Of these, 24 people have traveled beyond low Earth orbit, either to lunar orbit, the lunar surface, or, in one case, a loop around the Moon. Three of the 24—Jim Lovell, John Young and Eugene Cernan—did so twice.
Under the U.S. definition, 558 people qualify as having reached space, above 50 miles (80 km) in altitude. Of eight X-15 pilots who exceeded 50 miles (80 km) in altitude, only one, Joseph A. Walker, exceeded 100 kilometers (about 62.1 miles), and he did it two times, becoming the first person in space twice. Space travelers have spent over 41,790 man-days (114.5-man-years) in space, including over 100 astronaut-days of spacewalks. The man with the longest cumulative time in space is Oleg Kononenko, who has spent over 1100 days in space. Peggy A. Whitson holds the record for the most time in space by a woman, at 675 days.
Terminology
In 1959, when both the United States and Soviet Union were planning, but had yet to launch humans into space, NASA Administrator T. Keith Glennan and his Deputy Administrator, Hugh Dryden, discussed whether spacecraft crew members should be called astronauts or cosmonauts. Dryden preferred "cosmonaut", on the grounds that flights would occur in and to the broader cosmos, while the "astro" prefix suggested flight specifically to the stars. Most NASA Space Task Group members preferred "astronaut", which survived by common usage as the preferred American term. When the Soviet Union launched the first man into space, Yuri Gagarin in 1961, they chose a term which anglicizes to "cosmonaut".
Astronaut
A professional space traveler is called an astronaut. The first known use of the term "astronaut" in the modern sense was by Neil R. Jones in his 1930 short story "The Death's Head Meteor". The word itself had been known earlier; for example, in Percy Greg's 1880 book Across the Zodiac, "astronaut" referred to a spacecraft. In Les Navigateurs de l'infini (1925) by J.-H. Rosny aîné, the word astronautique (astronautics) was used. The word may have been inspired by "aeronaut", an older term for an air traveler first applied in 1784 to balloonists. An early use of "astronaut" in a non-fiction publication is Eric Frank Russell's poem "The Astronaut", appearing in the November 1934 Bulletin of the British Interplanetary Society.
The first known formal use of the term astronautics in the scientific community was the establishment of the annual International Astronautical Congress in 1950, and the subsequent founding of the International Astronautical Federation the following year.
NASA applies the term astronaut to any crew member aboard NASA spacecraft bound for Earth orbit or beyond. NASA also uses the term as a title for those selected to join its Astronaut Corps. The European Space Agency similarly uses the term astronaut for members of its Astronaut Corps.
Cosmonaut
By convention, an astronaut employed by the Russian Federal Space Agency (or its predecessor, the Soviet space program) is called a cosmonaut in English texts. The word is an Anglicization of kosmonavt (космонавт). Other countries of the former Eastern Bloc use variations of the Russian kosmonavt, such as the Polish kosmonauta (although Poles also used astronauta, and the two words are considered synonyms).
Coinage of the term has been credited to Soviet aeronautics (or "cosmonautics") pioneer Mikhail Tikhonravov (1900–1974). The first cosmonaut was Soviet Air Force pilot Yuri Gagarin, also the first person in space. He was part of the first six Soviet citizens, with German Titov, Yevgeny Khrunov, Andriyan Nikolayev, Pavel Popovich, and Grigoriy Nelyubov, who were given the title of pilot-cosmonaut in January 1961. Valentina Tereshkova was the first female cosmonaut and the first and youngest woman to have flown in space with a solo mission on the Vostok 6 in 1963. On 14 March 1995, Norman Thagard became the first American to ride to space on board a Russian launch vehicle, and thus became the first "American cosmonaut".
Taikonaut
In Chinese, the term yǔhángyuán (宇航员, "cosmos navigating personnel") is used for astronauts and cosmonauts in general, while hángtiānyuán (航天员, "navigating celestial-heaven personnel") is used for Chinese astronauts. Here, hángtiān (航天, literally "heaven-navigating", or spaceflight) is strictly defined as the navigation of outer space within the local star system, i.e. the Solar System. The phrase tàikōng rén (太空人, "spaceman") is often used in Hong Kong and Taiwan.
The term taikonaut is used by some English-language news media organizations for professional space travelers from China. The word has featured in the Longman and Oxford English dictionaries, and the term became more common in 2003 when China sent its first astronaut Yang Liwei into space aboard the Shenzhou 5 spacecraft. This is the term used by Xinhua News Agency in the English version of the Chinese People's Daily since the advent of the Chinese space program. The origin of the term is unclear; as early as May 1998, Chiew Lee Yih () from Malaysia used it in newsgroups.
Parastronaut
For its 2022 Astronaut Group, the European Space Agency envisioned recruiting an astronaut with a physical disability, a category they called "parastronauts", with the intention but not guarantee of spaceflight. The categories of disability considered for the program were individuals with lower limb deficiency (either through amputation or congenital), leg length difference, or a short stature (less than ). On 23 November 2022, John McFall was selected to be the first ESA parastronaut.
Other terms
With the rise of space tourism, NASA and the Russian Federal Space Agency agreed to use the term "spaceflight participant" to distinguish those space travelers from professional astronauts on missions coordinated by those two agencies.
While no nation other than Russia (and previously the Soviet Union), the United States, and China have launched a crewed spacecraft, several other nations have sent people into space in cooperation with one of these countries, e.g. the Soviet-led Interkosmos program. Inspired partly by these missions, other synonyms for astronaut have entered occasional English usage. For example, the term spationaut (spationaute) is sometimes used to describe French space travelers, from the Latin word for "space"; the Malay term angkasawan (deriving from angkasa meaning 'space') was used to describe participants in the Angkasawan program (note its similarity with the Indonesian term antariksawan). Plans of the Indian Space Research Organisation to launch its crewed Gaganyaan spacecraft have at times spurred public discussion about whether a term other than astronaut should be used for the crew members, suggesting vyomanaut (from the Sanskrit word vyoma, meaning 'sky' or 'space') or gagannaut (from the Sanskrit word gagana for 'sky'). In Finland, the NASA astronaut Timothy Kopra, a Finnish American, has sometimes been referred to as sisunautti, from the Finnish word sisu. Across Germanic languages, the word for "astronaut" typically translates to "space traveler", as it does with German's Raumfahrer, Dutch's ruimtevaarder, Swedish's rymdfarare, and Norwegian's romfarer.
As of 2021 in the United States, astronaut status is conferred on a person depending on the authorizing agency:
one who flies in a vehicle above for NASA or the military is considered an astronaut (with no qualifier)
one who flies in a vehicle to the International Space Station in a mission coordinated by NASA and Roscosmos is a spaceflight participant
one who flies above in a non-NASA vehicle as a crewmember and demonstrates activities during flight that are essential to public safety, or contribute to human space flight safety, is considered a commercial astronaut by the Federal Aviation Administration
one who flies to the International Space Station as part of a "privately funded, dedicated commercial spaceflight on a commercial launch vehicle dedicated to the mission ... to conduct approved commercial and marketing activities on the space station (or in a commercial segment attached to the station)" is considered a private astronaut by NASA (as of 2020, nobody has yet qualified for this status)
a generally-accepted but unofficial term for a paying non-crew passenger who flies a private non-NASA or military vehicles above is a space tourist (as of 2020, nobody has yet qualified for this status)
On July 20, 2021, the FAA issued an order redefining the eligibility criteria to be an astronaut in response to the private suborbital spaceflights of Jeff Bezos and Richard Branson. The new criteria state that one must have "[d]emonstrated activities during flight that were essential to public safety, or contributed to human space flight safety" to qualify as an astronaut. This new definition excludes Bezos and Branson.
Space travel milestones
The first human in space was Soviet Yuri Gagarin, who was launched on 12 April 1961, aboard Vostok 1 and orbited around the Earth for 108 minutes. The first woman in space was Soviet Valentina Tereshkova, who launched on 16 June 1963, aboard Vostok 6 and orbited Earth for almost three days.
Alan Shepard became the first American and second person in space on 5 May 1961, on a 15-minute sub-orbital flight aboard Freedom 7. The first American to orbit the Earth was John Glenn, aboard Friendship 7 on 20 February 1962. The first American woman in space was Sally Ride, during Space Shuttle Challenger's mission STS-7, on 18 June 1983. In 1992, Mae Jemison became the first African American woman to travel in space aboard STS-47.
Cosmonaut Alexei Leonov was the first person to conduct an extravehicular activity (EVA), (commonly called a "spacewalk"), on 18 March 1965, on the Soviet Union's Voskhod 2 mission. This was followed two and a half months later by astronaut Ed White who made the first American EVA on NASA's Gemini 4 mission.
The first crewed mission to orbit the Moon, Apollo 8, included American William Anders who was born in Hong Kong, making him the first Asian-born astronaut in 1968.
The Soviet Union, through its Intercosmos program, allowed people from other "socialist" (i.e. Warsaw Pact and other Soviet-allied) countries to fly on its missions, with the notable exceptions of France and Austria participating in Soyuz TM-7 and Soyuz TM-13, respectively. An example is Czechoslovak Vladimír Remek, the first cosmonaut from a country other than the Soviet Union or the United States, who flew to space in 1978 on a Soyuz-U rocket. Rakesh Sharma became the first Indian citizen to travel to space. He was launched aboard Soyuz T-11, on 2 April 1984.
On 23 July 1980, Pham Tuan of Vietnam became the first Asian in space when he flew aboard Soyuz 37. Also in 1980, Cuban Arnaldo Tamayo Méndez became the first person of Hispanic and black African descent to fly in space, and in 1983, Guion Bluford became the first African American to fly into space. In April 1985, Taylor Wang became the first ethnic Chinese person in space. The first person born in Africa to fly in space was Patrick Baudry (France), in 1985. In 1985, Saudi Arabian Prince Sultan Bin Salman Bin AbdulAziz Al-Saud became the first Arab Muslim astronaut in space. In 1988, Abdul Ahad Mohmand became the first Afghan to reach space, spending nine days aboard the Mir space station.
With the increase of seats on the Space Shuttle, the U.S. began taking international astronauts. In 1983, Ulf Merbold of West Germany became the first non-US citizen to fly in a US spacecraft. In 1984, Marc Garneau became the first of eight Canadian astronauts to fly in space (through 2010).
In 1985, Rodolfo Neri Vela became the first Mexican-born person in space. In 1991, Helen Sharman became the first Briton to fly in space.
In 2002, Mark Shuttleworth became the first citizen of an African country to fly in space, as a paying spaceflight participant. In 2003, Ilan Ramon became the first Israeli to fly in space, although he died during a re-entry accident.
On 15 October 2003, Yang Liwei became China's first astronaut on the Shenzhou 5 spacecraft.
On 30 May 2020, Doug Hurley and Bob Behnken became the first astronauts to launch to orbit on a private crewed spacecraft, Crew Dragon.
Age milestones
The youngest person to reach space is Oliver Daemen, who was 18 years and 11 months old when he made a suborbital spaceflight on Blue Origin NS-16. Daemen, who was a commercial passenger aboard the New Shepard, broke the record of Soviet cosmonaut Gherman Titov, who was 25 years old when he flew Vostok 2. Titov remains the youngest human to reach orbit; he rounded the planet 17 times. Titov was also the first person to suffer space sickness and the first person to sleep in space, twice. The oldest person to reach space is William Shatner, who was 90 years old when he made a suborbital spaceflight on Blue Origin NS-18. The oldest person to reach orbit is John Glenn, one of the Mercury 7, who was 77 when he flew on STS-95.
Duration and distance milestones
The longest time spent in space was by Russian Valeri Polyakov, who spent 438 days there.
As of 2006, the most spaceflights by an individual astronaut is seven, a record held by both Jerry L. Ross and Franklin Chang-Diaz. The farthest distance from Earth astronauts have traveled was reached when Jim Lovell, Jack Swigert, and Fred Haise went around the Moon during the Apollo 13 emergency.
Civilian and non-government milestones
The first civilian in space was Valentina Tereshkova aboard Vostok 6 (she also became the first woman in space on that mission).
Tereshkova was only honorarily inducted into the USSR's Air Force, which did not accept female pilots at that time. A month later, Joseph Albert Walker became the first American civilian in space when his X-15 Flight 90 crossed the Kármán line, qualifying him by the international definition of spaceflight. Walker had joined the US Army Air Force but was not a member during his flight.
The first people in space who had never been a member of any country's armed forces were both Konstantin Feoktistov and Boris Yegorov aboard Voskhod 1.
The first non-governmental space traveler was Byron K. Lichtenberg, a researcher from the Massachusetts Institute of Technology who flew on STS-9 in 1983. In December 1990, Toyohiro Akiyama became the first paying space traveler and the first journalist in space for Tokyo Broadcasting System, a visit to Mir as part of an estimated $12 million (USD) deal with a Japanese TV station, although at the time, the term used to refer to Akiyama was "Research Cosmonaut". Akiyama suffered severe space sickness during his mission, which affected his productivity.
The first self-funded space tourist was Dennis Tito on board the Russian spacecraft Soyuz TM-32 on 28 April 2001.
Self-funded travelers
The first person to fly on an entirely privately funded mission was Mike Melvill, piloting SpaceShipOne flight 15P on a suborbital journey, although he was a test pilot employed by Scaled Composites and not an actual paying space tourist. Jared Isaacman was the first person to self-fund a mission to orbit, commanding Inspiration4 in 2021. Nine others have paid Space Adventures to fly to the International Space Station:
Dennis Tito (American): 28 April – 6 May 2001
Mark Shuttleworth (South African): 25 April – 5 May 2002
Gregory Olsen (American): 1–11 October 2005
Anousheh Ansari (Iranian / American): 18–29 September 2006
Charles Simonyi (Hungarian / American): 7–21 April 2007, 26 March – 8 April 2009
Richard Garriott (British / American): 12–24 October 2008
Guy Laliberté (Canadian): 30 September 2009 – 11 October 2009
Yusaku Maezawa and Yozo Hirano (both Japanese): 8 – 24 December 2021
Training
The first NASA astronauts were selected for training in 1959. Early in the space program, military jet test piloting and engineering training were often cited as prerequisites for selection as an astronaut at NASA, although neither John Glenn nor Scott Carpenter (of the Mercury Seven) had any university degree, in engineering or any other discipline at the time of their selection. Selection was initially limited to military pilots. The earliest astronauts for both the US and the USSR tended to be jet fighter pilots, and were often test pilots.
Once selected, NASA astronauts go through twenty months of training in a variety of areas, including training for extravehicular activity in a facility such as NASA's Neutral Buoyancy Laboratory. Astronauts-in-training (astronaut candidates) may also experience short periods of weightlessness (microgravity) in an aircraft called the "Vomit Comet," the nickname given to a pair of modified KC-135s (retired in 2000 and 2004, respectively, and replaced in 2005 with a C-9) which perform parabolic flights. Astronauts are also required to accumulate a number of flight hours in high-performance jet aircraft. This is mostly done in T-38 jet aircraft out of Ellington Field, due to its proximity to the Johnson Space Center. Ellington Field is also where the Shuttle Training Aircraft is maintained and developed, although most flights of the aircraft are conducted from Edwards Air Force Base.
Astronauts in training must learn how to control and fly the Space Shuttle; further, it is vital that they are familiar with the International Space Station so they know what they must do when they get there.
NASA candidacy requirements
The candidate must be a citizen of the United States.
The candidate must complete a master's degree in a STEM field, including engineering, biological science, physical science, computer science or mathematics.
The candidate must have at least two years of related professional experience obtained after degree completion or at least 1,000 hours pilot-in-command time on jet aircraft.
The candidate must be able to pass the NASA long-duration flight astronaut physical.
The candidate must also have skills in leadership, teamwork and communications.
The master's degree requirement can also be met by:
Two years of work toward a doctoral program in a related science, technology, engineering or math field.
A completed Doctor of Medicine or Doctor of Osteopathic Medicine degree.
Completion of a nationally recognized test pilot school program.
Mission Specialist Educator
Applicants must have a bachelor's degree with teaching experience, including work at the kindergarten through twelfth grade level. An advanced degree, such as a master's degree or a doctoral degree, is not required, but is strongly desired.
Mission Specialist Educators, or "Educator Astronauts", were first selected in 2004; as of 2007, there are three NASA Educator astronauts: Joseph M. Acaba, Richard R. Arnold, and Dorothy Metcalf-Lindenburger.
Barbara Morgan, selected as back-up teacher to Christa McAuliffe in 1985, is considered to be the first Educator astronaut by the media, but she trained as a mission specialist.
The Educator Astronaut program is a successor to the Teacher in Space program from the 1980s.
Health risks of space travel
Astronauts are susceptible to a variety of health risks including decompression sickness, barotrauma, immunodeficiencies, loss of bone and muscle, loss of eyesight, orthostatic intolerance, sleep disturbances, and radiation injury. A variety of large scale medical studies are being conducted in space via the National Space Biomedical Research Institute (NSBRI) to address these issues. Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity Study in which astronauts (including former ISS commanders Leroy Chiao and Gennady Padalka) perform ultrasound scans under the guidance of remote experts to diagnose and potentially treat hundreds of medical conditions in space. This study's techniques are now being applied to cover professional and Olympic sports injuries as well as ultrasound performed by non-expert operators in medical and high school students. It is anticipated that remote guided ultrasound will have application on Earth in emergency and rural care situations, where access to a trained physician is often rare.
A 2006 Space Shuttle experiment found that Salmonella typhimurium, a bacterium that can cause food poisoning, became more virulent when cultivated in space. More recently, in 2017, bacteria were found to be more resistant to antibiotics and to thrive in the near-weightlessness of space. Microorganisms have been observed to survive the vacuum of outer space.
On 31 December 2012, a NASA-supported study reported that human spaceflight may harm the brain and accelerate the onset of Alzheimer's disease.
In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.
Over the last decade, flight surgeons and scientists at NASA have seen a pattern of vision problems in astronauts on long-duration space missions. The syndrome, known as visual impairment intracranial pressure (VIIP), has been reported in nearly two-thirds of space explorers after long periods spent aboard the International Space Station (ISS).
On 2 November 2017, scientists reported that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Astronauts who took longer space trips were associated with greater brain changes.
Being in space can be physiologically deconditioning for the body. It can affect the otolith organs and the adaptive capabilities of the central nervous system. Zero gravity and cosmic rays can have many adverse effects on astronauts.
In October 2018, NASA-funded researchers found that lengthy journeys into outer space, including travel to the planet Mars, may substantially damage the gastrointestinal tissues of astronauts. The studies support earlier work that found such journeys could significantly damage the brains of astronauts, and age them prematurely.
Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts.
A study by Russian scientists published in April 2019 stated that astronauts facing space radiation could experience temporary hindrance of their memory centers. While this does not affect their intellectual capabilities, it temporarily hinders the formation of new cells in the brain's memory centers. The study, conducted by the Moscow Institute of Physics and Technology (MIPT), reached this conclusion after observing that exposing mice to neutron and gamma radiation did not impair the rodents' intellectual capabilities.
A 2020 study conducted on the brains of eight male Russian cosmonauts after they returned from long stays aboard the International Space Station showed that long-duration spaceflight causes many physiological adaptions, including macro- and microstructural changes. While scientists still know little about the effects of spaceflight on brain structure, this study showed that space travel can lead to new motor skills (dexterity), but also slightly weaker vision, both of which could possibly be long lasting. It was the first study to provide clear evidence of sensorimotor neuroplasticity, which is the brain's ability to change through growth and reorganization.
Food and drink
An astronaut on the International Space Station requires about mass of food per meal each day (inclusive of about packaging mass per meal).
Space Shuttle astronauts worked with nutritionists to select menus that appealed to their individual tastes. Five months before flight, menus were selected and analyzed for nutritional content by the shuttle dietician. Foods are tested to see how they will react in a reduced gravity environment. Caloric requirements are determined using a basal energy expenditure (BEE) formula. On Earth, the average American uses about of water every day. On board the ISS astronauts limit water use to only about per day.
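The BEE formula itself is not given here; as a hedged sketch, the Harris–Benedict equations are one widely used basal-energy form, shown below with their standard coefficients. The use of this particular equation, and the example crew member, are assumptions for illustration rather than NASA's published procedure.

```python
def basal_energy_expenditure(sex, weight_kg, height_cm, age_years):
    """Harris-Benedict estimate of basal energy expenditure in kcal/day.

    Offered as a generic illustration of a BEE formula; flight dieticians
    would also apply activity and spaceflight-specific adjustment factors.
    """
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_years
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_years

# Hypothetical crew member: 70 kg, 175 cm, 45 years old
print(round(basal_energy_expenditure("male", 70, 175, 45)))  # about 1600 kcal/day
```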
Insignia
In Russia, cosmonauts are awarded Pilot-Cosmonaut of the Russian Federation upon completion of their missions, often accompanied with the award of Hero of the Russian Federation. This follows the practice established in the USSR where cosmonauts were usually awarded the title Hero of the Soviet Union.
At NASA, those who complete astronaut candidate training receive a silver lapel pin. Once they have flown in space, they receive a gold pin. U.S. astronauts who also have active-duty military status receive a special qualification badge, known as the Astronaut Badge, after participation on a spaceflight. The United States Air Force also presents an Astronaut Badge to its pilots who exceed 50 miles (80 km) in altitude.
Deaths
To date, eighteen astronauts (fourteen men and four women) have died during four space flights. By nationality, thirteen were American, four were Russian (Soviet Union), and one was Israeli.
To date, eleven people (all men) have died training for spaceflight: eight Americans and three Russians. Six of these were in crashes of training jet aircraft, one drowned during water recovery training, and four were due to fires in pure oxygen environments.
Astronaut David Scott left a memorial consisting of a statuette titled Fallen Astronaut on the surface of the Moon during his 1971 Apollo 15 mission, along with a list of the names of eight of the astronauts and six cosmonauts known at the time to have died in service.
The Space Mirror Memorial, which stands on the grounds of the Kennedy Space Center Visitor Complex, is maintained by the Astronauts Memorial Foundation and commemorates the lives of the men and women who have died during spaceflight and during training in the space programs of the United States. In addition to twenty NASA career astronauts, the memorial includes the names of an X-15 test pilot, a U.S. Air Force officer who died while training for a then-classified military space program, and a civilian spaceflight participant.
See also
Explanatory notes
References
External links
NASA: How to become an astronaut 101
List of International partnership organizations
Encyclopedia Astronautica: Phantom cosmonauts
collectSPACE: Astronaut appearances calendar
spacefacts Spacefacts.de
Manned astronautics: facts and figures
Astronaut Candidate Brochure online
Science occupations
1959 introductions | Astronaut | Biology | 5,887 |
59,108,153 | https://en.wikipedia.org/wiki/NGC%20688 | NGC 688 is a barred spiral galaxy with starburst activity located 190 million light-years away in the constellation Triangulum. It was discovered by astronomer Heinrich d'Arrest on September 16, 1865 and is a member of the galaxy cluster Abell 262.
See also
List of NGC objects (1–1000)
References
External links
688
1302
6799
Triangulum
Astronomical objects discovered in 1865
Barred spiral galaxies
Abell 262
Starburst galaxies
Markarian galaxies | NGC 688 | Astronomy | 97 |
38,478,835 | https://en.wikipedia.org/wiki/Edge%20index | An edge index is a form of index that consists of marks on the edges of the pages of a printed work. The marks are printed in a step-like arrangement and usually carry order words, letters, or numbers (e.g., A to Z in a dictionary or telephone book). They are usually colored and help readers find desired entries, especially in reference works. They are created by printing to the edge of the sheet so that they are visible on the closed book's edge.
Advantageously, edge indexing is part of the printing process, permits nearly unlimited headings, and does not add to the cost of binding. These advantages are offset by the disadvantage that the reader cannot tell what a mark refers to without opening the book.
Description
Edge indices are forms of index that consist of marks on the edges of the pages of a printed work. The marks are printed in a step-like arrangement and usually carry order words, letters, or numbers (e.g., A to Z in a dictionary or telephone book). They are usually colored and help readers find desired entries, especially in reference works. They are created by printing to the edge of the sheet so that they are visible on the closed book's edge. When each edge index mark labels one chapter, the desired chapter can be found by counting the marks. When the marks label first letters in a dictionary or telephone book, some can be identified by their "thickness" (e.g., in English there are only a few words beginning with "Q" but many beginning with "S").
Advantages and disadvantages
Edge indexing has several advantages: it is part of the printing process, allows nearly unlimited headings, and does not add to the binding costs. Its main disadvantage is that the reader generally cannot tell what a mark refers to without opening the book, though a coloring scheme may be employed to minimize this downside.
Terminology
An edge index is distinct from a thumb index, but the terms thumb index or chapter thumbs have been applied to edge indices.
Gallery
See also
Index (publishing)
References
Book design
Printing | Edge index | Engineering | 422 |
4,070,102 | https://en.wikipedia.org/wiki/Source%20code%20escrow | Source code escrow is the deposit of the source code of software with a third-party escrow agent. Escrow is typically requested by a party licensing software (the licensee), to ensure maintenance of the software instead of abandonment or orphaning. The software's source code is released to the licensee if the licensor files for bankruptcy or otherwise fails to maintain and update the software as promised in the software license agreement.
Necessity of escrow
As the continued operation and maintenance of custom software is critical to many companies, they usually desire to make sure that it continues even if the licensor becomes unable to do so, such as because of bankruptcy. This is most easily achieved by obtaining a copy of the up-to-date source code. The licensor, however, will often be unwilling to agree to this, as the source code will generally represent one of their most closely guarded trade secrets.
As a solution to this conflict of interest, source code escrow ensures that the licensee obtains access to the source code only when the maintenance of the software cannot otherwise be assured, as defined in contractually agreed-upon conditions.
Escrow agreements
Source code escrow takes place in a contractual relationship, formalized in a source code escrow agreement, between at least three parties:
one or several licensors,
one or several licensees,
the escrow agent.
The service provided by the escrow agent – generally a business dedicated to that purpose and independent from either party – consists principally in taking custody of the source code from the licensor and releasing it to the licensee only if the conditions specified in the escrow agreement are met.
Source code escrow agreements provide for the following:
They specify the subject and scope of the escrow. This is generally the source code of a specific software, accompanied by everything that the licensee requires to independently maintain the software, such as documentation, software tools or specialized hardware.
They oblige the licensor to put updated versions of the software in escrow in specific intervals.
They specify the conditions that must be met for the agent to release the source code to the licensee. Typical conditions include the bankruptcy of the licensor, the cancellation of a software development project or the express unwillingness of the licensor to fulfil his contractual maintenance obligations. Because it is often important to the licensee that the code be released as soon as possible once the conditions are met, the conditions tend to be worded as plainly and unambiguously as possible.
They circumscribe the rights obtained by the licensee with respect to the source code after the release of the software. These rights are generally limited and may include the right to modify the source code for the purpose of fixing errors, or the right to continue independent development of the software.
They specify the services provided by the escrow agent beyond a simple custody of the source code. Specialised agents may, for instance, verify that the source code storage media is readable, or even build the software based on the source code, verifying that its features match the binary version used by the licensee.
They may provide that non-compete clauses in the licence agreement, such as any that prohibit the licensee from employing the licensor's employees, are void in the event of the release conditions being met, enabling the licensee to acquire the know-how required for the maintenance of the software.
They also provide for the fees due to the escrow agent for his services.
Whether a source code escrow agreement is entered into at all, and who bears its costs, is subject to agreement between the licensor and the licensee. Software license agreements often provide for a right of the licensee to demand that the source code be put into escrow, or to join an existing escrow agreement.
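The parties and provisions described above can be summarized in a simple data model. The sketch below is purely illustrative; the field names and example values are hypothetical and do not reflect any standard escrow contract.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EscrowAgreement:
    # Parties to the agreement
    licensors: List[str]
    licensees: List[str]
    escrow_agent: str
    # Subject and scope of the deposit (source code plus what is needed to maintain it)
    deposit_scope: List[str]
    deposit_interval_months: int           # how often updated versions must be deposited
    # Conditions and consequences of release
    release_conditions: List[str]          # e.g. bankruptcy, cancelled project, refusal to maintain
    licensee_rights_on_release: List[str]  # e.g. fix errors, continue development
    # Optional verification services and fees
    verification_services: List[str] = field(default_factory=list)
    agent_fee_per_year: float = 0.0

agreement = EscrowAgreement(
    licensors=["ExampleSoft Ltd."],
    licensees=["Acme Corp."],
    escrow_agent="Neutral Escrow Services",
    deposit_scope=["source code", "build scripts", "documentation"],
    deposit_interval_months=6,
    release_conditions=["licensor bankruptcy", "licensor refuses contractual maintenance"],
    licensee_rights_on_release=["modify source code to fix errors"],
    verification_services=["verify storage media is readable"],
    agent_fee_per_year=2500.0,
)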
Bankruptcy laws may interfere with the execution of a source code escrow agreement, if the bankrupt licensor's creditors are legally entitled to seize the licensor's assets – including the code in escrow – upon bankruptcy, preventing the release of the code to the licensee.
Third party escrow agents
Museums, archives and other GLAM organizations have begun to act as independent escrow agents in response to growing digital obsolescence. Notable examples include the Internet Archive (beginning in 2007), the Library of Congress (beginning in 2006), ICHEG, the Computer History Museum, and MoMA.
There are also cases where software communities act as the escrow agent, for instance for the Wing Commander video game series or Ultima 9 of the Ultima series.
Software open-sourcing to the public
The escrow agreements described above are most applicable to custom-developed software which is not available to the general public. In some cases, source code for commercial off-the-shelf software may be deposited into escrow to be released as free and open-source software under an open source license when the original developer ceases development and/or when certain fundraising conditions are met (the threshold pledge system).
For instance, the Blender graphics suite was released in this way following the bankruptcy of Not a Number Technologies; the widely used Qt toolkit is covered by a source code escrow agreement secured by the "KDE Free Qt Foundation".
There are many cases of end-of-life open-sourcing which allow the community continued self-support, see List of commercial video games with later released source code.
See also
Source code repository for open source
Orphan works
References
Further reading
Computerworld (7/20/92, page 99): Don't Rush Into Source Code Escrow
A Guide to IT Contracting: Checklists, Tools, and Techniques (2013), page 262
Software escrow agreement samples
Escrow
Computer law | Source code escrow | Technology | 1,168 |
51,684,664 | https://en.wikipedia.org/wiki/Deep%20Earth%20Carbon%20Degassing%20Project | The Deep Earth Carbon Degassing (DECADE) project is an initiative to unite scientists around the world to make tangible advances towards quantifying the amount of carbon outgassed from the Earth's deep interior (core, mantle, crust) into the surface environment (e.g. biosphere, hydrosphere, cryosphere, atmosphere) through naturally occurring processes. DECADE is an initiative within the Deep Carbon Observatory (DCO).
Volcanoes are the main pathway by which deeply sourced volatiles, including carbon, are transferred from the Earth's interior to the surface environment. An additional, though less well understood, pathway is degassing along faults and fractures within the Earth's crust, often referred to as tectonic degassing. When the DCO was first formed in 2009, estimates of global carbon flux from volcanic regions ranged from 65 to 540 Mt/yr, and global tectonic degassing was virtually unconstrained. This order-of-magnitude uncertainty in volcanic and tectonic carbon outgassing makes answering fundamental questions about the global carbon budget virtually impossible. In particular, one fundamental unknown is whether carbon transferred to the Earth's interior via subduction is efficiently recycled back to the Earth's mantle lithosphere, crust and surface environment through volcanic and tectonic degassing, or whether significant quantities of carbon are subducted into the deep mantle. Because significant quantities of mantle carbon are also released through mid-ocean ridge volcanism, if carbon inputs and outputs at subduction zones are in balance, the net effect will be an imbalance in the global carbon budget, with carbon being preferentially removed from the Earth's deep interior and redistributed to shallower reservoirs including the mantle lithosphere, crust, hydrosphere and atmosphere. The implication may be that carbon concentrations in the surface environment have increased over Earth's history, which has a significant impact on climate change.
Findings from the DECADE project will increase our understanding of how carbon cycles through deep Earth, and patterns in volcanic emissions data could potentially alert scientists to an impending eruption.
Project goals
The main goal of the DECADE project is to refine estimates of global carbon outgassing using a multipronged approach. Specifically, the DECADE initiative unites scientists with expertise in geochemistry, petrology and volcanology to constrain the global volcanic carbon flux by 1) establishing a database of volcanic and hydrothermal gas compositions and fluxes linked to EarthChem/PetDB and the Smithsonian Global Volcanism Program, 2) building a global monitoring network to continuously measure the volcanic carbon flux of 20 active volcanoes, 3) measuring the carbon flux of remote volcanoes for which no or only sparse data are currently available, 4) developing new field and analytical instrumentation for carbon measurements and flux monitoring, and 5) establishing formal collaborations with volcano observatories around the world to support volcanic gas measurement and monitoring activities.
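One common way such monitoring data are combined into a carbon flux, though not spelled out in this article, is to multiply a remotely sensed SO2 flux (for example from a DOAS instrument) by the CO2/SO2 ratio measured in the plume (for example by a Multi-GAS station). The sketch below only illustrates that arithmetic; the flux and ratio values are hypothetical, not DECADE measurements.
M_CO2 = 44.01  # molar mass of CO2, g/mol
M_SO2 = 64.07  # molar mass of SO2, g/mol

so2_flux_t_per_day = 500.0    # SO2 flux from scanning DOAS, tonnes/day (assumed value)
co2_so2_molar_ratio = 4.0     # plume CO2/SO2 molar ratio from Multi-GAS (assumed value)

# Scale the SO2 flux by the molar ratio, converting moles to mass via the molar masses.
co2_flux_t_per_day = so2_flux_t_per_day * co2_so2_molar_ratio * (M_CO2 / M_SO2)
print(round(co2_flux_t_per_day))  # roughly 1370 tonnes of CO2 per day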
History
The DECADE initiative was conceived in September 2011 by the International Association of Volcanology and Chemistry of the Earth's Interior Commission on the Chemistry of Volcanic Gases during its 11th field workshop, where the charge of the initiative was broadly defined and the governance structure established. DECADE receives financial support from the Deep Carbon Observatory to meet the project goals, with support distributed to DECADE members based on project proposal submission and external review and/or consensus by the board of directors. All projects are substantially matched by funding from the individual investigators or other funding agencies. The initiative is led by a board of directors with nine members, including one chair and two co-vice chairs. Currently, the DECADE initiative has around 80 members from 13 countries.
Achievements
, major achievements supported or partially supported by the DECADE initiative include:
Modification of the IEDA EarthChem database to include volcanic gas composition and gas flux data.
Instrumenting 9 volcanoes (Masaya Volcano, Turrialba Volcano, Poás Volcano, Nevado del Ruiz, Galeras, Villarrica (instruments destroyed by eruption), Popocatépetl, Mount Merapi, Whakaari / White Island) with permanent multi-component gas analyzer system (Multi-GAS) stations for near continuous CO2 and SO2 measurements and near continuous SO2 flux measurements using miniDOAS.
Quantification of volcanic gas emissions and compositions from remote regions such as the Aleutian, Vanuatu and Papua New Guinea volcanic arcs.
First measurements of gas emissions from the Mount Bromo and Anak Krakatau volcanoes, Indonesia.
Establishing volcanic gas chemical changes as eruption precursors at Poás and Turrialba Volcanoes, Costa Rica.
Airborne sampling of volcanic plumes for carbon isotopes and analyses using Delta Ray Infrared Isotope Spectrometer.
Determination of diffuse CO2 degassing in the Azores.
Quantification of global CO2 emissions from volcanoes during eruptions, passive degassing and diffuse degassing.
Volcanoes
The following volcanoes are currently monitored by the DECADE initiative:
Map of the DCO DECADE project volcano installations
See also
References
External links
Deep Earth Carbon Degassing
Earthchem/petdb The Petrological Database
Global Volcanism Program
Volcanism
Geophysics
Carbon | Deep Earth Carbon Degassing Project | Physics | 1,049 |
40,971,412 | https://en.wikipedia.org/wiki/C23H28N2O4 | The molecular formula C23H28N2O4 may refer to:
Pacrinolol, a beta adrenergic receptor antagonist
Pleiocarpine, an anticholinergic alkaloid
Molecular formulas | C23H28N2O4 | Physics,Chemistry | 62 |
344,136 | https://en.wikipedia.org/wiki/Electrical%20termination | In electronics, electrical termination is the practice of ending a transmission line with a device that matches the characteristic impedance of the line. Termination prevents signals from reflecting off the end of the transmission line. Reflections at the ends of unterminated transmission lines cause distortion, which can produce ambiguous digital signal levels and misoperation of digital systems. Reflections in analog signal systems cause such effects as video ghosting, or power loss in radio transmitter transmission lines.
Transmission lines
Signal termination often requires the installation of a terminator at the beginning and end of a wire or cable to prevent an RF signal from being reflected back from each end, causing interference or power loss. The terminator is usually placed at the end of a transmission line or daisy chain bus (such as in SCSI), and is designed to match the AC impedance of the cable and hence minimize signal reflections and power losses. Less commonly, a terminator is also placed at the driving end of the wire or cable, if not already part of the signal-generating equipment.
Radio frequency currents tend to reflect from discontinuities in the cable, such as connectors and joints, and travel back down the cable toward the source, causing interference known as primary reflections. Secondary reflections can also occur at the start of the cable, allowing the interference to persist as repeated echoes of old data. These reflections also act as bottlenecks, reducing the signal power that reaches the destination.
Transmission line cables require impedance matching to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission line cables is that they have uniform cross-sectional dimensions along their length, giving them a uniform electrical characteristic impedance. Signal terminators are designed to specifically match the characteristic impedances at both cable ends. For many systems, the terminator is a resistor, with a value chosen to match the characteristic impedance of the transmission line and chosen to have acceptably low parasitic inductance and capacitance at the frequencies relevant to the system. Examples include 75-ohm resistors often used to terminate 75-ohm video transmission coaxial cables.
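The quality of a termination can be quantified by the reflection coefficient, which compares the load impedance with the line's characteristic impedance. The short sketch below illustrates the standard formula; the example impedances are chosen to match cases mentioned elsewhere in this article.
def reflection_coefficient(z_load, z0):
    # Fraction of the incident wave reflected by a termination:
    # 0 means a perfect match, 1 (or -1) means total reflection.
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(75, 75))             # 0.0  -> matched 75 ohm video termination
print(reflection_coefficient(75, 50))             # 0.2  -> 75 ohm cable on a 50 ohm system
print(round(reflection_coefficient(1e9, 50), 6))  # ~1.0 -> unterminated (open) end of a bus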
Types of transmission line cables include balanced lines, such as ladder line and twisted pair (Cat-6 Ethernet, parallel SCSI, ADSL, landline telephone, XLR audio, USB, FireWire, serial), and unbalanced lines, such as coaxial cable (radio antenna feedlines, CATV, 10BASE5 Ethernet).
Types of electrical and signal terminators
Passive
Passive terminators often consist of a single resistor; however, significantly reactive loads may require other passive components such as inductors, capacitors, or transformers.
Active
Active terminators consist of a voltage regulator that keeps the voltage used for the terminating resistor(s) at a constant level.
Forced perfect termination
Forced perfect termination (FPT) can be used on single-ended buses, where diodes remove overshoot and undershoot conditions. The signal is clamped between two actively regulated voltage levels, which results in better performance than a standard active terminator.
Signal termination applications
SCSI
All parallel SCSI units use terminators. SCSI is primarily used for storage and backup. An active terminator is a type of single-ended SCSI terminator with a built-in voltage regulator to compensate for variations in terminator power.
Controller Area Network
Controller area network, commonly known as CAN Bus, uses terminators consisting of a 120 ohm resistor.
Dummy load
Dummy loads are commonly used in HF to EHF circuits.
Ethernet coaxial 50 ohm
10BASE2 networks must be properly terminated at each end of the bus with a 50 ohm BNC terminator. If the bus is not properly terminated, too much power is reflected, which can cause all of the computers on the bus to lose network connectivity.
Antenna network 75 ohm
A terminating resistor for a television coaxial cable is often in the form of a cap, threaded to screw onto an F connector. Antenna cables are sometimes used for internet connections; however, RG-6 should not be used for 10BASE2 (which should use RG-58) as the impedance mismatch can cause phasing problems with the baseband signal.
Unibus
The Digital Equipment Corporation minicomputer Unibus systems used terminator cards with 178 Ω pull-up resistors on the multi-drop address and data lines and 383 Ω on the single-drop signal lines.
MIL-STD-1553
Terminating resistor values of 78.7 ohms 2 watt 1% are used on the MIL-STD-1553 bus. At the two ends of the bus, resistors connect between the positive (high) and negative (low) signal wires either in internally terminated bus couplers or external connectorized terminators.
The MIL-STD-1553B bus must be terminated at both ends to minimize the effects of signal reflections that can cause waveform distortion and disruption or intermittent communications failures.
Optionally, a high-impedance terminator (1000 to 3000 ohms) may be used in vehicle applications to simulate a future load from an unspecified device.
Connectorized terminators are available with or without safety chains.
See also
Electrical connector
Electrical network
MIL-STD-1553
Telecommunications pedestal
References
Electronic circuits
SCSI | Electrical termination | Engineering | 1,078 |
34,846,832 | https://en.wikipedia.org/wiki/Creation%20and%20evolution%20in%20public%20education%20in%20the%20United%20States | In American schools, the Genesis creation narrative was generally taught as the origin of the universe and of life until Darwin's scientific theories became widely accepted. While there was some immediate backlash, organized opposition did not get underway until the Fundamentalist–Modernist controversy broke out following World War I; several states passed laws banning the teaching of evolution while others debated them but did not pass them. The Scopes Trial was the result of a challenge to the law in Tennessee. Scopes lost his case, and further U.S. states passed laws banning the teaching of evolution.
In 1968, the U.S. Supreme Court ruled on Epperson v. Arkansas, another challenge to these laws, and the court ruled that allowing the teaching of creation, while disallowing the teaching of evolution, advanced a religion, and therefore violated the Establishment Clause of the U.S. Constitution. Creationists then started lobbying to have laws passed that required teachers to Teach the Controversy, but this was also struck down by the Supreme Court in 1987 in Edwards v. Aguillard. Creationists then moved to frame the issue as one of intelligent design, but this too was ruled against in a District Court in Kitzmiller v. Dover Area School District in 2005. Since December 2005, Google Trends data show that the popularity of search queries for intelligent design in Google Search has declined markedly from its peak in November 2004.
As of 2024, all fifty U.S. states and the District of Columbia include the teaching of evolution in their public school science standards, while none teaches intelligent design, and creationism is discussed in non-science classes, such as philosophy, comparative religion, or current affairs.
History
Early law
Until the late 19th century, the Genesis creation narrative was taught in nearly all schools in the United States, often from the position that the literal interpretation of the Bible is inerrant. With the widespread acceptance of the scientific theory of biological evolution in the 1860s, after its introduction in 1859, and with developments in other fields such as geology and astronomy, public schools began to teach science that most people reconciled with Christianity. Human evolution, unlike animal evolution, was nevertheless considered by a number of early fundamentalists to be directly at odds with the Bible.
In the aftermath of World War I, the Fundamentalist–Modernist Controversy brought a surge of opposition to the idea of biological human evolution, and following the campaigning of William Jennings Bryan several states introduced legislation prohibiting the teaching of biological human evolution in public schools. Such legislation was voted on and defeated in 1922 in Kentucky and South Carolina and voted on and defeated in 1923 in Oklahoma and Florida. On March 13, 1925, the Tennessee House of Representatives passed the Butler Act, which prohibited the teaching of human evolution in public schools, with a vote of 71–5. On March 16, 1925, the Tennessee Senate approved the Butler Act with a vote of 24–6. On March 21, 1925, Governor Austin Peay signed the Butler Act into law, and it took effect immediately on the same day. Violating the Butler Act was a misdemeanor, punishable by a fine of $100 to $500.
The American Civil Liberties Union (ACLU) offered to defend anyone who wanted to bring a test case against one of these laws. John T. Scopes accepted, and he started teaching his class human evolution, in defiance of the Tennessee law. On May 5, 1925, Scopes was arrested for violating the Butler Act. On July 10, 1925, the trial, known as the Scopes Monkey Trial, began and on July 21, 1925, Scopes was found guilty by the jury and convicted by the judge. He was fined $100. The resulting trial was widely publicized by H. L. Mencken and others. Nobody else was ever arrested or convicted and the Butler Act remained unenforced for its remaining duration. On November 24, 1925, the Texas State Board of Education adopted a policy that mandated that textbooks used in public schools should not teach or mention the theory of human evolution, but the enforcement of this policy was not entirely uniform. Some textbooks used in Texas schools still included references to human evolution, though these were often minimized or presented in a way that downplayed the theory.
On March 12, 1926, the Mississippi House of Representatives passed an anti-human evolution law with a vote of 78–16. On March 16, 1926, the Mississippi Senate approved the law with a vote of 27–12. On March 18, 1926, Governor Dennis Murphree signed the law, which took effect immediately. The penalty for violating this law was a fine of up to $500. Nobody was ever arrested under the law and it remained unenforced for its remaining duration. On January 15, 1927, in the case of Scopes v. State, the Tennessee Supreme Court reversed, by a vote of 3–1, the conviction of John T. Scopes on a technicality, ruling that the judge, rather than the jury, had imposed the $100 fine, which violated Tennessee law. However, the court upheld, by a vote of 4–0, the constitutionality of the Butler Act itself, holding:
We are not able to see how the prohibition of teaching the theory that man has descended from a lower order of animals gives preference to any religious establishment or mode of worship. So far as we know there is no religious establishment or organized body that has its creed or confession of faith any article denying or affirming such a theory. — John Thomas Scopes v. The State 154 Tenn. 105, 289 S.W. 363 (1927)
The interpretation of the Establishment Clause of the First Amendment up to that time was that Congress could not establish a particular religion as the State religion. Consequently, the Court held that the ban on the teaching of evolution did not violate the Establishment Clause, because it did not establish one religion as the "State religion." As a result of the holding, the teaching of evolution remained illegal in Tennessee, and continued campaigning succeeded in removing evolution from school textbooks throughout the United States.
On February 10, 1928, the Arkansas House of Representatives passed a law banning the teaching of human evolution in public schools with a vote of 75–18. On February 14, 1928, the Arkansas Senate passed the same anti-human evolution law with a vote of 23–7. On March 16, 1928, Governor John Ellis Martineau signed the law, which took effect immediately. The penalty for violating the law was a fine of $500. Nobody was ever arrested under the law and it remained unenforced for its remaining duration. In 1947, the Texas State Board of Education reversed its policy of discouraging or minimizing the teaching of human evolution in public school textbooks.
Modern legal cases
On April 1, 1967, the Tennessee House of Representatives voted to repeal the Butler Act by a vote of 69–20. On May 17, 1967, the Tennessee Senate passed the repeal by a vote of 19–13. On May 18, 1967, Governor Buford Ellington signed the repeal into law and it took effect on the same day. On January 24, 1969, the Arkansas House of Representatives voted to repeal the anti-human evolution law by a vote of 65–25. On January 30, 1969, the Arkansas Senate passed the repeal by a vote of 22–11. On February 3, 1969, Governor Winthrop Rockefeller signed the repeal into law and it took effect the same day. On February 12, 1970, the Mississippi House of Representatives voted to repeal the anti-human evolution law by a vote of 85–30. On February 16, 1970, the Mississippi Senate passed the repeal by a vote of 30–10. On February 20, 1970, Governor John Bell Williams signed the repeal into law and it took effect that same day.
In 1967, the Tennessee public schools were threatened with another lawsuit over the Butler Act's constitutionality, and, fearing public reprisal, Tennessee's legislature repealed the Butler Act. In the following year, the Supreme Court of the United States ruled in Epperson v. Arkansas (1968) that Arkansas's law prohibiting the teaching of evolution was in violation of the First Amendment. The Supreme Court held that the Establishment Clause prohibits the state from advancing any religion, and determined that the Arkansas law which allowed the teaching of creation while disallowing the teaching of evolution advanced a religion, and was therefore in violation of the Establishment Clause. This holding reflected a broader understanding of the Establishment Clause: instead of just prohibiting laws that established a state religion, the clause was interpreted to prohibit laws that furthered any particular religion over others. Opponents, pointing to the previous decision, argued that this amounted to judicial activism.
In reaction to the Epperson case, creationists in Louisiana passed a law requiring that public schools should give "equal time" to "alternative theories" of origin. The Supreme Court ruled in 1987 in Edwards v. Aguillard that the Louisiana statute, which required creation to be taught alongside evolution every time evolution was taught, was unconstitutional.
The Court laid out its rule in Edwards as follows:
The Establishment Clause forbids the enactment of any law 'respecting an establishment of religion.' The Court has applied a three-pronged test to determine whether legislation comports with the Establishment Clause. First, the legislature must have adopted the law with a secular purpose. Second, the statute's principal or primary effect must be one that neither advances nor inhibits religion. Third, the statute must not result in an excessive entanglement of government with religion. Lemon v. Kurtzman, 403 U.S. 602, 612–613, 91 S.Ct. 2105, 2111, 29 L.Ed.2d 745 (1971). State action violates the Establishment Clause if it fails to satisfy any of these prongs. — Edwards v. Aguillard
The Court held that the law was not adopted with a secular purpose, because its purported purpose of "protecting academic freedom" was not furthered by limiting the freedom of teachers to teach what they thought appropriate; ruled that the act was discriminatory because it provided certain resources and guarantees to "creation scientists" which were not provided to those who taught evolution; and ruled that the law was intended to advance a particular religion because several state senators that had supported the bill stated that their support for the bill stemmed from their religious beliefs.
While the Court held that creationism is an inherently religious belief, it did not hold that every mention of creationism in a public school is unconstitutional:
We do not imply that a legislature could never require that scientific critiques of prevailing scientific theories be taught. Indeed, the Court acknowledged in Stone that its decision forbidding the posting of the Ten Commandments did not mean that no use could ever be made of the Ten Commandments, or that the Ten Commandments played an exclusively religious role in the history of Western Civilization. 449 U.S., at 42, 101 S.Ct., at 194. In a similar way, teaching a variety of scientific theories about the origins of humankind to schoolchildren might be validly done with the clear secular intent of enhancing the effectiveness of science instruction. But because the primary purpose of the Creationism Act is to endorse a particular religious doctrine, the Act furthers religion in violation of the Establishment Clause. — Edwards v. Aguillard
Intelligent Design and Kitzmiller v. Dover Area School District
The ruling was one in a series of developments addressing issues related to the American creationist movement and the separation of church and state. The scope of the ruling affected state schools and did not include independent schools, home schools, Sunday schools and Christian schools, all of whom remained free to teach creationism.
Within two years of the Edwards ruling a creationist textbook was produced: Of Pandas and People (1989), which attacked evolutionary biology without mentioning the identity of the supposed "intelligent designer." Drafts of the text used "creation" or "creator" before being changed to "intelligent design" or "designer" after the Edwards v. Aguillard ruling. This form of creationism, known as intelligent design creationism, was developed in the early 1990s.
This would eventually lead to another court case, Kitzmiller v. Dover Area School District, which went to trial on September 26, 2005, and was decided in U.S. District Court on December 20, 2005, in favor of the plaintiffs, who charged that a mandate that intelligent design (ID) be taught was an unconstitutional establishment of religion. The opinion of Kitzmiller v. Dover was hailed as a landmark decision, firmly establishing that creationism and intelligent design were religious teachings and not areas of legitimate scientific research. Because the Dover Area School Board chose not to appeal, the case never reached a circuit court or the U.S. Supreme Court.
Just as it is permissible to discuss the crucial role of religion in medieval European history, creationism may be discussed in a civics, current affairs, philosophy, or comparative religions class where the intent is to factually educate students about the diverse range of human political and religious beliefs. The line is crossed only when creationism is taught as science.
Movements to teach creationism in schools
There continue to be numerous efforts to introduce creationism in U.S. classrooms. One strategy is to declare that evolution is itself a religion, and therefore either that it should not be taught in the classroom, or that creationism can then equally well be taught alongside it.
In the 1980s, UC Berkeley law professor Phillip E. Johnson began reading the scientific literature on evolution. This led him to author Darwin on Trial (1991), which examined the evidence for evolution from a religious point of view and challenged the assumption that the only reasonable explanation for the origin of species must be a naturalistic one. This book, and his subsequent efforts to encourage and coordinate creationists with more scientific credentials, was the start of the intelligent design movement. Intelligent design asserts that there is evidence that life was created by an "intelligent designer" (mainly that the physical properties of living organisms are so complex that they must have been "designed"). Proponents claim that intelligent design takes "all available facts" into account rather than just those available through naturalism. Opponents assert that intelligent design is a pseudoscience because its claims cannot be tested by experiment (see falsifiability) and do not propose any new hypotheses.
Many proponents of the intelligent design movement support requiring that it be taught in the public schools. For example, the Discovery Institute (DI), a conservative think tank, and Phillip E. Johnson support the policy of "Teach the Controversy," which entails presenting to students evidence for and against evolution, and then encouraging students to evaluate that evidence themselves.
While many proponents of intelligent design believe that it should be taught in schools, others believe that legislation is not appropriate. Answers in Genesis (AiG) has said:
"AiG is not a lobby group, and we oppose legislation for compulsion of creation teaching. ...why would we want an atheist forced to teach creation and give a distorted view? But we would like legal protection for teachers who present scientific arguments against the sacred cow of evolution such as staged pictures of peppered moths and forged embryo diagrams."
Position of Teaching and Scientific Societies
The National Science Teachers Association is opposed to teaching creationism as a science, as is the Association for Science Teacher Education, the National Association of Biology Teachers, the American Anthropological Association, the American Geosciences Institute, the Geological Society of America, the American Geophysical Union, and numerous other professional teaching and scientific societies.
Recent developments in state education programs
Developments by state
Alabama
In 1996, the Alabama State Board of Education adopted a textbook sticker that was a disclaimer about evolution. It has since been revised and moderated. In September 2015, the Alabama State Board of Education unanimously approved that evolution and climate change should be required material for the state educational curriculum, these changes to be implemented by 2016. At the same time, a referendum was set for potentially removing the textbook disclaimers.
Arizona
In January 2013, Arizona's Senate Bill 1213 was approved, enabling teachers in the state's public schools to discuss "the scientific strengths and scientific weaknesses" of scientific subjects whose teaching "can cause controversy," including biological evolution, the chemical origins of life, global warming, and human cloning.
Arkansas
In March 2021, the Arkansas House passed House Bill 1701 by a vote of 72–21, which would have allowed public schools to teach intelligent design. The next month, however, the Arkansas Senate Education Committee rejected it by a vote of 3–3.
California
In August 2008 Judge S. James Otero ruled in favor of University of California in Association of Christian Schools International v. Roman Stearns agreeing with the university's position that various religious books on U.S. history and science, from A Beka Books and Bob Jones University Press, should not be used for college-preparatory classes. The case was filed in spring 2006 by Association of Christian Schools International (ACSI) against the University of California claiming religious discrimination over the rejection of five courses as college preparatory instruction. On August 8, 2008, Judge Otero entered summary judgment against plaintiff ACSI, upholding the University of California's standards. The university found the books "didn't encourage critical thinking skills and failed to cover 'major topics, themes and components' of U.S. history" and were thus ill-suited to prepare students for college.
Florida
On February 19, 2008, the Florida State Board of Education adopted new science standards in a 4–3 vote. The new science curriculum standards explicitly require the teaching of the "scientific theory of evolution," whereas the previous standards only referenced evolution using the words "change over time."
Georgia
In 2002, in the case Selman v. Cobb County School District (2006), six parents in Cobb County, Georgia, sued to have removed from public school textbooks a sticker describing evolution as "a theory, not a fact."
Defense attorney E. Linwood Gunn IV said, "The only thing the school board did is acknowledge there is a potential conflict [between the science of evolution and creationism] and there is a potential infringement on people's beliefs if you present it in a dogmatic way. We're going to do it in a respectful way." Gerald R. Weber, legal director of the ACLU of Georgia, said, "The progress of church-state cases has been that the [U.S.] Supreme Court sets a line, then government entities do what they can to skirt that line. ... Here the Supreme Court has said you can't teach creationism in the public schools. You can't have an equal-time provision for evolution and creationism. These disclaimers are a new effort to skirt the line." Jefferey Selman, who brought the lawsuit, claims, "It singles out evolution from all the scientific theories out there. Why single out evolution? It has to be coming from a religious basis, and that violates the separation of church and state." The Cobb County Board of Education said it adopted the sticker "to foster critical thinking among students, to allow academic freedom consistent with legal requirements, to promote tolerance and acceptance of diversity of opinion, and to ensure a posture of neutrality toward religion."
On January 13, 2005, a federal judge in Atlanta ruled that the stickers should be removed as they violated the Establishment Clause of the First Amendment. The Board subsequently decided to appeal the decision. In comments on December 15, 2005, in advance of releasing its decision, the appeal court panel appeared critical of the lower court ruling and a judge indicated that he did not understand the difference between evolution and abiogenesis.
On December 19, 2006, the Board abandoned all of its legal activities and will no longer mandate that biology texts contain a sticker stating "evolution is a theory, not a fact." Their decision was a result of compromise negotiated with a group of parents, represented by the ACLU, that were opposed to the sticker. The parents agreed, as their part of the compromise, to withdraw their legal actions against the Board.
Kansas
On August 11, 1999, by a 6–4 vote the Kansas State Board of Education changed their science education standards to remove any mention of "biological macroevolution, the age of the Earth, or the origin and early development of the universe," so that evolutionary theory no longer appeared in statewide standardized tests and "it was left to the 305 local school districts in Kansas whether or not to teach it." This decision was hailed by creationists, and sparked a statewide and nationwide controversy with scientists condemning the change. Challengers in the state's Republican primary who made opposition to the anti-evolution standards their focus were voted in on August 1, 2000, so on February 14, 2001, the Board voted 7–3 to reinstate the teaching of biological evolution and the origin of the earth into the state's science education standards.
In 2004, the Board elections gave religious conservatives a majority and, influenced by the Discovery Institute, they arranged the Kansas evolution hearings. On August 9, 2005, the Board drafted new "science standards that require critical analysis of evolution – including scientific evidence refuting the theory," which opponents analyzed as effectively stating that intelligent design should be taught. The new standards also provide a definition of science that does not preclude supernatural explanations, and were approved by a 6–4 vote on November 8, 2005, incidentally the day of the Dover Area School Board election which failed to re-elect incumbent creationists (see Pennsylvania below).
In Kansas' state Republican primary elections on August 1, 2006, moderate Republicans took control away from the anti-evolution conservatives, leading to an expectation that science standards which effectively embraced intelligent design and cast doubt on Darwinian evolution would now be changed.
On February 13, 2007, the Board approved a new curriculum which removed any reference to intelligent design as part of science. In the words of Bill Wagnon, the board chairman, "Today the Kansas Board of Education returned its curriculum standards to mainstream science." The new curriculum, as well as a document outlining the differences with the previous curriculum, has been posted on the Kansas State Department of Education's website.
In June 2013, Kansas adopted the national Next Generation Science Standards, which teaches evolution as a fundamental principle of life sciences.
Kentucky
In October 1999, the Kentucky Department of Education replaced the word "evolution" with "change over time" in state school standards.
Louisiana
On June 12, 2008, a bill (SB561) named the "Louisiana Academic Freedom Act" passed into law.
Ohio
In 2002, proponents of intelligent design asked the Ohio State Board of Education to adopt intelligent design as part of its standard biology curriculum, in line with the guidelines of the Edwards v. Aguillard holding. In December 2002, the Board adopted a proposal that required critical analysis of evolution, but did not specifically mention intelligent design. This decision was reversed in February 2006 following both the conclusion of the Dover lawsuit and repeated threats of lawsuit against the Board.
Pennsylvania
In 2004, the Dover Area School Board voted that a statement must be read to students of 9th grade biology mentioning intelligent design. This resulted in a firestorm of criticism from scientists and science teachers and caused a group of parents to begin legal proceedings (sometimes referred to as the Dover Panda Trial) to challenge the decision, based on their interpretation of the Aguillard precedent. Supporters of the school board's position noted that the Aguillard holding explicitly allowed for a variety of what they consider "scientific theories" of origins for the secular purpose of improving scientific education. Others have argued that intelligent design should not be allowed to use this "loophole." On November 8, 2005, the members of the Board in Dover were voted out and replaced by evolutionary theory supporters. This had no bearing on the case. On December 20, 2005, federal judge John E. Jones III ruled that the Dover Area School Board had violated the Constitution when they set their policy on teaching intelligent design, and stated that "In making this determination, we have addressed the seminal question of whether ID is science. We have concluded that it is not, and moreover that ID cannot uncouple itself from its creationist, and thus religious, antecedents."
Tennessee
On April 10, 2012, a bill (HB 368/SB 893) was passed into law protecting "teachers who explore the 'scientific strengths and scientific weaknesses' of evolution and climate change." Science education advocates said the law could make it easier for creationism and global warming denial to enter U.S. classrooms. Brenda Ekwurzel of the Union of Concerned Scientists saw it as a risk to education, saying: "We need to keep kids' curiosity about science alive and not limit their ability to understand the world around them by exposing them to misinformation." The passage of the law was praised by proponents of intelligent design.
Texas
On November 7, 2007, the Texas Education Agency (TEA) director of science curriculum Christine Comer was forced to resign over an e-mail she had sent announcing a talk given by an anti-intelligent design author. In a memo obtained under the Texas Public Information Act, TEA officials wrote "Ms. Comer's e-mail implies endorsement of the speaker and implies that TEA endorses the speaker's position on a subject on which the agency must remain neutral." In response over 100 biology professors from Texas universities signed a letter to the state education commissioner denouncing the requirement to be neutral on the subject of intelligent design. The 2017 science curriculum eliminated language that openly questioned evolution, but still leaves room for teaching creationism.
In July 2011, the Texas State Board of Education (SBOE), which oversees the Texas Education Agency, did not approve anti-evolution instructional materials submitted by International Databases, LLC, while continuing to approve materials from mainstream publishers.
Virginia
Despite proponents' urging that intelligent design be included in the school system's science curriculum, the school board of Chesterfield County Public Schools in Virginia decided on May 23, 2007, to approve science textbooks for middle and high schools which do not include the idea of intelligent design. However, during the board meeting a statement was made that their aim was self-directed learning which "occurs only when alternative views are explored and discussed," and directed that professionals supporting curriculum development and implementation are to be required "to investigate and develop processes that encompass a comprehensive approach to the teaching and learning" of the theory of evolution, "along with all other topics that raise differences of thought and opinion." During the week before the meeting, one of the intelligent design proponents claimed that "Students are being excluded from scientific debate. It's time to bring this debate into the classroom," and presented A Scientific Dissent from Darwinism.
In 2017, Bertha Vazquez, a middle school science teacher and director of the Teacher Institute for Evolutionary Science at the Richard Dawkins Foundation for Reason and Science, published a comparison of the nation's middle school science standards.
Polls
In 2000, a poll commissioned by People for the American Way found that among Americans:
29% believe public schools should teach evolution in science class but can discuss creationism there as a belief;
20% believe public schools should teach evolution only;
17% believe public schools should teach evolution in science class and religious theories elsewhere;
16% believe public schools should teach creation only;
13% believe public schools should teach both evolution and creationism in science class;
4% believe public schools should teach both but are not sure how.
In 2006, a poll conducted by Zogby International and commissioned by the Discovery Institute found that voters surveyed favored, by more than three to one, the option that biology teachers should teach Darwin's theory of evolution but also "the scientific evidence against it." Approximately seven in ten (69%) sided with this view. In contrast, one in five (21%) chose the other option given, that biology teachers should teach only Darwin's theory of evolution and the scientific evidence that supports it. One in ten was not sure.
A 2019 Gallup creationism survey found that 40% of adults in the United States held the belief that "God created humans in their present form at one time within the last 10,000 years" when asked about the origin and development of human beings. 22% believed that "human beings have developed over millions of years from less advanced forms of life, but God had no part in this process".
Teachers have also been polled. In 2019, following up on a 2007 survey, teachers reported increasing numbers of hours spent teaching evolution, and more teachers were likely to emphasize broad scientific consensus on evolution and not give credence to creationism. The results also suggested that personally creationist teachers were less likely to be represented among public high school biology teachers. Part, but not all, of the explanation involves adoption in at least twenty states of the Next Generation Science Standards.
U.S. legal quotations
Epperson v. Arkansas (1968):
...the First Amendment does not permit the state to require that teaching and learning must be tailored to the principles or prohibitions of any religious sect or dogma...the state has no legitimate interest in protecting any or all religions from views distasteful to them.
McLean v. Arkansas (1982), the judge wrote that creation scientists:
...cannot properly describe the methodology used as scientific, if they start with a conclusion and refuse to change it regardless of the evidence developed during the course of the investigation.
Edwards v. Aguillard (1987):
...Because the primary purpose of the Creationism Act is to advance a particular religious belief, the Act endorses religion in violation of the First Amendment.
Webster v. New Lenox School District (1990), the United States Court of Appeals for the Seventh Circuit stated:
If a teacher in a public school uses religion and teaches religious beliefs or espouses theories clearly based on religious underpinnings, the principles of the separation of church and state are violated as clearly as if a statute ordered the teacher to teach religious theories such as the statutes in Edwards did.
Peloza v. Capistrano School District (1994), the United States Court of Appeals for the Ninth Circuit wrote:
The Supreme Court has held unequivocally that while belief in a Divine Creator of the universe is a religious belief, the scientific theory that higher forms of life evolved from lower ones is not.
Kitzmiller v. Dover Area School District (2005):
The proper application of both the endorsement and Lemon tests to the facts of this case makes it abundantly clear that the Board's ID Policy violates the Establishment Clause. In making this determination, we have addressed the seminal question of whether ID is science. We have concluded that it is not, and moreover that ID cannot uncouple itself from its creationist, and thus religious, antecedents.
See also
A Scientific Support for Darwinism
Clergy Letter Project
Creation and evolution in public education
National Center for Science Education
Project Steve
Rejection of evolution by religious groups
Science, Evolution, and Creationism
"A Scientific Dissent from Darwinism"
References
External links
National Center for Science Education
Creation education materials and articles at Answers in Genesis
Education controversies in the United States
Creationism
Evolution and religion
Public education in the United States
Religious controversies in the United States
Textbook controversies | Creation and evolution in public education in the United States | Biology | 6,403 |
14,095,325 | https://en.wikipedia.org/wiki/Dispersant | A dispersant or a dispersing agent is a substance, typically a surfactant, that is added to a suspension of solid or liquid particles in a liquid (such as a colloid or emulsion) to improve the separation of the particles and to prevent their settling or clumping.
Dispersants are widely used to stabilize various industrial and artisanal products, such as paints, ferrofluids, and salad dressings. Plasticizers and superplasticizers, used to improve the workability of pastes such as concrete and clay, are typically dispersants. The concept also largely overlaps with that of detergent, used to bring oily contamination into water suspension, and of emulsifier, used to create homogeneous mixtures of immiscible liquids like water and oil. Natural suspensions like milk and latex contain substances that act as dispersants.
Applications
Automotive
Automotive engine oils contain both detergents and dispersants. Metallic-based detergents prevent the accumulation of varnish like deposits on the cylinder walls. They also neutralize acids. Dispersants maintain contaminants in suspension.
Dispersants added to gasoline prevent the buildup of gummy residues.
Bio-dispersing
Dispersants are used to prevent formation of biofouling or biofilms in industrial processes. It is also possible to disperse bacterial slime and increase the efficiency of biocides.
Concrete and stucco
Dispersants are used as plasticizers or superplasticizers in concrete formulations to lower the use of water while retaining the needed slump (flow) property. A lower water content makes the concrete stronger and more impervious to water penetration.
Similarly, dispersants are used as plasticizers in the gypsum slurry during wallboard manufacture, to reduce the amount of water used. The lower water usage allows lower energy use to dry the wallboard.
Detergents
Dispersing is the principal goal in the use of detergents, for which the liquid bath is water (detergents are also used as emulsifiers in some applications). Laundry detergents encase dirt and grime in micelles, which naturally disperse.
Oil drilling
Dispersants in oil drilling aid in breaking up solids or liquids as fine particles or droplets into another medium. This term is often applied incorrectly to clay deflocculants. Clay dispersants prevent formation of "fish-eye" globules. For dispersing (emulsification) of oil into water (or water into oils), surfactants selected on the basis of hydrophilic-lipophilic balance (HLB) number can be used. For foam drilling fluids, synthetic detergents and soaps are used, along with polymers, to disperse foam bubbles into the air or gas.
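For nonionic surfactants, the HLB number mentioned above is often estimated with Griffin's method, which scales the hydrophilic fraction of the molecule to a 0–20 range. The sketch below shows that arithmetic; the example molecular masses are assumptions, not data from this article.
def hlb_griffin(hydrophilic_mass, total_mass):
    # Griffin's method for nonionic surfactants: HLB = 20 * Mh / M.
    return 20.0 * hydrophilic_mass / total_mass

# Hypothetical surfactant with a 220 g/mol hydrophilic head in a 550 g/mol molecule.
print(hlb_griffin(220.0, 550.0))  # 8.0, an intermediate value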
Oil spill
Dispersants can be used to dissipate oil slicks. They may rapidly disperse large amounts of certain oil types from the sea surface by transferring them into the water column. They cause the oil slick to break up into water-soluble micelles that are rapidly diluted, so that the oil is effectively spread through a larger volume of water than the surface from which it was dispersed. They can also delay the formation of persistent oil-in-water emulsions. However, laboratory experiments showed that dispersants increased toxic hydrocarbon levels in fish by a factor of up to 100 and may kill fish eggs.
The dispersant Corexit 9527 was used, for example, to disperse the 1979 Ixtoc oil slick in the Gulf of Mexico over one thousand square miles of sea. The same dispersant was also used in an attempt to clean up the Exxon Valdez oil spill in 1989, though its use was discontinued because there was not enough wave action to mix the dispersant with the oil in the water. During the Deepwater Horizon oil spill in 2010, unprecedented amounts of the dispersants Corexit 9500 and 9527 were used (approximately 7 million liters).
Process industry
In the process industry dispersing agents are added to process liquids to prevent unwanted deposits by keeping them finely dispersed. They function in both aqueous and nonaqueous media.
Surface coating
In order to provide optimal performance, pigment particles must act independently of each other in the coating film and thus must remain well dispersed throughout manufacture, storage, application, and film formation. Unfortunately, colloidal dispersions such as the pigment dispersions in liquid coatings are inherently unstable, and they must be stabilized against the flocculation that might occur.
See also
Plasticizer
Deflocculant
Detergent
Surfactant
Superplasticizer
Suspension (chemistry)
Solubilization
References
Colloidal chemistry
Fouling
Process chemicals
Oil spill remediation technologies
Solvents
Heterogeneous chemical mixtures | Dispersant | Chemistry,Materials_science | 1,012 |
22,003,959 | https://en.wikipedia.org/wiki/Albatross%20expedition | The Albatross expedition (Albatrossexpeditionen) was a Swedish oceanographic expedition that between July 4, 1947, and October 3, 1948, sailed around the world during 15 months covering 45 000 nautical miles. The expedition is considered the second largest Swedish research expedition after the Vega expedition. The expedition was very successful, received international attention, and is considered one of the important steps in the history of oceanography.
The Albatross
The expedition was carried out on board the newly built training ship Albatross. The 70 meter long and 11 meter wide vessel was a combined motor and sailing vessel. The Boström line (Broströmskoncernen) had just built the student ship to train prospective ship's officers and this vessel with associated crew was lent to the expedition.
Since the Boström line lent the ship at almost no cost, the expedition could be financed and carried out with only private donations. The leader of the expedition was Swedish physicist and oceanographer Hans Pettersson.
The main task of the expedition was to take up to 20 m long sediment cores from the ocean floor. This was done using a newly developed corer, known as the piston sampler, designed by Börje Kullenberg. Until then, the longest cores that could be taken were 2 m.
The expedition also carried out the first seismic reflection measurements of the sediment thickness, using sink bombs. The results of the sediment studies were ground-breaking since they revealed that the sediment thickness increased away from the mid-oceanic ridges, along with the sediment accumulation time. This was one of several pieces of evidence that eventually led to the acceptance of the theory of plate tectonics.
Apart from sediments, the expedition also studied biology. The first deep-sea trawling, at 7,600–7,900 m depth, revealed that those depths were not the dead zone they had previously been assumed to be.
Notes
Other sources
Hans Pettersson (1950) Med Albatross över havsdjupen (Stockholm: Bonnier)
Eric Olausson (1996) The Swedish Deep-Sea Expedition with the "Albatross" 1947-1948 (Novum, Grafiska AB)
Oceanography
Science and technology in Sweden
Oceanographic expeditions
Expeditions from Sweden | Albatross expedition | Physics,Environmental_science | 453 |
853,175 | https://en.wikipedia.org/wiki/Characterizations%20of%20the%20exponential%20function | In mathematics, the exponential function can be characterized in many ways.
This article presents some common characterizations, discusses why each makes sense, and proves that they are all equivalent.
The exponential function occurs naturally in many branches of mathematics. Walter Rudin called it "the most important function in mathematics".
It is therefore useful to have multiple ways to define (or characterize) it.
Each of the characterizations below may be more or less useful depending on context.
The "product limit" characterization of the exponential function was discovered by Leonhard Euler.
Characterizations
The six most common definitions of the exponential function for real values are as follows.
Product limit. Define by the limit:
Power series. Define as the value of the infinite series (Here denotes the factorial of . One proof that is irrational uses a special case of this formula.)
Inverse of logarithm integral. Define to be the unique number such that That is, is the inverse of the natural logarithm function , which is defined by this integral.
Differential equation. Define to be the unique solution to the differential equation with initial value: where denotes the derivative of .
Functional equation. The exponential function is the unique function with the multiplicative property for all and . The condition can be replaced with together with any of the following regularity conditions: For the uniqueness, one must impose some regularity condition, since other functions satisfying can be constructed using a basis for the real numbers over the rationals, as described by Hewitt and Stromberg.
Elementary definition by powers. Define the exponential function with base to be the continuous function whose value on integers is given by repeated multiplication or division of , and whose value on rational numbers is given by . Then define to be the exponential function whose base is the unique positive real number satisfying:
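For concreteness, the standard formulas for these six definitions are usually written as follows (a sketch of the conventional statements; the particular notation — exp, f, b, h — is chosen here for illustration):

\begin{align*}
\text{(1)}\quad & \exp(x) = \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n} \\
\text{(2)}\quad & \exp(x) = \sum_{n=0}^{\infty} \frac{x^{n}}{n!} \\
\text{(3)}\quad & \exp(x) = y, \ \text{where } y \text{ is the unique number with } \int_{1}^{y} \frac{dt}{t} = x \\
\text{(4)}\quad & y' = y, \qquad y(0) = 1 \\
\text{(5)}\quad & f(x + y) = f(x)\, f(y) \ \text{for all } x, y, \qquad f'(0) = 1 \\
\text{(6)}\quad & b^{p/q} = \sqrt[q]{b^{p}} \ \text{extended by continuity;}\quad e \ \text{is the unique } b > 0 \ \text{with } \lim_{h\to 0} \frac{b^{h} - 1}{h} = 1
\end{align*}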
Larger domains
One way of defining the exponential function over the complex numbers is to first define it for the domain of real numbers using one of the above characterizations, and then extend it as an analytic function, which is characterized by its values on any infinite domain set.
Also, characterisations (1), (2), and (4) for apply directly for a complex number. Definition (3) presents a problem because there are non-equivalent paths along which one could integrate; but the equation of (3) should hold for any such path modulo . As for definition (5), the additive property together with the complex derivative are sufficient to guarantee . However, the initial value condition together with the other regularity conditions are not sufficient. For example, for real x and y, the function satisfies the three listed regularity conditions in (5) but is not equal to . A sufficient condition is that and that is a conformal map at some point; or else the two initial values and together with the other regularity conditions.
One may also define the exponential on other domains, such as matrices and other algebras. Definitions (1), (2), and (4) all make sense for arbitrary Banach algebras.
Proof that each characterization makes sense
Some of these definitions require justification to demonstrate that they are well-defined. For example, when the value of the function is defined as the result of a limiting process (i.e. an infinite sequence or series), it must be demonstrated that such a limit always exists.
Characterization 1
The error of the product limit expression is described by:
where the polynomial's degree (in x) in the term with denominator n^k is 2k.
Characterization 2
Since
it follows from the ratio test that converges for all x.
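Written out, the ratio in question is (a standard computation, stated here for illustration):

\[
\left| \frac{x^{n+1}/(n+1)!}{x^{n}/n!} \right| \;=\; \frac{|x|}{n+1} \;\longrightarrow\; 0 \quad (n \to \infty),
\]

and since this ratio tends to a limit less than 1 for every real x, the series converges absolutely everywhere.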
Characterization 3
Since the integrand is an integrable function of , the integral expression is well-defined. It must be shown that the function from to defined by
is a bijection. Since is positive for positive , this function is strictly increasing, hence injective. If the two integrals
hold, then it is surjective as well. Indeed, these integrals do hold; they follow from the integral test and the divergence of the harmonic series.
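The two divergence statements invoked here are, in the usual notation (a standard form of the argument, not taken verbatim from the article):

\[
\int_{1}^{\infty} \frac{dt}{t} = \infty, \qquad \int_{0}^{1} \frac{dt}{t} = \infty,
\]

both of which follow by comparing the integrand with the terms of the divergent harmonic series \(\sum_{n} 1/n\).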
Characterization 6
The definition depends on the unique positive real number satisfying: This limit can be shown to exist for any , and it defines a continuous increasing function with and , so the Intermediate value theorem guarantees the existence of such a value .
Equivalence of the characterizations
The following arguments demonstrate the equivalence of the above characterizations for the exponential function.
Characterization 1 ⇔ characterization 2
The following argument is adapted from Rudin, theorem 3.31, p. 63–65.
Let be a fixed non-negative real number. Define
By the binomial theorem,
(using x ≥ 0 to obtain the final inequality) so that:
One must use lim sup because it is not known if t_n converges.
For the other inequality, by the above expression for t_n, if 2 ≤ m ≤ n, we have:
Fix m, and let n approach infinity. Then
(again, one must use lim inf because it is not known if t_n converges). Now, take the above inequality, let m approach infinity, and put it together with the other inequality to obtain:
so that
This equivalence can be extended to the negative real numbers by noting and taking the limit as n goes to infinity.
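As an illustrative numerical check of this equivalence, the product limit and the partial sums of the power series can be compared directly; the following short Python sketch (function names chosen here for illustration, not from any reference) prints both approximations alongside math.exp:

import math

def product_limit(x, n):
    # Characterization 1: (1 + x/n)^n for a given n.
    return (1 + x / n) ** n

def power_series(x, terms):
    # Characterization 2: partial sum of x^k / k! for k = 0 .. terms-1.
    return sum(x ** k / math.factorial(k) for k in range(terms))

x = 1.0
for n in (10, 100, 10_000, 1_000_000):
    print(f"(1 + {x}/{n})^{n} = {product_limit(x, n):.10f}")
print(f"30-term power series  = {power_series(x, 30):.10f}")
print(f"math.exp({x})         = {math.exp(x):.10f}")

# Both approximations approach the same value, 2.7182818285..., as n and the
# number of series terms grow, illustrating the equivalence argued above.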
Characterization 1 ⇔ characterization 3
Here, the natural logarithm function is defined in terms of a definite integral as above. By the first part of fundamental theorem of calculus,
Besides,
Now, let x be any fixed real number, and let
, which implies that , where is in the sense of definition 3. We have
Here, the continuity of ln(y) is used, which follows from the continuity of 1/t:
Here, the result ln(a^n) = n ln(a) has been used. This result can be established for n a natural number by induction, or using integration by substitution. (The extension to real powers must wait until ln and exp have been established as inverses of each other, so that a^b can be defined for real b as e^(b ln a).)
Characterization 1 ⇔ characterization 4
Let denote the solution to the initial value problem . Applying the simplest form of Euler's method with increment and sample points gives the recursive formula: This recursion is immediately solved to give the approximate value , and since Euler's method is known to converge to the exact solution, we have:
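Written out, the recursion referred to above takes the following form (a sketch in conventional notation, with h_k denoting the Euler approximation after k steps of size x/n):

\[
h_{k+1} \;=\; h_{k} + \frac{x}{n}\, h_{k} \;=\; h_{k}\left(1 + \frac{x}{n}\right), \qquad h_{0} = 1,
\qquad\text{so that}\qquad
h_{n} \;=\; \left(1 + \frac{x}{n}\right)^{n},
\]

which is precisely the expression appearing in the product limit of characterization 1.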
Characterization 2 ⇔ characterization 4
Let n be a non-negative integer. In the sense of definition 4 and by induction, .
Therefore
Using Taylor series,
This shows that definition 4 implies definition 2.
In the sense of definition 2,
Besides, This shows that definition 2 implies definition 4.
Characterization 2 ⇒ characterization 5
In the sense of definition 2, the equation follows from the term-by-term manipulation of power series justified by uniform convergence, and the resulting equality of coefficients is just the Binomial theorem. Furthermore:
Characterization 3 ⇔ characterization 4
Characterisation 3 first defines the natural logarithm: then as the inverse function with . Then by the chain rule: i.e. . Finally, , so . That is, is the unique solution of the initial value problem , of characterization 4.
Conversely, assume has and , and define as its inverse function with and . Then: i.e. . By the Fundamental theorem of calculus,
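In standard notation, the forward direction of this argument can be sketched as follows (the symbols ln and E are chosen here for illustration):

\[
\ln x := \int_{1}^{x} \frac{dt}{t}, \qquad \frac{d}{dx}\ln x = \frac{1}{x},
\]

so if E denotes the inverse function of ln, with E(0) = 1, then differentiating the identity \(\ln(E(x)) = x\) by the chain rule gives

\[
\frac{E'(x)}{E(x)} = 1, \qquad\text{i.e.}\qquad E'(x) = E(x), \quad E(0) = 1,
\]

which is exactly the initial value problem of characterization 4.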
Characterization 5 ⇒ characterization 4
The conditions f′(0) = 1 and f(x + y) = f(x) f(y) imply both conditions in characterization 4. Indeed, one gets the initial condition by dividing both sides of the equation
by , and the condition that follows from the condition that and the definition of the derivative as follows:
Characterization 5 ⇒ characterization 4
Assuming characterization 5, the multiplicative property together with the initial condition imply that:
Characterization 5 ⇔ characterization 6
By inductively applying the multiplication rule, we get:
and thus
for . Then the condition means that , so by definition.
Also, any of the regularity conditions of definition 5 imply that is continuous at all real (see below). The converse is similar.
Characterization 5 ⇒ characterization 6
Let be a Lebesgue-integrable non-zero function satisfying the multiplicative property with . Following Hewitt and Stromberg, exercise 18.46, we will prove that Lebesgue-integrability implies continuity. This is sufficient to imply according to characterization 6, arguing as above.
First, a few elementary properties:
If is nonzero anywhere (say at ), then it is non-zero everywhere. Proof: implies .
. Proof: and is non-zero.
. Proof: .
If is continuous anywhere (say at ), then it is continuous everywhere. Proof: as by continuity at .
The second and third properties mean that it is sufficient to prove for positive x.
Since is a Lebesgue-integrable function, then we may define . It then follows that
Since is nonzero, some can be chosen such that and solve for in the above expression. Therefore:
The final expression must go to zero as since and is continuous. It follows that is continuous.
References
Walter Rudin, Principles of Mathematical Analysis, 3rd edition (McGraw–Hill, 1976), chapter 8.
Edwin Hewitt and Karl Stromberg, Real and Abstract Analysis (Springer, 1965).
Mathematical analysis
Exponentials
Exponential function
Articles containing proofs | Characterizations of the exponential function | Mathematics | 1,902 |
17,101,980 | https://en.wikipedia.org/wiki/The%20Lightning%20Process | The Lightning Process (LP) is a three-day personal training programme developed and trademarked by British osteopath Phil Parker. It makes unsubstantiated claims to be beneficial for various conditions, including ME/CFS, depression and chronic pain.
Developed in the late 1990s, it aims to teach techniques for managing the acute stress response that the body experiences under threat. The course aims to help recognise the stress response, calm it and manage it in the long term. It also applies some ideas drawn from neurolinguistic programming (a pseudoscience), as well as elements of life coaching.
The approach has raised some controversy due to its use of psychological techniques in an attempt to cure a physical illness. The website was amended after the Advertising Standards Authority ruled that it was misleading. In 2021, after a review of the available evidence, the National Institute for Health and Care Excellence advised against the use of the Lightning Process among patients with chronic fatigue syndrome.
Description
The Lightning Process comprises three group sessions held on three consecutive days, lasting about 12 hours altogether, and is conducted by trained practitioners.
According to its developer, Phil Parker, the programme aims to teach participants about the acute stress response the body experiences under threat. It aims to help trainees spot when this response is happening and learn how to calm it. Techniques based on movement, postural awareness and personal coaching are intended to modify the production of stress hormones. Participants practice a learnt series of steps to habituate the calming method.
The Lightning Process is based on the theory that the body can get stuck in a persistent stress response. The initial stressor may be a viral or bacterial infection, psychological stress, or trauma, which causes physical symptoms due to the body's stress response. These symptoms then act as a further stressor, resulting in overload of the central nervous system and chronic activation of the body's stress response. Neuroplasticity then causes this abnormal stress response to persist and be maintained. The Lightning Process suggests that while this disruption initially happens at an unconscious level, it is possible for the patient to exert conscious control and influence over the process, eventually breaking the cycle.
The rationale for the programme draws on ideas of osteopaths Andrew Taylor Still and J M Littlejohn regarding nervous system dysregulation and addressing clients' needs in a holistic manner rather than focusing solely on symptoms. It also incorporates ideas drawn from neuro-linguistic programming and life coaching. A basic premise is that individuals can influence their own physiological responses in controlled and repeatable ways. Such learnt emotional self-regulation, it is suggested, could help overcome illness and improve well-being, if the method is practised consistently.
Parker advocates attending the training course in order to gain a full understanding of the tools in a safe and supportive context. He also lays emphasis on the trainee playing an active role in recovery (the course is framed as a fully participatory 'training', not a passive 'treatment' or set of answers given to a 'patient'). He claims that the programme has helped to resolve various conditions including depression, panic attacks, insomnia, drug addictions, chronic pain and multiple sclerosis. The program has also been used with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS).
The Lightning Process is trademarked.
Criticism and support
There has been criticism of the cost of the three-day course. There has also been criticism of the claimed benefits (see also below). John Greensmith, of the British advocacy group ME Free For All, stated "We think their claims are extravagant... if patients get better, they claim the success of the treatment – but if they don't, they say the patient is responsible." In 2022 the World ME Alliance issued the statement "The World ME Alliance and its members do not endorse the Lightning Process for people with Myalgic Encephalomyelitis (ME), sometimes called Chronic Fatigue Syndrome (CFS)."
In a BBC "File on Four" episode, Rachel Schraer commented on a Lightning Process course she attended. She commented: "Not only did my coach say my thoughts were maintaining my symptoms, she also told me quite explicitly that there was nothing physical wrong with my body, that’s despite having no apparent medical qualification or requesting access to any test results." The practitioners statement is at odds with usual lightning process practice. Neuroscientist Camilla Nord a specialist in neuroscience and mental health comments on the instructions given to participants to use positive reinforcing language, saying “I’m afraid now we’ve strayed very, very far from neuroscience. What I would call neuro-bollocks. It’s a kind of abusive of neuro-scientific terms in order to give quite simple psychological techniques a kind of sheen of science about them.”
Some ME/CFS patient support groups have strongly objected to the perceived implication that the disease has psychological causes. However, the Lightning Process website states that it is a neuro-physiological approach and that it considers ME/CFS to be a physical illness.
Nigel Hawkes writing for The BMJ describes the Lightning Process as being "secretive about its methods, lacks overall medical supervision, and has a cultish quality because many of the therapists are former sufferers who deliver the programme with great conviction" and that "Some children who do not benefit have said that they feel blamed for the failure".
Advertising Standards Authority ruling
In 2011 Hampshire Trading Standards requested that the UK Advertising Standards Authority (ASA) give a ruling on the website www.lightningprocess.com, arguing that the information on the site was misleading in four areas. The ASA upheld two of the four challenges. They concluded that although there seemed to be some evidence of participant improvement during the trials conducted, the trials were not controlled, the evidence was not sufficient to draw robust conclusions, and more investigation was necessary; consequently, the website's claims at the time were deemed misleading and the site was amended.
Recommendations of medical bodies
The National Institute for Health and Care Excellence (NICE) states that "[d]o not offer the Lightning Process, or therapies based on it, to people with ME/CFS" in their guideline for the management of ME/CFS published in 2021.
References
Bibliography
External links
Official web site
Physiology
Mind–body interventions
Devices to alter consciousness
Osteopathic techniques
Myalgic encephalomyelitis/chronic fatigue syndrome | The Lightning Process | Biology | 1,326 |
19,127,190 | https://en.wikipedia.org/wiki/Muffin-tin%20approximation | The muffin-tin approximation is a shape approximation of the potential well in a crystal lattice. It is most commonly employed in quantum mechanical simulations of the electronic band structure in solids. The approximation was proposed by John C. Slater. Augmented plane wave method (APW) is a method which uses muffin-tin approximation. It is a method to approximate the energy states of an electron in a crystal lattice. The basic approximation lies in the potential in which the potential is assumed to be spherically symmetric in the muffin-tin region and constant in the interstitial region. Wave functions (the augmented plane waves) are constructed by matching solutions of the Schrödinger equation within each sphere with plane-wave solutions in the interstitial region, and linear combinations of these wave functions are then determined by the variational method. Many modern electronic structure methods employ the approximation. Among them APW method, the linear muffin-tin orbital method (LMTO) and various Green's function methods. One application is found in the variational theory developed by Jan Korringa (1947) and by Walter Kohn and N. Rostoker (1954) referred to as the KKR method. This method has been adapted to treat random materials as well, where it is called the KKR coherent potential approximation.
In its simplest form, non-overlapping spheres are centered on the atomic positions. Within these regions, the screened potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
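In symbols, the muffin-tin form of the potential is commonly written as follows (a sketch in conventional notation, with R_α the atomic positions and r_MT the muffin-tin radius; the symbols are chosen here for illustration):

\[
V(\mathbf{r}) =
\begin{cases}
V_{\alpha}\!\left(\lvert \mathbf{r} - \mathbf{R}_{\alpha} \rvert\right), & \lvert \mathbf{r} - \mathbf{R}_{\alpha} \rvert < r_{\mathrm{MT}} \quad \text{(inside sphere } \alpha\text{)},\\[4pt]
V_{0} = \text{constant}, & \text{in the interstitial region},
\end{cases}
\]

so that inside each sphere the potential depends only on the distance from the nucleus, while between the spheres it is flat.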
In the interstitial region of constant potential, the single electron wave functions can be expanded in terms of plane waves. In the atom-centered regions, the wave functions can be expanded in terms of spherical harmonics and the eigenfunctions of a radial Schrödinger equation. Such use of functions other than plane waves as basis functions is termed the augmented plane-wave approach (of which there are many variations). It allows for an efficient representation of single-particle wave functions in the vicinity of the atomic cores where they can vary rapidly (and where plane waves would be a poor choice on convergence grounds in the absence of a pseudopotential).
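The two-region construction described here is commonly summarized by the following form of an APW basis function (a sketch; A_{lm} are matching coefficients, u_l are solutions of the radial Schrödinger equation at energy E, and Y_{lm} are spherical harmonics):

\[
\phi_{\mathbf{k}+\mathbf{G}}(\mathbf{r}) =
\begin{cases}
\displaystyle \sum_{l,m} A_{lm}\, u_{l}\!\left(\lvert \mathbf{r} - \mathbf{R}_{\alpha} \rvert, E\right) Y_{lm}\!\left(\widehat{\mathbf{r} - \mathbf{R}_{\alpha}}\right), & \text{inside sphere } \alpha,\\[6pt]
e^{\, i (\mathbf{k}+\mathbf{G}) \cdot \mathbf{r}}, & \text{in the interstitial region},
\end{cases}
\]

with the coefficients A_{lm} chosen so that the two pieces join continuously on the sphere boundary.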
See also
Anderson's rule
Band gap
Bloch waves
Kohn–Sham equations
Kronig–Penney model
Local-density approximation
References
Electronic band structures
Electronic structure methods
Computational physics
Condensed matter physics | Muffin-tin approximation | Physics,Chemistry,Materials_science,Engineering | 516 |
39,127,332 | https://en.wikipedia.org/wiki/High-altitude%20adaptation%20in%20humans | High-altitude adaptation in humans is an instance of evolutionary modification in certain human populations, including those of Tibet in Asia, the Andes of the Americas, and Ethiopia in Africa, who have acquired the ability to survive at altitudes above 2,500 meters (8,200 ft). This adaptation means irreversible, long-term physiological responses to high-altitude environments associated with heritable behavioral and genetic changes. While the rest of the human population would suffer serious health consequences at high altitudes, the indigenous inhabitants of these regions thrive in the highest parts of the world. These humans have undergone extensive physiological and genetic changes, particularly in the regulatory systems of oxygen respiration and blood circulation when compared to the general lowland population.
Around 81.6 million humans (approximately 1.1% of the world's human population) live permanently at altitudes above 2,500 meters (8,200 ft), which would seem to put these populations at risk for chronic mountain sickness (CMS). However, the high-altitude populations in South America, East Africa, and South Asia have lived there for millennia without apparent complications. This special adaptation is now recognized as an example of natural selection in action. The adaptation of the Tibetans is the fastest known example of human evolution, as it is estimated to have occurred between 1,000 BCE to 7,000 BCE.
Origin and basis
Humans are generally adapted to lowland environments where oxygen is abundant. At altitudes above , such humans experience altitude sickness, which is a type of hypoxia, a clinical syndrome of severe lack of oxygen. Some humans develop the illness beginning at above 1,500 meters (5,000 ft). Symptoms include fatigue, dizziness, breathlessness, headaches, insomnia, malaise, nausea, vomiting, body pain, loss of appetite, ear-ringing, blistering and purpling of the hands and feet, and dilated blood vessels.
The sickness is compounded by related symptoms such as cerebral oedema (swelling of the brain) and pulmonary oedema (fluid accumulation in the lungs). Over a span of multiple days, individuals experiencing the effects of high-altitude hypoxia demonstrate raised respiratory activity and elevated metabolic conditions which persist during periods of rest. Subsequently, afflicted people experience a slowly declining heart rate. Hypoxia is a primary contributor to fatalities within mountaineering groups, making it a significant risk factor in high-altitude challenges. In women, pregnancy can be severely affected, for example by the development of preeclampsia, which causes premature labor, low birth weight of babies, and often complicates with profuse bleeding, seizures, or death of the mother.
An estimated 81.6 million humans live at an elevation higher than above sea level, of which 21.7 million reside in Ethiopia, 12.5 million in China, 11.7 million in Colombia, 7.8 million in Peru, and 6.2 million in Bolivia. Certain natives of Tibet, Ethiopia, and the Andes have been living at these high altitudes for generations and are resistant to hypoxia as a consequence of genetic adaptation. It is estimated that at altitude, every lungful of air has approximately 60% of the oxygen molecules found in a lungful of air at sea level. Highlanders are thus constantly exposed to a low oxygen environment, yet they live without any debilitating problems.
One of the best-documented effects of high altitude on non-adapted women is a progressive reduction in birth weight. By contrast, the women of long-resident, high-altitude populations are known to give birth to heavier infants than women of the lowland. This is particularly true among Tibetan babies, whose average birth weight is 294–650 g (roughly 470 g) heavier than that of the surrounding Chinese population, and whose blood-oxygen level is considerably higher.
Scientific investigation of high-altitude adaptation was initiated by A. Roberto Frisancho of the University of Michigan in the late 1960s among the Quechua people of Peru. Paul T. Baker of Penn State University’s Department of Anthropology also conducted a considerable amount of research into human adaptation to high altitudes, and mentored students who continued this research. One of these students, anthropologist Cynthia Beall of Case Western Reserve University, began conducting decades-long research on high altitude adaptation among the Tibetans in the early 1980s.
Physiological basis
Among the different native highlander populations, the underlying physiological responses to adaptation differ. For example, among four quantitative features, such as resting ventilation, hypoxic ventilatory response, oxygen saturation, and hemoglobin concentration, the levels of variations are significantly different between the Tibetans and the Aymaras. Methylation also influences oxygenation.
Tibetans
In the early 20th century, researchers observed the impressive physical abilities of Tibetans during Himalayan climbing expeditions. They considered the possibility that these abilities resulted from an evolutionary genetic adaptation to high-altitude conditions. The Tibetan plateau has an average elevation of above sea level and covers more than 2.5 million km2; it is the highest and largest plateau in the world. In 1990, it was estimated that 4,594,188 Tibetans live on the plateau, with 53% living at an altitude over . Fairly large numbers (approximately 600,000) live at an altitude exceeding in the Chantong-Qingnan area.
Tibetans who have been living in the Chantong-Qingnan area for 3,000 years do not exhibit the same elevated hemoglobin concentrations to cope with oxygen deficiency that are observed in other populations who have moved temporarily or permanently to high altitudes. Instead, the Tibetans inhale more air with each breath and breathe more rapidly than either sea-level populations or Andeans. Tibetans have better oxygenation at birth, enlarged lung volumes throughout life, and a higher capacity for exercise. They show a sustained increase in cerebral blood flow, lower hemoglobin concentration, and less susceptibility to chronic mountain sickness than other populations due to their longer history of high-altitude habitation.
With the proper physical preparation, individuals can develop short-term tolerance to high-altitude conditions. However, these biological changes are temporary and will reverse upon returning to lower elevations. Moreover, while lowland people typically experience increased breathing for only a few days after entering high altitudes, Tibetans maintain this rapid breathing and elevated lung capacity throughout their lifetime. This enables them to inhale large amounts of air per unit of time to compensate for low oxygen levels. Additionally, Tibetans typically have significantly higher levels of nitric oxide in their blood, often double that of lowlanders. This likely contributes to enhanced blood circulation by promoting vasodilation.
Furthermore, their hemoglobin level is not significantly different (average 15.6 g/dl in males and 14.2 g/dl in females) from those of humans living at low altitude. This is evidenced by mountaineers experiencing an increase of over 2 g/dl in hemoglobin levels within two weeks at the Mt. Everest base camp. Consequently, Tibetans demonstrate the capacity to mitigate the effects of hypoxia and mountain sickness throughout their lives. Even when ascending extraordinarily high peaks such as Mount Everest, they exhibit consistent oxygen uptake, heightened ventilation, augmented hypoxic ventilatory responses, expanded lung volumes, increased diffusing capacities, stable body weight, and improved sleep quality compared to lowland populations.
Andeans
In contrast to the Tibetans, Andean highlanders show different patterns of hemoglobin adaptation. Their hemoglobin concentration is higher than those of the lowlander population, which also happens to lowlanders who move to high altitudes. When they spend some weeks in the lowlands, their hemoglobin drops to the same levels as lowland humans. However, in contrast to lowland humans, they have increased oxygen levels in their hemoglobin; that is, more oxygen per blood volume. This confers an ability to carry more oxygen in each red blood cell, meaning a more effective transport of oxygen throughout their bodies. This enables Andeans to overcome hypoxia and normally reproduce without risk of death for the mother or baby. They have developmentally-acquired enlarged residual lung volume and an associated increase in alveolar area, which are supplemented with increased tissue thickness and moderate increase in red blood cells. Though Andean highlander children show delayed body growth, change in lung volume is accelerated.
Among the Quechua people of the Altiplano, there is a significant variation in NOS3 (the gene encoding endothelial nitric oxide synthase, eNOS), which is associated with higher levels of nitric oxide at high altitude. Nuñoa children of Quechua ancestry exhibit higher blood-oxygen content (91.3) and lower heart rate (84.8) than their peers of different ethnicities, who have an average of 89.9 blood-oxygen and 88–91 heart rate. Quechua women have comparatively enlarged lung volume for increased respiration.
Blood profile comparisons show that among the Andeans, Aymaran highlanders are better adapted to highlands than the Quechuas. Among the Bolivian Aymara people, the resting ventilation and hypoxic ventilatory response were quite low (roughly 1.5 times lower) compared to those of the Tibetans. The intrapopulation genetic variation was relatively smaller among the Aymara people. Moreover, when compared to Tibetans, blood hemoglobin levels at high altitudes among Aymaran is notably higher, with an average of 19.2 g/dl for males and 17.8 g/dl for females.
Ethiopians
The people of the Ethiopian highlands also live at extremely high altitudes, around to . Highland Ethiopians exhibit elevated hemoglobin levels, like Andeans and lowlander humans at high altitudes, but do not exhibit the Andeans’ increase in oxygen content of hemoglobin. Among healthy individuals, the average hemoglobin concentrations are 15.9 and 15.0 g/dl for males and females, respectively (which is lower than normal, similar to the Tibetans), and an average oxygen saturation of hemoglobin is 95.3% (which is higher than average, like the Andeans). Additionally, Ethiopian highlanders do not exhibit any significant change in blood circulation of the brain, which has been observed among the Peruvian highlanders and attributed to their frequent altitude-related illnesses. Yet, similar to the Andeans and Tibetans, the Ethiopian highlanders are immune to the extreme dangers posed by high-altitude environment, and their pattern of adaptation is unique from that of other highland people.
Genetic basis
The underlying molecular evolution of high-altitude adaptation has been explored in recent years. Depending on geographical and environmental pressures, high-altitude adaptation involves different genetic patterns, some of which have evolved not long ago. For example, Tibetan adaptations became prevalent in the past 3,000 years, an example of rapid recent human evolution. At the turn of the 21st century, it was reported that the genetic makeup of the respiratory components of the Tibetan and the Ethiopian populations were significantly different.
Tibetans
Substantial evidence from Tibetan highlanders suggests that variation in hemoglobin and blood-oxygen levels are adaptive as Darwinian fitness. It has been documented that Tibetan women with a high likelihood of possessing one to two alleles for high blood-oxygen content (which is rare in other women) had more surviving children; the higher the oxygen capacity, the lower the infant mortality. In 2010, for the first time, the genes responsible for the unique adaptive traits were identified following genome sequencing of 50 Tibetans and 40 Han Chinese from Beijing. Initially, the strongest signal of natural selection was a transcription factor involved in response to hypoxia, called endothelial Per-Arnt-Sim (PAS) domain protein 1 (EPAS1). It was found that one single-nucleotide polymorphism (SNP) at EPAS1 shows a 78% frequency difference between Tibetan and mainland Chinese samples, representing the fastest genetic change observed in any human gene to date. Hence, Tibetan adaptation to high altitude is recognized as one of the fastest processes of phenotypically observable evolution in humans, which is estimated to have occurred a few thousand years ago, when the Tibetans split from the mainland Chinese population. The time of genetic divergence has been variously estimated as 2,750 (original estimate), 4,725, 8,000, or 9,000 years ago.
Mutations in EPAS1 occur at a higher frequency in Tibetans than their Han neighbors and correlates with decreased hemoglobin concentrations among the Tibetans. This is known as the hallmark of their adaptation to hypoxia. Simultaneously, two genes, egl nine homolog 1 (EGLN1), which inhibits hemoglobin production under high oxygen concentration, and peroxisome proliferator-activated receptor alpha (PPARA), were also identified to be positively selected for decreased hemoglobin levels in the Tibetans.
Similarly, the Sherpas, known for their Himalayan hardiness, exhibit similar patterns in the EPAS1 gene, which is further evidence that the gene is under selection pressure for adaptation to the high-altitude life of Tibetans. A study in 2014 indicates that the mutant EPAS1 gene could have been inherited from archaic hominins, the Denisovans. EPAS1 and EGLN1 are believed to be important genes for unique adaptive traits when compared with those of the Chinese and Japanese. Comparative genome analysis in 2014 revealed that the Tibetans inherited an equal mixture of genomes from the Nepalese Sherpas and Hans, and that they acquired adaptive genes from the Sherpa lineage. Further, the population split was estimated to occur around 20,000 to 40,000 years ago, a range supported by archaeological, mitochondria DNA, and Y chromosome evidence for an initial colonization of the Tibetan plateau around 30,000 years ago.
The genes EPAS1, EGLN1, and PPARA function in concert with another gene named hypoxia inducible factors (HIF), which is in turn a principal regulator of red blood cell production (erythropoiesis) in response to oxygen metabolism. The genes are associated not only with decreased hemoglobin levels, but also with regulating metabolism. EPAS1 is significantly associated with increased lactate concentration, a product of anaerobic glycolysis, and PPARA is correlated with decrease in the activity of fatty acid oxidation. EGLN1 codes for an enzyme, prolyl hydroxylase 2 (PHD2), involved in erythropoiesis.
Among the Tibetans, a mutation in EGLN1 (specifically at position 12, where cytosine is replaced with guanine; and at 380, where G is replaced with C) results in mutant PHD2 (aspartic acid at position 4 becomes glutamine, and cysteine at 127 becomes serine) and this mutation inhibits erythropoiesis. This mutation is estimated to have occurred approximately 8,000 years ago. Further, the Tibetans are enriched for genes in the disease class of human reproduction (such as genes from the DAZ, BPY2, CDY, and HLA-DQ and HLA-DR gene clusters) and biological process categories of response to DNA damage stimulus and DNA repair (such as RAD51, RAD52, and MRE11A), which are related to the adaptive traits of high infant birth weight and darker skin tone and are most likely due to recent local adaptation.
Andeans
The patterns of genetic adaptation among the Andeans are largely distinct from those of the Tibetans, with both populations showing evidence of positive natural selection in different genes or gene regions. For genes in the HIF pathway, EGLN1 is the only instance where evidence of positive selection is observed in both Tibetans and Andeans. Even then, the pattern of variation for this gene differs between the two populations. Furthermore, there are no significant associations between EPAS1 or EGLN1 SNP genotypes and hemoglobin concentration among the Andeans, which is characteristic of the Tibetans.
The Andean pattern of adaptation is characterized by selection in a number of genes involved in cardiovascular development and function (such as BRINP3, EDNRA, NOS2A). This suggests that selection in Andeans, instead of targeting the HIF pathway like in the Tibetans, focused on adaptations of the cardiovascular system to combat chronic disease at high altitude. Analysis of ancient Andean genomes, some dating back 7,000 years, discovered selection in DST, a gene involved in cardiovascular function. The whole genome sequences of 20 Andeans (half of them having chronic mountain sickness) revealed that two genes, SENP1 (an erythropoiesis regulator) and ANP32D (an oncogene) play vital roles in their weak adaptation to hypoxia.
Ethiopians
The adaptive mechanism of Ethiopian highlanders differs from those of the Tibetans and Andeans due to the fact that their migration to the highland was relatively early. For example, the Amhara have inhabited altitudes above for at least 5,000 years and altitudes around to for more than 70,000 years. Genomic analysis of two ethnic groups, Amhara and Oromo, has revealed that gene variations associated with hemoglobin difference among Tibetans or other variants at the exact gene location do not influence the adaptation in Ethiopians. Several candidate genes have been identified as possible explanations for the adaptation of Ethiopians, including CBARA1, VAV3, ARNT2 and THRB. Two of these genes (THRB and ARNT2) are known to play a role in the HIF-1 pathway, a pathway implicated in previous work reported in Tibetan and Andean studies. This supports the hypothesis that adaptation to high altitude arose independently among different highlander populations as a result of convergent evolution.
See also
Altitude
Effects of high altitude on humans (including acclimatisation)
High-altitude adaptation
High-altitude football controversy
Tibetan Plateau
References
External links
Adapting to High Altitude
High Altitude and Cold: Adaptation to the extremes
Understanding adaptation to high altitude in the Andean region
BBC: Altitude tolerant
Understanding Evolution: The mysteries of Tibet
Scientific resources at the Center for Research on Tibet
Evolutionary Adaptations in High Altitude Tibet
The Challenge of Living at High Altitudes
Adapting to High Altitude
Mountaineering and health
Respiratory physiology
Human evolution
Anthropology
Evolutionary biology | High-altitude adaptation in humans | Biology | 3,797 |
74,422,505 | https://en.wikipedia.org/wiki/Georg%20Limnaeus | Georg Limnaeus (born Georg Wirn, also known as Georgius Lymneus, Limnæus or Limnäus; 24 October 1554 – 14 September 1611) was a German mathematician, astronomer and librarian, who provided noteworthy encouragement to Johannes Kepler shortly after his first heliocentric astronomical work was published.
Early life
Georg Limnaeus' father Antonius Wirn originated from Switzerland and served in the military forces of Frederick I, Elector of Saxony, who had been tutored by George Spalatin and closely followed and supported the works of Martin Luther. Around the time of the Capitulation of Wittenberg, Frederick conceived of the founding of the University of Jena, which was established in 1558 and became the university where Limnaeus was to spend all of his academic years.
Upon completion of his military service, Antonius moved to Jena, where Georg Wirn was born and lived, and where, in 1571, he enrolled at the university. In accordance with its tradition, upon enrollment he assumed the name Georgius Lymneus. At Jena, Limnaeus studied under Jacob Flach (1537–1611), who was a graduate of the University of Wittenberg and had exposure to Philip Melanchthon (1497–1569) and frequented the lectures of Erasmus Reinhold (1511–1553.)
In 1581, Limnaeus received the "Magisters der Philosophie" degree at the University of Jena.
Career and Kepler connections
Limnaeus issued a prognostication in 1585 in Erfurt and, in 1588, became the professor of mathematics at Jena, a position which he held until his death; concurrently, he also assumed the position of head librarian. He lectured on the Celestial sphere, astronomical and scientific calculations, the theory of planets and the use of astronomical tables, and in the areas of geography, geodesy and cosmography. Although he was not known to have produced any memorable manuscripts, he is known to have engaged in professional correspondence with peers, from time to time, including Tycho Brahe, Galileo Galilei, and Johannes Kepler, and to have maintained a respectable reputation as an academic prognosticator. In 1596, he founded the first observatory in Jena.
In 1597, Limnaeus (along with Galileo, Brahe and Ursus) received a draft copy from Kepler of his first major work, Mysterium Cosmographicum. On April 24, 1598, Limnaeus wrote to Kepler, expressing his firm belief that heliocentric considerations should not be dismissed from the studies of astronomy by declaring, "Most illustrious Sir, never was I estranged from the most ancient philosophy of the Platonists – nor have I thought, as have several petty philosophers in our time, that it ought to be shunted outside the borders of the territory of the republic of letters." These words have been used to illustrate that it was not uncommon for traditionalist academicians, such as Limnaeus, to covertly honor heliocentric views of the ancients, while at the same time skillfully avoiding any explicit reference to the more controversial views of Copernicus. Limnaeus added, however, the statement that for any serious student of astronomy, Kepler's work represents "a new path to knowledge of the stars." In light of the disconcerting imprisonment of Giordano Bruno in 1593 (who was executed in 1600), this open expression of both support to young Kepler, and delight in his mathematical astronomical approaches, provided him with some of the earliest, forceful words of encouragement, which he must have welcomed in contrast to the many strong criticisms his work quickly evoked.
In addition, Limnaeus provided information to Kepler on Tycho Brahe which may have promoted his final decision to go to Prague and study under him, thereby ensuring access to Brahe's data and the furtherance of his own work. However, beyond serving as an encourager to Kepler, and a facilitator to his decision to assist Brahe, there is no record that Limnaeus ever dove into specific details of Kepler's work or adopted it for his lessons.
Kepler assisted Brahe from 1599 until Brahe's sudden death in 1601. By 1609, Kepler would develop and introduce his laws of planetary motion, which would subsequently play a major role in the development of Isaac Newton's law of universal gravitation, as has been noted by Newton.
Limnaeus and his wife fell victim to the 1611 plague in Jena.
References
Astronomers
16th-century German astronomers
17th-century German astronomers
16th-century German mathematicians
17th-century German mathematicians
1554 births
1611 deaths
German mathematicians | Georg Limnaeus | Astronomy | 961 |
1,216,068 | https://en.wikipedia.org/wiki/Data%20room | Data rooms are spaces used for housing data, usually of a secure or privileged nature. They can be physical data rooms, virtual data rooms, or data centers. They are used for a variety of purposes, including data storage, document exchange, file sharing, financial transactions, legal transactions, and more.
In mergers and acquisitions, the traditional data room will genuinely be a physically secured and continually monitored room, normally in the vendor's offices (or those of their lawyers), which the bidders and their advisers visit in order to inspect and report on the various documents and other data made available. Often only one bidder at a time will be allowed to enter, and if new documents or new versions of documents are required, these will have to be brought in by courier as hardcopy. Teams involved in large due diligence processes will typically have to be flown in from many regions or countries and remain available throughout the process. Such teams often comprise a number of experts in different fields, so the overall cost of keeping such groups on call near the data room is often extremely high. Combating the significant cost of physical data rooms is the virtual data room, which provides for the secure, online dissemination of confidential information.
A virtual data room (VDR) is essentially a website with limited controlled access (using a secure log-on supplied by the vendor/authority which can be disabled at any time by the vendor/authority if a bidder withdraws) to which the bidders and their advisers are given access. Much of the information released will be confidential and restrictions should be applied to the viewers' ability to release this to third parties by forwarding, copying or printing. Digital rights management is sometimes applied to control information.
With annual growth of about 16% over seven years, the virtual data room market is forecast to reach $1.6 billion. Detailed auditing must be provided for legal reasons, so that a record is kept of who has seen which version of each document.
Data rooms are commonly used by legal, accounting, investment banking and private equity companies performing mergers and acquisitions, fundraising especially with startups, insolvency, corporate restructuring, and joint ventures including biotechnology and tender processes.
References
Data management
Rooms | Data room | Technology,Engineering | 445 |
30,311,701 | https://en.wikipedia.org/wiki/H.V.%20Dalling | Horace Victor Dalling (1854-1931) was a Canadian watchmaker, jeweller, optician and inventor. He was the watch inspector for the Canadian Pacific Railway, and is also known for manufacturing the first two telephones in Woodstock, New Brunswick, which he placed in his store and in his home.
Biography
Dalling was born in Richmond, New Brunswick on February 5, 1854, to Thomas M. and Matilda Jane (Gray) Dalling. In 1878 he moved to Woodstock and established his business, Dalling's Jewellery Store. In 1879 he married Mary Isabella McKilligan and the two had four children.
His store was damaged by fire on February 26, 1891. His losses ($150) were covered by insurance.
In 1900, he donated a silver cup, known as the "Dalling Cup", for golf. Another cup, a gold-lined silver cup, was given in 1905 for the Woodstock Hockey League.
His son William Victor Dalling, fought in the First World War as a gunner. He was wounded in France on October 13, 1916, but died of pneumonia on October 19, 1918, contracted in Fredericton while waiting for his discharge.
Dalling continued to run his store until 1929, when he retired because of ill health. His daughter, Edith, ran the store until his death on January 6, 1931.
Telephone
He is most remembered because of his homemade telephone, which he constructed in 1885 and ran between his home on Richmond St. and his store on Main St. He supported the wires by running them in the branches of trees alongside the streets. Telephones (either wires or instruments) were not yet common in the region.
Bell Telephone later investigated his setup and threatened him with a lawsuit for infringing on the company's patent. However, a compromise was reached and Bell opened a small telephone exchange of twenty lines in his store, of which he was the manager. Because there was no service at night or on Sundays, Dalling built and installed a miniature switchboard of eight lines at his home to answer important messages after hours.
References
External links
Probable photo c. 1876
H.V. Dalling Jewellery Store in 1876, John Campbell
Death Certificate GNB Archives
1854 births
1931 deaths
Canadian inventors
Canadian jewellers
Opticians
People from Carleton County, New Brunswick | H.V. Dalling | Astronomy | 468 |
42,714,267 | https://en.wikipedia.org/wiki/Sally%20Benson%20%28professor%29 | Sally M. Benson is a professor of energy engineering at Stanford University. In 2014, she was appointed as director of the Precourt Institute for Energy, the university's hub of energy research and education. Benson will continue on as director of Stanford's Global Climate and Energy Project (GCEP), a position she has had since 2007.
On November 24, 2021, Benson was appointed to the White House's Office of Science and Technology Policy as deputy director for Energy and Chief Strategist for the Energy Transition.
Biography
Benson received a B.S. in geology from Barnard College of Columbia University, and an M.Sc. and a Ph.D. in materials and mineral engineering from the University of California-Berkeley.
Benson has held several positions with the Lawrence Berkeley National Laboratory, Berkeley, California. These include: staff scientist in the Earth Sciences Division, 1980–2007 (division director, 1993–1997); associate laboratory director for Energy Sciences, 1997–2001; and deputy director for operations, 2001–2004.
Awards and honours
Benson has won various awards, including the 2012 Greenman Award, Michel T. Halbouty Distinguished Lecture Award from the Geological Society, and the ARCS American Pacesetter Award. She was elected to the American Academy of Arts and Sciences in 2023 and to the Australian Academy of Technological Sciences & Engineering in 2024.
See also
Mark Z. Jacobson
Tom Steyer
Lee Schipper
Al Gore
Hermann Scheer
Benjamin K. Sovacool
John A. "Skip" Laitner
Amory Lovins
Daniel Kammen
Renewable energy commercialization
References
External links
Precourt Institute for Energy
21st-century American women academics
21st-century American academics
21st-century American women engineers
American women engineers
Barnard College alumni
Energy policy
Living people
Energy engineers
Stanford University faculty
American environmentalists
UC Berkeley College of Engineering alumni
Year of birth missing (living people)
Office of Science and Technology Policy officials
Biden administration personnel
Fellows of the American Academy of Arts and Sciences | Sally Benson (professor) | Engineering,Environmental_science | 395 |
56,855,846 | https://en.wikipedia.org/wiki/List%20of%20human%20transcription%20factors | This list of manually curated human transcription factors is taken from Lambert, Jolma, Campitelli et al.
It was assembled by manual curation.
More detailed information is found in the manuscript and the web site accompanying the paper (Human Transcription Factors)
List of human transcription factors (1639)
References
Transcription factors
Biology-related lists | List of human transcription factors | Chemistry,Biology | 68 |
48,970,783 | https://en.wikipedia.org/wiki/Gq-mER | The Gq-coupled membrane estrogen receptor (Gq-mER) is a G protein-coupled receptor present in the hypothalamus that has not yet been cloned. It is a membrane-associated receptor that is Gq-coupled to a phospholipase C–protein kinase C–protein kinase A (PLC–PKC–PKA) pathway. The receptor has been implicated in the control of energy homeostasis. Gq-mER is bound and activated by estradiol, and is a putative membrane estrogen receptor (mER). A nonsteroidal diphenylacrylamide derivative, STX, which is structurally related to 4-hydroxytamoxifen (afimoxifene), is an agonist of the receptor with greater potency than estradiol (20-fold higher affinity) that has been discovered. Fulvestrant (ICI-182,780) has been identified as an antagonist of Gq-mER, but is not selective.
See also
Estrogen receptor
GPER (GPR30)
ER-X
ERx
References
G protein-coupled receptors
Human proteins | Gq-mER | Chemistry | 252 |
639,115 | https://en.wikipedia.org/wiki/Neolithic%20Revolution | The Neolithic Revolution, also known as the First Agricultural Revolution, was the wide-scale transition of many human cultures during the Neolithic period in Afro-Eurasia from a lifestyle of hunting and gathering to one of agriculture and settlement, making an increasingly large population possible. These settled communities permitted humans to observe and experiment with plants, learning how they grew and developed. This new knowledge led to the domestication of plants into crops.
Archaeological data indicate that the domestication of various types of plants and animals happened in separate locations worldwide, starting in the geological epoch of the Holocene 11,700 years ago, after the end of the last Ice Age. It was humankind's first historically verifiable transition to agriculture. The Neolithic Revolution greatly narrowed the diversity of foods available, resulting in a decrease in the quality of human nutrition compared with that obtained previously from foraging, but because food production became more efficient, it released humans to invest their efforts in other activities and was thus "ultimately necessary to the rise of modern civilization by creating the foundation for the later process of industrialization and sustained economic growth".
The Neolithic Revolution involved much more than the adoption of a limited set of food-producing techniques. During the next millennia, it transformed the small and mobile groups of hunter-gatherers that had hitherto dominated human prehistory into sedentary (non-nomadic) societies based in built-up villages and towns. These societies radically modified their natural environment by means of specialized food-crop cultivation, with activities such as irrigation and deforestation which allowed the production of surplus food. Other developments that are found very widely during this era are the domestication of animals, pottery, polished stone tools, and rectangular houses. In many regions, the adoption of agriculture by prehistoric societies caused episodes of rapid population growth, a phenomenon known as the Neolithic demographic transition.
These developments, sometimes called the Neolithic package, provided the basis for centralized administrations and political structures, hierarchical ideologies, depersonalized systems of knowledge (e.g. writing), densely populated settlements, specialization and division of labour, more trade, the development of non-portable art and architecture, and greater property ownership. The earliest known civilization developed in Sumer in southern Mesopotamia (); its emergence also heralded the beginning of the Bronze Age.
The relationship of the aforementioned Neolithic characteristics to the onset of agriculture, their sequence of emergence, and their empirical relation to each other at various Neolithic sites remains the subject of academic debate. It is usually understood to vary from place to place, rather than being the outcome of universal laws of social evolution.
Background
Prehistoric hunter-gatherers had different subsistence requirements and lifestyles from agriculturalists. Hunter-gatherers were often highly mobile and migratory, living in temporary shelters and in small tribal groups, and having limited contact with outsiders. Their diet was well-balanced though heavily dependent on what the environment could provide each season. In contrast, because the surplus and plannable supply of food provided by agriculture made it possible to support larger population groups, agriculturalists lived in more permanent dwellings in more densely populated settlements than what could be supported by a hunter-gatherer lifestyle. The agricultural communities' seasonal need to plan and coordinate resource and manpower encouraged division of labour, which gradually led to specialization of labourers and complex societies. The subsequent development of trading networks to exchange surplus commodities and services brought agriculturalists into contact with outside groups, which promoted cultural exchanges that led to the rise of civilizations and technological evolutions.
However, higher population and food abundance did not necessarily correlate with improved health. Reliance on a very limited variety of staple crops can adversely affect health even while making it possible to feed more people. Maize is deficient in certain essential amino acids (lysine and tryptophan) and is a poor source of iron. The phytic acid it contains may inhibit nutrient absorption. Other factors that likely affected the health of early agriculturalists and their domesticated livestock would have been increased numbers of parasites and disease-bearing pests associated with human waste and contaminated food and water supplies. Fertilizers and irrigation may have increased crop yields but also would have promoted proliferation of insects and bacteria in the local environment while grain storage attracted additional insects and rodents.
Agricultural transition
The term 'Neolithic Revolution' was coined by V. Gordon Childe in his book Man Makes Himself (1936). Childe introduced it as the first in a series of agricultural revolutions in Middle Eastern history, calling it a "revolution" to denote its significance and the degree of change it brought to communities adopting and refining agricultural practices.
The beginning of this process in different regions has been dated from 10,000 to 8,000 BCE in the Fertile Crescent, and perhaps 8000 BCE in the Kuk Early Agricultural Site of Papua New Guinea in Melanesia. Everywhere, this transition is associated with a change from a largely nomadic hunter-gatherer way of life to a more settled, agrarian one, with the domestication of various plant and animal species – depending on the species locally available, and influenced by local culture. Archaeological research in 2003 suggests that in some regions, such as the Southeast Asian peninsula, the transition from hunter-gatherer to agriculturalist was not linear, but region-specific.
Domestication
Crops
Once agriculture started gaining momentum, around 9000 BP, human activity resulted in the selective breeding of cereal grasses (beginning with emmer, einkorn and barley), and not simply of those that favoured greater caloric returns through larger seeds. Plants with traits such as small seeds or bitter taste were seen as undesirable. Plants that rapidly shed their seeds on maturity tended not to be gathered at harvest, therefore not stored and not seeded the following season; successive years of harvesting spontaneously selected for strains that retained their edible seeds longer.
Daniel Zohary identified several plant species as "pioneer crops" or Neolithic founder crops. He highlighted the importance of wheat, barley and rye, and suggested that domestication of flax, peas, chickpeas, bitter vetch and lentils came a little later. Based on analysis of the genes of domesticated plants, he preferred theories of a single, or at most a very small number of domestication events for each taxon that spread in an arc from the Levantine corridor around the Fertile Crescent and later into Europe. Gordon Hillman and Stuart Davies carried out experiments with varieties of wild wheat to show that the process of domestication would have occurred over a relatively short period of between 20 and 200 years.
Some of the pioneering attempts failed at first and crops were abandoned, sometimes to be taken up again and successfully domesticated thousands of years later: rye, tried and abandoned in Neolithic Anatolia, made its way to Europe as weed seeds and was successfully domesticated in Europe, thousands of years after the earliest agriculture. Wild lentils presented a different problem: most of the wild seeds do not germinate in the first year; the first evidence of lentil domestication, breaking dormancy in their first year, appears in the early Neolithic at Jerf el Ahmar (in modern Syria), and lentils quickly spread south to the Netiv HaGdud site in the Jordan Valley. The process of domestication allowed the founder crops to adapt and eventually become larger, more easily harvested, more dependable in storage and more useful to the human population.
Selectively propagated figs, wild barley and wild oats were cultivated at the early Neolithic site of Gilgal I, where in 2006 archaeologists found caches of seeds of each in quantities too large to be accounted for even by intensive gathering, at strata datable to 11,000 years ago. Some of the plants tried and then abandoned during the Neolithic period in the Ancient Near East, at sites like Gilgal, were later successfully domesticated in other parts of the world.
Once early farmers perfected their agricultural techniques like irrigation (traced as far back as the 6th millennium BCE in Khuzistan), their crops yielded surpluses that needed storage. Most hunter-gatherers could not easily store food for long due to their migratory lifestyle, whereas those with a sedentary dwelling could store their surplus grain. Eventually granaries were developed that allowed villages to store their seeds longer. So with more food, the population expanded and communities developed specialized workers and more advanced tools.
The process was not as linear as was once thought, but a more complicated effort, which was undertaken by different human populations in different regions in many different ways.
One of the world's most important crops, barley, was domesticated in the Near East around 11,000 years ago (). Barley is a highly resilient crop, able to grow in varied and marginal environments, such as in regions of high altitude and latitude. Archaeobotanical evidence shows that barley had spread throughout Eurasia by 2,000 BCE. To further elucidate the routes by which barley cultivation was spread through Eurasia, genetic analysis was used to determine genetic diversity and population structure in extant barley taxa. Genetic analysis shows that cultivated barley spread through Eurasia via several different routes, which were most likely separated in both time and space.
Livestock
When hunter-gathering began to be replaced by sedentary food production it became more efficient to keep animals close at hand. Therefore, it became necessary to bring animals permanently to their settlements, although in many cases there was a distinction between relatively sedentary farmers and nomadic herders. The animals' size, temperament, diet, mating patterns, and life span were factors in the desire and success in domesticating animals. Animals that provided milk, such as cows and goats, offered a source of protein that was renewable and therefore quite valuable. The animal's ability as a worker (for example ploughing or towing), as well as a food source, also had to be taken into account. Besides being a direct source of food, certain animals could provide leather, wool, hides, and fertilizer. Some of the earliest domesticated animals included dogs (East Asia, about 15,000 years ago), sheep, goats, cows, and pigs.
West Asia was the source for many animals that could be domesticated, such as sheep, goats and pigs. This area was also the first region to domesticate the dromedary. Henri Fleisch discovered and termed the Shepherd Neolithic flint industry from the Bekaa Valley in Lebanon and suggested that it could have been used by the earliest nomadic shepherds. He dated this industry to the Epipaleolithic or Pre-Pottery Neolithic as it is evidently not Paleolithic, Mesolithic or even Pottery Neolithic.
The presence of these animals gave the region a large advantage in cultural and economic development. As the climate in the Middle East changed and became drier, many of the farmers were forced to leave, taking their domesticated animals with them. It was this massive emigration from the Middle East that later helped distribute these animals to the rest of Afroeurasia. This emigration was mainly on an east–west axis of similar climates, as crops usually have a narrow optimal climatic range outside of which they cannot grow for reasons of light or rain changes. For instance, wheat does not normally grow in tropical climates, just as tropical crops such as bananas do not grow in colder climates. Some authors, like Jared Diamond, have postulated that this east–west axis is the main reason why plant and animal domestication spread so quickly from the Fertile Crescent to the rest of Eurasia and North Africa, while it did not spread through the north–south axis of Africa to reach the Mediterranean climates of South Africa, where temperate crops were successfully imported by ships in the last 500 years. Similarly, the African zebu of central Africa and the domesticated bovines of the Fertile Crescent – separated by the dry Sahara Desert – were not introduced into each other's region.
Centers of agricultural origin
West Asia
Use-wear analysis of five glossed flint blades found at Ohalo II, a 23,000-year-old fisher-hunter-gatherers' camp on the shore of the Sea of Galilee, Northern Israel, provides the earliest evidence for the use of composite cereal harvesting tools. The Ohalo site is at the junction of the Upper Paleolithic and the Early Epipaleolithic, and has been attributed to both periods.
The wear traces indicate that tools were used for harvesting near-ripe semi-green wild cereals, shortly before grains are ripe and disperse naturally. The studied tools were not used intensively, and they reflect two harvesting modes: flint knives held by hand and inserts hafted in a handle. The finds shed new light on cereal harvesting techniques some 8,000 years before the Natufian and 12,000 years before the establishment of sedentary farming communities in the Near East. Furthermore, the new finds accord well with evidence for the earliest ever cereal cultivation at the site and the use of stone-made grinding implements.
Agriculture appeared first in West Asia about 2,000 years later, around 10,000–9,000 years ago. The region was the centre of domestication for three cereals (einkorn wheat, emmer wheat and barley), four legumes (lentil, pea, bitter vetch and chickpea), and flax. Domestication was a slow process that unfolded across multiple regions, and was preceded by centuries if not millennia of pre-domestication cultivation.
Finds of large quantities of seeds and a grinding stone at the Epipalaeolithic site of Ohalo II, dating to around 19,400 BP, have shown some of the earliest evidence for advanced planning of plants for food consumption and suggest that humans at Ohalo II processed the grain before consumption. Tell Aswad is the oldest site of agriculture, with domesticated emmer wheat dated to 10,800 BP. Soon after came hulled, two-row barley – found domesticated earliest at Jericho in the Jordan valley and at Iraq ed-Dubb in Jordan.
Other sites in the Levantine corridor that show early evidence of agriculture include Wadi Faynan 16 and Netiv Hagdud. Jacques Cauvin noted that the settlers of Aswad did not domesticate on site, but "arrived, perhaps from the neighbouring Anti-Lebanon, already equipped with the seed for planting". In the Eastern Fertile Crescent, evidence of cultivation of wild plants has been found in Choga Gholan in Iran dated to 12,000 BP, with domesticated emmer wheat appearing in 9,800 BP, suggesting there may have been multiple regions in the Fertile Crescent where cereal domestication evolved roughly contemporaneously. The Heavy Neolithic Qaraoun culture has been identified at around fifty sites in Lebanon around the source springs of the River Jordan, but never reliably dated.
In his book Guns, Germs, and Steel, Jared Diamond argues that the vast continuous east–west stretch of temperate climatic zones of Eurasia and North Africa gave peoples living there a highly advantageous geographical location that afforded them a head start in the Neolithic Revolution. Both shared the temperate climate ideal for the first agricultural settings, and both were near a number of easily domesticable plant and animal species. In areas where continents aligned north–south such as the Americas and Africa, crops—and later domesticated animals—could not spread across tropical zones.
East Asia
Agriculture in Neolithic China can be separated into two broad regions, Northern China and Southern China.
The agricultural centre in northern China is believed to be the homelands of the early Sino-Tibetan-speakers, associated with the Houli, Peiligang, Cishan, and Xinglongwa cultures, clustered around the Yellow River basin. It was the domestication centre for foxtail millet (Setaria italica) and broomcorn millet (Panicum miliaceum), with early evidence of domestication approximately 8,000 years ago, and widespread cultivation 7,500 years ago. (Soybean was also domesticated in northern China 4,500 years ago. Orange and peach also originated in China, being cultivated .)
The agricultural centres in southern China are clustered around the Yangtze River basin. Rice was domesticated in this region, together with the development of paddy field cultivation, between 13,500 and 8,200 years ago.
There are two possible centres of domestication for rice. The first is in the lower Yangtze River, believed to be the homelands of pre-Austronesians and associated with the Kuahuqiao, Hemudu, Majiabang, and Songze cultures. It is characterized by typical pre-Austronesian features, including stilt houses, jade carving, and boat technologies. Their diet was also supplemented by acorns, water chestnuts, foxnuts, and pig domestication. The second is in the middle Yangtze River, believed to be the homelands of the early Hmong-Mien-speakers and associated with the Pengtoushan and Daxi cultures. Both of these regions were heavily populated and had regular trade contacts with each other, as well as with early Austroasiatic speakers to the west, and early Kra-Dai speakers to the south, facilitating the spread of rice cultivation throughout southern China.
The millet and rice-farming cultures also first came into contact with each other at around 9,000 to 7,000 BP, resulting in a corridor between the millet and rice cultivation centres where both rice and millet were cultivated. At around 5,500 to 4,000 BP, there was increasing migration into Taiwan from the early Austronesian Dapenkeng culture, bringing rice and millet cultivation technology with them. During this period, there is evidence of large settlements and intensive rice cultivation in Taiwan and the Penghu Islands, which may have resulted in overexploitation. Bellwood (2011) proposes that this may have been the impetus of the Austronesian expansion which started with the migration of the Austronesian-speakers from Taiwan to the Philippines at around 5,000 BP.
Austronesians carried rice cultivation technology to Island Southeast Asia along with other domesticated species. The new tropical island environments also had new food plants that they exploited. They carried useful plants and animals during each colonization voyage, resulting in the rapid introduction of domesticated and semi-domesticated species throughout Oceania. They also came into contact with the early agricultural centres of Papuan-speaking populations of New Guinea as well as the Dravidian-speaking regions of South India and Sri Lanka by around 3,500 BP. They acquired further cultivated food plants like bananas and pepper from them, and in turn introduced Austronesian technologies like wetland cultivation and outrigger canoes. During the 1st millennium CE, they also colonized Madagascar and the Comoros, bringing Southeast Asian food plants, including rice, to East Africa.
Africa
On the African continent, three areas have been identified as independently developing agriculture: the Ethiopian highlands, the Sahel and West Africa. By contrast, agriculture in the Nile River Valley is thought to have developed from the original Neolithic Revolution in the Fertile Crescent.
Many grinding stones are found with the early Egyptian Sebilian and Mechian cultures and evidence has been found of a neolithic domesticated crop-based economy dating around 7,000 BP.
Unlike the Middle East, this evidence appears as a "false dawn" to agriculture, as the sites were later abandoned, and permanent farming then was delayed until 6,500 BP with the Tasian culture and Badarian culture and the arrival of crops and animals from the Near East.
Bananas and plantains, which were first domesticated in Southeast Asia, most likely Papua New Guinea, were re-domesticated in Africa possibly as early as 5,000 years ago. Asian yams and taro were also cultivated in Africa.
The most famous crop domesticated in the Ethiopian highlands is coffee. In addition, khat, ensete, noog, teff and finger millet were also domesticated in the Ethiopian highlands. Crops domesticated in the Sahel region include sorghum and pearl millet. The kola nut was first domesticated in West Africa. Other crops domesticated in West Africa include African rice, yams and the oil palm.
Agriculture spread to Central and Southern Africa in the Bantu expansion during the 1st millennium BCE to 1st millennium CE.
Americas
The term "Neolithic" is not customarily used in describing cultures in the Americas. However, a broad similarity exists between Eastern Hemisphere cultures of the Neolithic and cultures in the Americas. Maize (corn), beans and squash were among the earliest crops domesticated in Mesoamerica: squash as early as 6000 BCE, beans no later than 4000 BCE, and maize beginning about 7000 BCE. Potatoes and manioc were domesticated in South America. In what is now the eastern United States, Native Americans domesticated sunflower, sumpweed and goosefoot . In the highlands of central Mexico, sedentary village life based on farming did not develop until the "formative period" in the second millennium BCE.
New Guinea
Evidence of drainage ditches at Kuk Swamp on the borders of the Western and Southern Highlands of Papua New Guinea indicates cultivation of taro and a variety of other crops, dating back to 11,000 BP. Two potentially significant economic species, taro (Colocasia esculenta) and yam (Dioscorea sp.), have been identified dating at least to 10,200 calibrated years before present (cal BP). Further evidence of bananas and sugarcane dates to 6,950 to 6,440 BCE. This was at the altitudinal limits of these crops, and it has been suggested that cultivation in more favourable ranges in the lowlands may have been even earlier. CSIRO has found evidence that taro was introduced into the Solomon Islands for human use from around 28,000 years ago, making taro the earliest cultivated crop in the world.
It seems to have resulted in the spread of the Trans–New Guinea languages from New Guinea east into the Solomon Islands and west into Timor and adjacent areas of Indonesia. This seems to confirm the theories of Carl Sauer who, in "Agricultural Origins and Dispersals", suggested as early as 1952 that this region was a centre of early agriculture.
Spread of agriculture
Europe
Archaeologists trace the emergence of food-producing societies to the Levantine region of southwest Asia at the close of the last glacial period around 12,000 BCE; these societies developed into a number of regionally distinctive cultures by the eighth millennium BCE. Remains of food-producing societies in the Aegean have been carbon-dated at Knossos, Franchthi Cave, and a number of mainland sites in Thessaly. Neolithic groups appear soon afterwards in the Balkans and south-central Europe. The Neolithic cultures of southeastern Europe (the Balkans and the Aegean) show some continuity with groups in southwest Asia and Anatolia (e.g., Çatalhöyük).
Current evidence suggests that Neolithic material culture was introduced to Europe via western Anatolia. All Neolithic sites in Europe contain ceramics, and contain the plants and animals domesticated in Southwest Asia: einkorn, emmer, barley, lentils, pigs, goats, sheep, and cattle. Genetic data suggest that no independent domestication of animals took place in Neolithic Europe, and that all domesticated animals were originally domesticated in Southwest Asia. The only domesticate not from Southwest Asia was broomcorn millet, domesticated in East Asia. The earliest evidence of cheese-making dates to 5500 BCE in Kujawy, Poland.
The diffusion across Europe, from the Aegean to Britain, took about 2,500 years (8500–6000 BP). The Baltic region was penetrated a bit later, around 5500 BP, and there was also a delay in settling the Pannonian plain. In general, colonization shows a "saltatory" pattern, as the Neolithic advanced from one patch of fertile alluvial soil to another, bypassing mountainous areas. Analysis of radiocarbon dates shows clearly that Mesolithic and Neolithic populations lived side by side for as much as a millennium in many parts of Europe, especially in the Iberian peninsula and along the Atlantic coast.
Carbon 14 evidence
The spread of the Neolithic from the Near East to Europe was first studied quantitatively in the 1970s, when a sufficient number of Carbon 14 age determinations for early Neolithic sites had become available. In 1973, Ammerman and Cavalli-Sforza discovered a linear relationship between the age of an Early Neolithic site and its distance from the conventional source in the Near East (Jericho), demonstrating that the Neolithic spread at an average speed of about 1 km/yr. More recent studies (2005) confirm these results and yield a speed of 0.6–1.3 km/yr (at 95% confidence level).
Analysis of mitochondrial DNA
Since the original human expansions out of Africa 200,000 years ago, different prehistoric and historic migration events have taken place in Europe. Considering that the movement of the people implies a consequent movement of their genes, it is possible to estimate the impact of these migrations through the genetic analysis of human populations. Agricultural and husbandry practices originated 10,000 years ago in a region of the Near East known as the Fertile Crescent. According to the archaeological record this phenomenon, known as "Neolithic", rapidly expanded from these territories into Europe.
However, whether this diffusion was accompanied or not by human migrations is greatly debated. Mitochondrial DNA – a type of maternally inherited DNA located in the cell cytoplasm – was recovered from the remains of Pre-Pottery Neolithic B (PPNB) farmers in the Near East and then compared to available data from other Neolithic populations in Europe and also to modern populations from South Eastern Europe and the Near East. The obtained results show that substantial human migrations were involved in the Neolithic spread and suggest that the first Neolithic farmers entered Europe following a maritime route through Cyprus and the Aegean Islands.
South Asia
The earliest Neolithic sites in South Asia are Bhirrana in Haryana dated to , and Mehrgarh, dated to between 6500 and 5500 BP, in the Kachi plain of Balochistan, Pakistan; the site has evidence of farming (wheat and barley) and herding (cattle, sheep and goats).
There is strong evidence for causal connections between the Near-Eastern Neolithic and that further east, up to the Indus Valley. There are several lines of evidence that support the idea of connection between the Neolithic in the Near East and in the Indian subcontinent. The prehistoric site of Mehrgarh in Baluchistan (modern Pakistan) is the earliest Neolithic site in the north-west Indian subcontinent, dated as early as 8500 BCE.
Neolithic domesticated crops in Mehrgarh include more than 90% barley and a small amount of wheat. There is good evidence for the local domestication of barley and the zebu cattle at Mehrgarh, but the wheat varieties are suggested to be of Near-Eastern origin, as the modern distribution of wild varieties of wheat is limited to Northern Levant and Southern Turkey.
A detailed satellite map study of a few archaeological sites in the Baluchistan and Khyber Pakhtunkhwa regions also suggests similarities in early phases of farming with sites in Western Asia. Pottery prepared by sequential slab construction, circular fire pits filled with burnt pebbles, and large granaries are common to both Mehrgarh and many Mesopotamian sites.
The postures of the skeletal remains in graves at Mehrgarh bear strong resemblance to those at Ali Kosh in the Zagros Mountains of southern Iran. Despite their scarcity, the Carbon-14 and archaeological age determinations for early Neolithic sites in Southern Asia exhibit remarkable continuity across the vast region from the Near East to the Indian Subcontinent, consistent with a systematic eastward spread at a speed of about 0.65 km/yr.
Causes
The most prominent of several theories (not mutually exclusive) as to factors that caused populations to develop agriculture include:
The Oasis Theory, originally proposed by Raphael Pumpelly in 1908, popularized by V. Gordon Childe in 1928 and summarised in Childe's book Man Makes Himself. This theory maintains that as the climate got drier due to the Atlantic depressions shifting northward, communities contracted to oases where they were forced into close association with animals, which were then domesticated together with planting of seeds. However, this theory now has little support amongst archaeologists because subsequent climate data suggests that the region was getting wetter rather than drier.
The Hilly Flanks hypothesis, proposed by Robert John Braidwood in 1948, suggests that agriculture began in the hilly flanks of the Taurus and Zagros Mountains, where the climate was not drier as Childe had believed, and fertile land supported a variety of plants and animals amenable to domestication.
The Feasting model by Brian Hayden suggests that agriculture was driven by ostentatious displays of power, such as giving feasts, to exert dominance. This required assembling large quantities of food, which drove agricultural technology.
The Demographic theories proposed by Carl Sauer and adapted by Lewis Binford and Kent Flannery posit an increasingly sedentary population that expanded up to the carrying capacity of the local environment and required more food than could be gathered. Various social and economic factors helped drive the need for food.
The evolutionary/intentionality theory, developed by David Rindos and others, considers agriculture as an evolutionary adaptation of plants and humans. Starting with domestication by protection of wild plants, it resulted in specialization of location and then complete domestication.
Peter Richerson, Robert Boyd, and Robert Bettinger make a case for the development of agriculture coinciding with an increasingly stable climate at the beginning of the Holocene. Ronald Wright's book and Massey Lecture Series A Short History of Progress popularized this hypothesis.
Leonid Grinin argues that whatever plants were cultivated, the independent invention of agriculture always occurred in special natural environments (e.g., South-East Asia). It is supposed that the cultivation of cereals started somewhere in the Near East: in the hills of Israel or Egypt. So Grinin dates the beginning of the agricultural revolution within the interval 12,000 to 9,000 BP, though in some cases the first cultivated plants or domesticated animals' bones are even of a more ancient age of 14–15 thousand years ago.
Andrew Moore suggested that the Neolithic Revolution originated over long periods of development in the Levant, possibly beginning during the Epipaleolithic. In "A Reassessment of the Neolithic Revolution", Frank Hole further expanded the relationship between plant and animal domestication. He suggested the events could have occurred independently during different periods of time, in as yet unexplored locations. He noted that no transition site had been found documenting the shift from what he termed immediate and delayed return social systems. He noted that the full range of domesticated animals (goats, sheep, cattle and pigs) were not found until the sixth millennium BCE at Tell Ramad. Hole concluded that "close attention should be paid in future investigations to the western margins of the Euphrates basin, perhaps as far south as the Arabian Peninsula, especially where wadis carrying Pleistocene rainfall runoff flowed."
Consequences
Social change
Despite significant advances in technology, knowledge, arts and trade, the Neolithic Revolution did not lead immediately to a rapid growth of population. Its benefits appear to have been offset by various adverse effects, mostly diseases and warfare.
The introduction of agriculture has not necessarily led to unequivocal progress. The nutritional standards of the growing Neolithic populations were inferior to those of hunter-gatherers. Several ethnological and archaeological studies conclude that the transition to cereal-based diets caused a reduction in life expectancy and stature, an increase in infant mortality and infectious diseases, the development of chronic, inflammatory or degenerative diseases (such as obesity, type 2 diabetes and cardiovascular diseases) and multiple nutritional deficiencies, including vitamin deficiencies, iron deficiency anemia and mineral disorders affecting bones (such as osteoporosis and rickets) and teeth. Average height for Europeans went down from 178 cm (5'10") for men and 168 cm (5'6") for women to 165 cm (5'5") and 155 cm (5'1") respectively, and it took until the twentieth century for average height for Europeans to return to the pre-Neolithic Revolution levels.
The traditional view is that agricultural food production supported a denser population, which in turn supported larger sedentary communities, the accumulation of goods and tools, and specialization in diverse forms of new labor. Food surpluses made possible the development of a social elite who were not otherwise engaged in agriculture, industry or commerce, but dominated their communities by other means and monopolized decision-making. Nonetheless, larger societies made it more feasible for people to adopt diverse decision making and governance models. Jared Diamond (in The World Until Yesterday) identifies the availability of milk and cereal grains as permitting mothers to raise both an older (e.g. 3 or 4 year old) and a younger child concurrently. The result is that a population can increase more rapidly. Diamond, in agreement with feminist scholars such as V. Spike Peterson, points out that agriculture brought about deep social divisions and encouraged gender inequality. This social reshuffle is traced by historical theorists, like Veronica Strang, through developments in theological depictions. Strang supports her theory through a comparison of aquatic deities before and after the Neolithic Agricultural Revolution, most notably the Venus of Lespugue and the Greco-Roman deities such as Circe or Charybdis: the former venerated and respected, the latter dominated and conquered. The theory, supplemented by the widely accepted assumption from Parsons that "society is always the object of religious veneration", argues that with the centralization of government and the dawn of the Anthropocene, roles within society became more restrictive and were rationalized through the conditioning effect of religion; a process that is crystallized in the progression from polytheism to monotheism.
Subsequent revolutions
Andrew Sherratt has argued that following upon the Neolithic Revolution was a second phase of discovery that he refers to as the secondary products revolution. Animals, it appears, were first domesticated purely as a source of meat. The Secondary Products Revolution occurred when it was recognised that animals also provided a number of other useful products. These included:
hides and skins (from undomesticated animals)
manure for soil conditioning (from all domesticated animals)
wool (from sheep, llamas, alpacas, and Angora goats)
milk (from goats, cattle, yaks, sheep, horses, and camels)
traction (from oxen, onagers, donkeys, horses, camels, and dogs)
guarding and herding assistance (dogs)
Sherratt argued that this phase in agricultural development enabled humans to make use of the energy possibilities of their animals in new ways, and permitted permanent intensive subsistence farming and crop production, and the opening up of heavier soils for farming. It also made possible nomadic pastoralism in semi-arid areas, along the margins of deserts, and eventually led to the domestication of both the dromedary and Bactrian camel. Overgrazing of these areas, particularly by herds of goats, greatly extended the areal extent of deserts.
Diet and health
Compared to foragers, Neolithic farmers' diets were higher in carbohydrates but lower in fibre, micronutrients, and protein. This led to an increase in the frequency of carious teeth and slower growth in childhood, and studies have consistently found that populations around the world became shorter after the transition to agriculture. This trend may have been exacerbated by the greater seasonality of farming diets and with it the increased risk of famine due to crop failure.
Throughout the development of sedentary societies, disease spread more rapidly than it had during the time in which hunter-gatherer societies existed. Inadequate sanitary practices and the domestication of animals may explain the rise in deaths and sickness following the Neolithic Revolution, as diseases jumped from the animal to the human population. Some examples of infectious diseases spread from animals to humans are influenza, smallpox, and measles. Ancient microbial genomics has shown that progenitors to human-adapted strains of Salmonella enterica infected up to 5,500-year-old agro-pastoralists throughout Western Eurasia, providing molecular evidence for the hypothesis that the Neolithization process facilitated the emergence of Salmonella enterica.
In concordance with a process of natural selection, the humans who first domesticated the big mammals quickly built up immunities to the diseases as within each generation the individuals with better immunities had better chances of survival. In their approximately 10,000 years of shared proximity with animals, such as cows, Eurasians and Africans became more resistant to those diseases compared with the indigenous populations encountered outside Eurasia and Africa. For instance, the populations of most Caribbean and several Pacific islands were completely wiped out by diseases. 90% or more of many populations of the Americas were wiped out by European and African diseases before recorded contact with European explorers or colonists. Some cultures like the Inca Empire did have a large domestic mammal, the llama, but llama milk was not drunk, nor did llamas live in a closed space with humans, so the risk of contagion was limited. According to bioarchaeological research, the effects of agriculture on dental health in Southeast Asian rice farming societies from 4000 to 1500 BP were not detrimental to the same extent as in other world regions.
Jonathan C. K. Wells and Jay T. Stock have argued that the dietary changes and increased pathogen exposure associated with agriculture profoundly altered human biology and life history, creating conditions where natural selection favoured the allocation of resources towards reproduction over somatic effort.
Comparative chronology
See also
Upper Paleolithic revolution
Broad spectrum revolution
Secondary products revolution
Urban revolution
Industrial revolution
Green Revolution
Further reading
Taiz, Lincoln. "Agriculture, plant physiology, and human population growth: past, present, and future." Theoretical and Experimental Plant Physiology 25 (2013): 167-181.
References
Bibliography
Bailey, Douglass (2001). Balkan Prehistory: Exclusions, Incorporation and Identity. Routledge Publishers.
Bailey, Douglass (2005). Prehistoric Figurines: Representation and Corporeality in the Neolithic. Routledge Publishers.
Balter, Michael (2005). The Goddess and the Bull: Catalhoyuk, An Archaeological Journey to the Dawn of Civilization. New York: Free Press.
Bocquet-Appel, Jean-Pierre, and Ofer Bar-Yosef, eds. (2008). The Neolithic Demographic Transition and its Consequences. Springer (21 October 2008), hardcover, 544 pages; trade paperback and Kindle editions are also available.
Cohen, Mark Nathan (1977). The Food Crisis in Prehistory: Overpopulation and the Origins of Agriculture. New Haven and London: Yale University Press.
Diamond, Jared (2002). "Evolution, Consequences and Future of Plant and Animal Domestication". Nature, Vol. 418.
Harlan, Jack R. (1992). Crops & Man: Views on Agricultural Origins. ASA, CSA, Madison, WI.
Wright, Gary A. (1971). "Origins of Food Production in Southwestern Asia: A Survey of Ideas". Current Anthropology, Vol. 12, No. 4/5 (Oct.–Dec. 1971), pp. 447–477.
Kuijt, Ian; Finlayson, Bill (2009). "Evidence for food storage and predomestication granaries 11,000 years ago in the Jordan Valley". PNAS, Vol. 106, No. 27, pp. 10966–10970.
Articles containing video clips
Prehistoric agriculture
History of technology
Revolution
Agricultural revolutions
Stages of history
Historical eras | Neolithic Revolution | Technology | 8,112 |
47,482,405 | https://en.wikipedia.org/wiki/Stephen%20L.%20Buchwald | Stephen L. Buchwald (born 1955) is an American chemist and the Camille Dreyfus Professor of Chemistry at MIT. He is known for his involvement in the development of the Buchwald-Hartwig amination and the discovery of the dialkylbiaryl phosphine ligand family for promoting this reaction and related transformations. He was elected as a fellow of the American Academy of Arts and Sciences and as a member of the National Academy of Sciences in 2000 and 2008, respectively.
Early life and education
Stephen Buchwald was born in Bloomington, Indiana. He credits his "young and dynamic" high school chemistry teacher, William Lumbley, for infecting him with his enthusiasm.
In 1977 he received his Sc.B. from Brown University, where he worked with Kathlyn A. Parker and David E. Cane as well as Gilbert Stork from Columbia University. In 1982 he received his Ph.D. from Harvard University working under Jeremy R. Knowles.
Career
Buchwald was a postdoctoral fellow at Caltech with Robert H. Grubbs. In 1984, he joined the MIT faculty as an assistant professor of chemistry. He was promoted to associate professor in 1989 and to full professor in 1993. He was named the Camille Dreyfus Professor in 1997. He has coauthored over 435 accepted academic publications and 47 accepted patents.
He has served as an associate editor for the academic journal Advanced Synthesis & Catalysis.
Notable awards
Awards received by Buchwald include:
2005 – CAS Science Spotlight Award
2005 – Bristol-Myers Squibb Distinguished Achievement Award
2006 – American Chemical Society Award for Creative Work in Synthetic Organic Chemistry
2006 – Siegfried Medal Award in Chemical Methods which Impact Process Chemistry
2010 – Gustavus J. Esselen Award for Chemistry in the Public Interest
2013 – Arthur C. Cope Award
2014 – Ulysses Medal, University College Dublin
2014 – Linus Pauling Award
2014 – BBVA Foundation Frontiers of Knowledge Award in Basic Sciences
2015 – Honorary Doctorate, University of South Florida
2016 – William H. Nichols Medal
2019 – Wolf Prize in Chemistry
2019 – Roger Adams Award, American Chemical Society
2020 – Clarivate Citation Laureate
References
External links
21st-century American chemists
Massachusetts Institute of Technology School of Science faculty
Living people
Harvard University alumni
Brown University alumni
1955 births
American organic chemists
California Institute of Technology fellows
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences | Stephen L. Buchwald | Chemistry | 559 |
16,270,868 | https://en.wikipedia.org/wiki/Eta1%20Doradus | Eta1 Doradus, Latinized from η1 Doradus, is a star in the southern constellation of Dorado. It is visible to the naked eye as a dim, white-hued star with an apparent visual magnitude of 5.72. This object is located approximately 335 light years distant from the Sun, based on parallax, and is drifting further away with a radial velocity of +18 km/s. It is circumpolar south of latitude 24°S.
This object is an A-type main-sequence star with a stellar classification of A0V. It is 94 million years old with a high rotation rate, showing a projected rotational velocity of 149 km/s. The star has 2.46 times the mass of the Sun and is radiating 49 times the Sun's luminosity from its photosphere at an effective temperature of 10,325 K. It is the southern pole star of Venus.
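The radius is not quoted above, but an estimate follows from the quoted luminosity and effective temperature. The sketch below is illustrative only and not part of the source; the ≈2.2 solar-radius figure is a derived estimate via the Stefan–Boltzmann relation, not a catalogued value.

```python
# Illustrative sketch: radius implied by the quoted luminosity (49 L_Sun) and
# effective temperature (10,325 K) via L = 4*pi*R^2*sigma*T^4, in solar units.
T_SUN = 5772.0  # K, nominal solar effective temperature (IAU value)

def radius_in_solar_units(luminosity_solar: float, t_eff_kelvin: float) -> float:
    """R/R_Sun = sqrt(L/L_Sun) * (T_Sun / T_eff)**2."""
    return luminosity_solar ** 0.5 * (T_SUN / t_eff_kelvin) ** 2

print(radius_in_solar_units(49.0, 10_325.0))  # ~2.2 solar radii (illustrative estimate)
```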
References
External links
2004. Starry Night Pro, Version 5.8.4. Imaginova. www.starrynight.com
Dorado
A-type main-sequence stars
042525
Doradus, Eta1
028909
PD-66 00493
2194
Southern pole stars | Eta1 Doradus | Astronomy | 261 |
13,133,803 | https://en.wikipedia.org/wiki/Ellman%27s%20reagent | Ellman's reagent (5,5′-dithiobis-(2-nitrobenzoic acid) or DTNB) is a chromogenic chemical used to quantify the number or concentration of thiol groups in a sample. It was developed by George L. Ellman.
Preparation
In Ellman's original paper, he prepared this reagent by oxidizing 2-nitro-5-chlorobenzaldehyde to the carboxylic acid, introducing the thiol via sodium sulfide, and coupling the monomer by oxidation with iodine. Today, this reagent is readily available commercially.
Ellman's test
Thiols react with this compound, cleaving the disulfide bond to give 2-nitro-5-thiobenzoate (TNB−), which ionizes to the TNB2− dianion in water at neutral and alkaline pH. This TNB2− ion has a yellow color.
This reaction is rapid and stoichiometric, with the addition of one mole of thiol releasing one mole of TNB. The TNB2− is quantified in a spectrophotometer by measuring the absorbance of visible light at 412 nm, using an extinction coefficient of 14,150 M−1 cm−1 for dilute buffer solutions, and a coefficient of 13,700 M−1 cm−1 for high salt concentrations, such as 6 M guanidinium hydrochloride or 8 M urea. Ellman's original 1959 publication estimated the molar extinction at 13,600 M−1 cm−1, and this value can be found in some modern applications of the method despite improved determinations. Commercial DTNB may not be completely pure, so may require recrystallization to obtain completely accurate and reproducible results.
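As a rough illustration of the quantification step, the concentration follows from the Beer–Lambert law, c = A / (ε·l). The sketch below assumes a standard 1 cm path length and the dilute-buffer extinction coefficient quoted above; the absorbance reading in the example is a made-up value, not data from the source.

```python
# Minimal sketch of the assay arithmetic (Beer-Lambert law: c = A / (epsilon * l)),
# assuming a 1 cm cuvette and the dilute-buffer coefficient of 14,150 M^-1 cm^-1.
def thiol_concentration_molar(a412: float, epsilon: float = 14150.0, path_cm: float = 1.0) -> float:
    """Return the TNB(2-) (and hence thiol) concentration in mol/L from absorbance at 412 nm."""
    return a412 / (epsilon * path_cm)

# Hypothetical reading: an absorbance of 0.283 corresponds to roughly 20 micromolar thiol.
print(thiol_concentration_molar(0.283) * 1e6)  # ~20.0 (µM)
```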
Ellman's reagent can be used for measuring low-molecular mass thiols such as glutathione in both pure solutions and biological samples, such as blood. It can also measure the number of thiol groups on proteins.
References
External links
Quantitation of sulfhydryls DTNB, Ellman’s reagent (uses incorrect absorbance coefficient)
Biochemistry detection reactions
Organic disulfides
Benzoic acids
Nitrobenzene derivatives
Biochemistry
Reagents for biochemistry
Reagents | Ellman's reagent | Chemistry,Biology | 490 |
845,642 | https://en.wikipedia.org/wiki/Variable-length%20intake%20manifold | In internal combustion engines, a variable-length intake manifold (VLIM), variable intake manifold (VIM), or variable intake system (VIS) is an automobile internal combustion engine manifold technology. As the name implies, VLIM/VIM/VIS can vary the length of the intake tract in order to optimise power and torque across the range of engine speed operation, as well as to help provide better fuel efficiency. This effect is often achieved by having two separate intake ports, each controlled by a valve, that open two different manifolds – one with a short path that operates at full engine load, and another with a significantly longer path that operates at lower load. The first patent issued for a variable length intake manifold was published in 1958, US Patent US2835235 by Daimler Benz AG.
There are two main effects of variable intake geometry:
Swirl: Variable geometry can create a beneficial air swirl pattern, or turbulence, in the combustion chamber. The swirling helps distribute the fuel and form a homogeneous air-fuel mixture. This aids the initiation of the combustion process, helps minimise engine knocking, and helps facilitate complete combustion. At low revolutions per minute (rpm), the speed of the airflow is increased by directing the air through a longer path with limited capacity (i.e., cross-sectional area) and this assists in improving low engine speed torque. At high rpm, the shorter and larger path opens when the load increases, so that a greater amount of air with least resistance can enter the chamber. This helps maximise 'top-end' power. In double overhead camshaft (DOHC) designs, the air paths may sometimes be connected to separate intake valves so the shorter path can be excluded by de-activating the intake valve itself.
Pressurisation: A tuned intake path can have a light pressurising effect similar to a low-pressure supercharger due to Helmholtz resonance. However, this effect occurs only over a narrow engine speed band. A variable intake can create two or more pressurized "hot spots", increasing engine output. When the intake air speed is higher, the dynamic pressure pushing the air (and/or mixture) inside the engine is increased. The dynamic pressure is proportional to the square of the inlet air speed, so by making the passage narrower or longer the speed/dynamic pressure is increased.
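A rough numerical sketch of these two relations is given below. It is not from the source; the runner dimensions are hypothetical round numbers chosen only to show the order of magnitude, and real manifold tuning also depends on valve timing and pressure-wave dynamics.

```python
from math import pi, sqrt

RHO_AIR = 1.2    # kg/m^3, approximate air density at room temperature
C_SOUND = 343.0  # m/s, approximate speed of sound in air

def dynamic_pressure_pa(velocity_ms: float) -> float:
    """q = 1/2 * rho * v^2 -- the 'square of the inlet air speed' dependence noted above."""
    return 0.5 * RHO_AIR * velocity_ms ** 2

def helmholtz_frequency_hz(runner_area_m2: float, runner_length_m: float, volume_m3: float) -> float:
    """f = c/(2*pi) * sqrt(A / (V * L)) for a simple runner-plus-chamber resonator."""
    return C_SOUND / (2 * pi) * sqrt(runner_area_m2 / (volume_m3 * runner_length_m))

# Hypothetical runner: 40 mm diameter, 0.40 m long, feeding a 0.5 litre chamber.
area = pi * 0.020 ** 2
print(dynamic_pressure_pa(100.0))                  # ~6000 Pa at a 100 m/s inlet speed
print(helmholtz_frequency_hz(area, 0.40, 0.5e-3))  # ~137 Hz resonant frequency
```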
Applications
Many automobile manufacturers use similar technology with different names. Another common term for this technology is variable resonance induction system (VRIS).
Acura — Variable Volume Induction 3.0-litre V6 C30A (1991-2005) and 3.2-litre V6 C32B (1997-2005); 3.2 L V6 J32A3 (2004-2008); 2.0-litre I4 R20A (2013-2015) petrol engines
Audi — 2.8-litre V6 petrol engine (1991–98); 3.0-litre V6 (2002-2005); 3.6 and 4.2-litre V8 engines, 1987–present
Alfa Romeo — Twin Spark 16v (1.8 and 2.0-litre) and JTS engines
BMW — DISA (DIfferenzierte SaugAnlage – "Differential Air Intake"), two stage: M42, M44, M54, N62TU; three stage: N52; DIVA (continuously variable length runners): N62, the world's first continuously variable-length intake manifold.
Citroën — XM 3,0 V6.24 (200 hp) used from 1991 to 1997, ZX Coupe 2.0 16v XU10J4 engine.
Daewoo — Variable Geometry Induction System (VGIS) Lanos
Dodge / Chrysler — 3.5 L V6 EGE, (1993-1997) used in Dodge Intrepid, Chrysler Concorde and LHS; 2.0 A588 - ECH (2001–2005) used in the 2001-2005 model year Dodge Neon R/T; 6.4 L V8 2011-2014 Dodge Charger and Challenger, Chrysler 300, Jeep Grand Cherokee (SRT8 versions)
Ferrari — 360 Modena, 550 Maranello, LaFerrari
Fiat – Controlled High Turbulence (1989–92, Fiat Croma CHT), StarJet engine, dubbed Port Deactivation (PDA), Variable Intake System on the 131HP 1.8 16V and on the 155 HP 2.0 20V Pratola Serra engine.
Ford — Dual-Stage Intake (DSI), on their Duratec 2.5 and 3.0-litre V6s, and it was also found on the Yamaha V6 in the Taurus SHO. The Ford Modular V8 engines and the V6 Cologne use either the Intake Manifold Runner Control (IMRC) for four-valve engines, or the Charge Motion Control Valve (CMCV) for three-valve engines. The SVT edition (in North America) and ST170 edition (in Europe) of the Ford Focus added IMRC to the Ford Zetec engine. A system called Split Port Induction (SPI) was used on the 2.0L CVH I4 of the 1997-2002 Escort and 2000-2004 Focus, and the 3.8L Essex V6 of the 1996-2003 Windstar and 2001-2004 Mustang.
General Motors — 3.9-litre LZ8/LZ9 V6, 3.2-litre LA3 V6, LT5 5.7-litre
GM Korea — DOHC versions of E-TEC II engines
Holden — Alloytec
Honda — Integra, Legend, NSX, Prelude, Civic, Accord Hybrid, Ridgeline, Honda Civic (ninth generation)
Hyundai — XG V6
Isuzu — Rodeo used in the second generation V6, 3.2-litre (6VD1) Rodeos, and third generation Gemini 1.6-litre 16v (4XE1) engines
Jaguar — AJ-V6
Kia — Carnival, Sedona
Land Rover — Variable Geometry Induction: Freelander V6 (2001-2006)
Lancia — VIS
Mazda — Variable Inertia Charging System (VICS) is used on the Mazda FE-DOHC engine and Mazda B engine family of inline-four engines, and Variable Resonance Induction System (VRIS) in the Mazda K engine family of V6 engines. An updated version of this technology is employed on Mazda's new Z and L engines, which is also used by Ford as the Duratec.
Mercedes-Benz — V6 M112, V6 M272, V8 AMG M156
MG — ZT 190, 180, 160 (2001-2005), ZS 180 (2001-2005)
Mitsubishi — Mitsubishi Variable Induction Management (MVIM) 1991-1999 3000GT NA DOHC, 2003-2005 Eclipse
Nissan — inline-four engines, V6 engines, V8 engines
Opel — TWINPORT – modern versions of Ecotec Family 1 and Ecotec Family 0 inline-four engines and inline-three engines; a similar technology is used in 3.2-litre 54° V6 engine.
Peugeot — 2.2-litre inline-four engine, 3.0-litre V6, 2.0 16v XU10J4 engine (non /z version)
Porsche — 928 "flappy", VarioRam, 964, 993, 996, Boxster
Proton — Campro CPS and VIM, Proton Gen-2 CPS and Proton Waja CPS; Proton Campro IAFM - 2008 Proton Saga 1.3
Renault — Clio 2.0 RS
Roewe — Variable Geometry Induction: Roewe 750 2.5 (2006–present).
Rover — Variable Geometry Induction: Rover 825 (1996-1999), Rover 75 V6 (1998-2005)
Subaru – Subaru Legacy Japan only using EJ204 (version D) 2.0 Litre, Naturally aspirated engine
Suzuki – VIS
Toyota — Toyota Variable Induction System (T-VIS), used in the early versions of the 1G-GEU, 3S-GTE, 3S-GE, and 4A-GE families, and Acoustic Control Induction System - (ACIS) used in E, G, GR, GZ, JZ, M, S, MZ, UR, UZ, and VZ engine families.
Volkswagen — 1.6-litre inline-four engine, V6 engines, VR5 engines, VR6 engines, W8 engines, V8 engines
Volvo — V-VIS (Volvo Variable Induction System) Volvo B52 engine as found on the Volvo 850. Longer inlet ducts used between 1,500 and 4,100 rpm at 80% load or higher.
References
Engine technology | Variable-length intake manifold | Technology | 1,780 |
47,512,765 | https://en.wikipedia.org/wiki/Pteridiospora%20spinosispora | Pteridiospora spinosispora is a species of fungus in the class Dothideomycetes.
Taxonomy
The fungus was discovered in 1963, isolated from the mycorrhizae of sweetgum (Liquidambar styraciflua). The type locality was near the Mississippi River in northern Mississippi; it was later reported growing with the roots of green ash (Fraxinus pennsylvanica). The species was first mentioned in a 1966 report, where it was described as an "unidentified sphaeriaceous ascomycete". Filer formally described the fungus in 1969.
Description
The fruitbodies of the fungus are small, dull black, and spherical, measuring 114–251 by 114–251 μm, with thick walls (up to 24 μm); They occur singly or in dense groups. Underlying the fruitbodies is a small, thin-walled mat of mycelium. The club-shaped asci (spore-bearing cells) measure 85 by 25 μm. The ascospores are black and spiny, measuring 21–25 by 12–20 μm (with the spines 2–5 μm); they contain a single septum. The ornamented spores clearly distinguish P. spinosispora from other members of Pteridiospora.
References
External links
Enigmatic Dothideomycetes taxa
Fungi described in 1969
Fungi of the United States
Fungi without expected TNC conservation status
Fungus species | Pteridiospora spinosispora | Biology | 303 |
61,823,811 | https://en.wikipedia.org/wiki/Identifier%20%28computer%20languages%29 | In computer programming languages, an identifier is a lexical token (also called a symbol, but not to be confused with the symbol primitive data type) that names the language's entities. Some of the kinds of entities an identifier might denote include variables, data types, labels, subroutines, and modules.
Lexical form
Which character sequences constitute identifiers depends on the lexical grammar of the language. A common rule is alphanumeric sequences, with underscore also allowed (in some languages, _ is not allowed), and with the condition that they cannot begin with a numerical digit (to simplify lexing by avoiding confusion with integer literals) – so foo, foo1, foo_bar, _foo are allowed, but 1foo is not – this is the definition used in earlier versions of C and C++, Python, and many other languages. Later versions of these languages, along with many other modern languages, support many more Unicode characters in an identifier. However, a common restriction is not to permit whitespace characters and language operators; this simplifies tokenization by making it free-form and context-free. For example, forbidding + in identifiers due to its use as a binary operation means that a+b and a + b can be tokenized the same, while if it were allowed, a+b would be an identifier, not an addition. Whitespace in identifiers is particularly problematic, as if spaces are allowed in identifiers, then a clause such as if rainy day then 1 is legal, with rainy day as an identifier, but tokenizing this requires the phrasal context of being in the condition of an if clause. Some languages do allow spaces in identifiers, however, such as ALGOL 68 and some ALGOL variants – for example, the following is a valid statement: real half pi; which could be entered as .real. half pi; (keywords are represented in boldface, concretely via stropping). In ALGOL this was possible because keywords are syntactically differentiated, so there is no risk of collision or ambiguity, spaces are eliminated during the line reconstruction phase, and the source was processed via scannerless parsing, so lexing could be context-sensitive.
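A minimal sketch of the classic rule described above, assuming the ASCII-only definition used by earlier versions of C, C++ and Python (modern versions accept a much larger Unicode repertoire):

```python
import re

# Letters, digits and underscore, not starting with a digit (ASCII-only sketch).
IDENT = re.compile(r'[A-Za-z_][A-Za-z0-9_]*')

for s in ('foo', 'foo1', 'foo_bar', '_foo', '1foo'):
    print(s, bool(IDENT.fullmatch(s)))  # only '1foo' fails

# Because '+' is not an identifier character, 'a+b' tokenizes just like 'a + b':
print(re.findall(r'[A-Za-z_][A-Za-z0-9_]*|\+', 'a+b'))  # ['a', '+', 'b']
```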
In most languages, some character sequences have the lexical form of an identifier but are known as keywords – for example, if is frequently a keyword for an if clause, but lexically is of the same form as ig or foo namely a sequence of letters. This overlap can be handled in various ways: these may be forbidden from being identifiers – which simplifies tokenization and parsing – in which case they are reserved words; they may both be allowed but distinguished in other ways, such as via stropping; or keyword sequences may be allowed as identifiers and which sense is determined from context, which requires a context-sensitive lexer. Non-keywords may also be reserved words (forbidden as identifiers), particularly for forward compatibility, in case a word may become a keyword in future. In a few languages, e.g., PL/1, the distinction is not clear.
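A minimal sketch of the reserved-word approach mentioned above: identifier-shaped tokens are matched first and then reclassified against a keyword set. The toy language and its keyword list here are hypothetical, not taken from any particular language.

```python
import re

KEYWORDS = {'if', 'then', 'else', 'while'}     # hypothetical toy language
TOKEN = re.compile(r'[A-Za-z_][A-Za-z0-9_]*')

def classify(word: str) -> str:
    # Keywords have the same lexical form as identifiers; membership in the set decides.
    return 'KEYWORD' if word in KEYWORDS else 'IDENTIFIER'

print([(w, classify(w)) for w in TOKEN.findall('if ig then foo')])
# [('if', 'KEYWORD'), ('ig', 'IDENTIFIER'), ('then', 'KEYWORD'), ('foo', 'IDENTIFIER')]
```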
Semantics
The scope, or accessibility within a program of an identifier can be either local or global. A global identifier is declared outside of functions and is available throughout the program. A local identifier is declared within a specific function and only available within that function.
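A small illustration of the local/global distinction, expressed in Python terms (the names are arbitrary examples):

```python
counter = 0            # global identifier: visible throughout the module

def bump() -> int:
    step = 1           # local identifier: exists only inside this function
    return counter + step

print(bump())          # 1
# Referring to 'step' out here would raise a NameError, since it is local to bump().
```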
For implementations of programming languages that are using a compiler, identifiers are often only compile time entities. That is, at runtime the compiled program contains references to memory addresses and offsets rather than the textual identifier tokens (these memory addresses, or offsets, having been assigned by the compiler to each identifier).
In languages that support reflection, such as interactive evaluation of source code (using an interpreter or an incremental compiler), identifiers are also runtime entities, sometimes even as first-class objects that can be freely manipulated and evaluated. In Lisp, these are called symbols.
Compilers and interpreters do not usually assign any semantic meaning to an identifier based on the actual character sequence used. However, there are exceptions. For example:
In Perl a variable is indicated using a prefix called a sigil, which specifies aspects of how the variable is interpreted in expressions.
In Ruby a variable is automatically considered immutable if its identifier starts with a capital letter.
In Go, the capitalization of the first letter of a variable's name determines its visibility (uppercase for public, lowercase for private).
In some languages, such as Go, identifier uniqueness is based on spelling and visibility.
In HTML an identifier is one of the possible attributes of an HTML element. It is unique within the document.
See also
Naming convention (programming)
References
Programming language concepts
Metadata
Syntactic entities | Identifier (computer languages) | Technology | 1,034 |
1,448,997 | https://en.wikipedia.org/wiki/Gloom | Gloom is a low level of light which is so dim that there are physiological and psychological effects. Human vision at this level becomes monochrome and has lessened clarity.
Optical and psychological effects
Light conditions may be considered gloomy when the level of light in an environment is too low for the proper function of cone cells, and colour vision is lost. In a study by Rothwell and Campbell, light levels described as "gloomy" fell between 28 and 3.6 cd/m2.
Low light and lack of color of this sort may be associated with depression and lethargy. This association was made as far back as the 2nd century by the ancient Greek physician, Aretaeus of Cappadocia, who said, "Lethargics are to be laid in the light and exposed to the rays of the sun, for the disease is gloom." Also, some studies have found weaker electrical activity in the retinas of depressed people, which gave the individuals studied poor visual contrast, meaning that they saw the world in grayer hues. The naturally weak daylight during winter at extreme latitudes can cause seasonal affective disorder (SAD), although a percentage of people experience SAD during summer. A solarium or other source of bright light may be used as light therapy to treat winter SAD.
Architecture and ergonomics
Where artificial lighting is used, this has to be sufficient to not only illuminate the task area, but also provide sufficient background lighting to avoid a sensation of gloominess which has a negative effect on efficiency. If the task is challenging, such as playing cricket, reaction times are found to increase significantly when the illumination declines to the gloom level.
In architecture, the level of lighting affects whether a building is considered to be unappealing. If there is little or no sunlight or view of the outdoor surroundings from within, then this will tend to make the building seem "gloomy". As seen from the exterior, an interior which is brighter than the surrounding light level may cause the overall building to seem gloomy because the normal cues and contrasts have been upset.
Artistic effect
In the arts, a gloomy landscape or setting may be used to illustrate themes such as melancholy or poverty. Horace Walpole coined the term gloomth to describe the ambiance of great ancient buildings which he recreated in the Gothic revival of his house, Strawberry Hill, and novel, The Castle of Otranto. Characters which exemplify a gloomy outlook include Eeyore, Marvin and Old Man Gloom. The catchphrase "doom and gloom", which is commonly used to express extreme pessimism, was popularised by the movie Finian's Rainbow in which the leprechaun Og (Tommy Steele) uses it repeatedly.
Weather
Gloomy conditions may arise when low cloud cover forms a continuous overcast. This occurs annually in Southern California, where it is known as June Gloom. Anticyclones may generate gloom-like conditions if they remain stationary, causing a haze and layer of stratocumulus clouds. These tend to occur in temperate winter at the middle latitudes or over an extended period in subtropical regions.
References
Emotions
Visibility
Vision | Gloom | Physics,Mathematics | 640 |
34,222,176 | https://en.wikipedia.org/wiki/NGC%203938 | NGC 3938 is an unbarred spiral galaxy in the Ursa Major constellation. It was discovered on 6 February 1788 by William Herschel. It is one of the brightest spiral galaxies in the Ursa Major South galaxy group and is roughly 67,000 light years in diameter. It is approximately 43 million light years away from Earth. NGC 3938 is classified as type Sc under the Hubble sequence, a loosely wound spiral galaxy with a smaller and dimmer bulge. The spiral arms of the galaxy contain many areas of ionized atomic hydrogen gas, more so towards the center.
Supernovae
Five supernovae have been identified within NGC 3938.
SN 1961U (type II, mag. 13.7) was discovered by Paul Wild on 28 December 1961. [Note: some sources incorrectly list the discovery date as 2 January 1962.]
SN 1964L (type Ic, mag. 13.3) was discovered by Paul Wild on 11 December 1964.
SN 2005ay (type II, mag. 15.6) was discovered by Doug Rich on 27 March 2005.
SN 2017ein (type Ic, mag. 17.6) was discovered by Ron Arbour on 25 May 2017 and peaked at magnitude 14.9. Images taken before the explosion point to a progenitor mass between ~47-48 solar masses, if it was in a single star system, and ~60-80 solar masses, if it was in a binary star system.
SN 2022xlp (type Ia, mag. 17) was discovered by Kōichi Itagaki on 13 October 2022.
Gallery
References
External links
Ursa Major
Unbarred spiral galaxies
3938
17880206
Ursa Major Cluster
037229 | NGC 3938 | Astronomy | 349 |
432,749 | https://en.wikipedia.org/wiki/ENEA%20AB | Enea AB is an information technology company with its headquarters in Kista, Sweden that provides real-time operating systems and consulting services. Enea, which is an abbreviation of Engmans Elektronik Aktiebolag, also produces the OSE operating system.
History
Enea was founded in 1968 by Rune Engman as Engmans Elektronik AB. Its first product was an operating system for a defence computer used by the Swedish Air Force. During the 1970s the firm developed compiler technology for the Simula programming language.
During the early days of the European Internet-like connections, Enea employee Björn Eriksen connected Sweden to EUnet using UUCP, and registered enea as the first Swedish domain in April 1983. The domain was later converted to the internet domain enea.se when the network was switched over to TCP and the Swedish top domain .se was created in 1986.
Products
OSE
The ENEA OSE real-time operating system was first released in 1985.
The Enea multi-core family of real-time operating systems was first released in 2009.
The Enea Operating System Embedded (OSE) is a family of real-time, microkernel, embedded operating systems created by Bengt Eliasson for ENEA AB, which at the time was collaborating with Ericsson to develop a multi-core system using Assembly, C, and C++. Enea OSE Multicore Edition is based on the same microkernel architecture. Its kernel design combines the advantages of both traditional asymmetric multiprocessing (AMP) and symmetric multiprocessing (SMP), offering AMP and SMP processing in a hybrid architecture. OSE supports many processors, mainly 32-bit, including ColdFire, ARM, PowerPC, and MIPS based system on a chip (SoC) devices.
The Enea OSE family features three OSs: OSE (also named OSE Delta) for processors by ARM, PowerPC, and MIPS; OSEck for various DSPs; and OSE Epsilon for minimal devices, written in pure assembly (ARM, ColdFire, C166, M16C, 8051). OSE is closed-source, proprietarily licensed software; its most recent release was on 20 March 2018. OSE uses events (or signals) in the form of messages passed to and from processes in the system. Messages are stored in a queue attached to each process. A link handler mechanism allows signals to be passed between processes on separate machines, over a variety of transports. The OSE signalling mechanism formed the basis of an open-source inter-process kernel design project named LINX.
Linux
Enea Linux provides an open, cross-development tool chain and runtime environment based on the Yocto Project embedded Linux configuration system.
Hypervisor
Enea Hypervisor is also based on OSE microkernel technology. It runs Enea OSE applications and takes as guests the Linux operating system and, optionally, semiconductor-specific executive environments for bare-metal-speed packet processing.
Optima
Enea Optima is a development tool suite for developing, debugging, and profiling embedded systems software.
The Element
The Element is middleware software for high-availability systems, based on technology developed by Equipe Communications Corp.
Collaborative project and community memberships
Enea is a member of various collaborative projects and open source communities:
Linux Foundation
Automotive Grade Linux
Linux OPNFV
Yocto Project
Linaro
Open Data Plane (ODP)
References
Information technology companies of Sweden
Companies based in Stockholm
Real-time operating systems
Embedded operating systems
ARM operating systems
Microkernel-based operating systems
Companies listed on Nasdaq Stockholm | ENEA AB | Technology | 747 |
2,107,748 | https://en.wikipedia.org/wiki/P%C3%A9ter%20Frankl | Péter Frankl (born 26 March 1953 in Kaposvár, Somogy County, Hungary) is a mathematician, street performer, columnist and educator, active in Japan. Frankl studied mathematics at Eötvös Loránd University in Budapest and submitted his PhD thesis while still an undergraduate. He holds a PhD degree from the University Paris Diderot as well. He has lived in Japan since 1988, where he is a well-known personality and often appears in the media. He keeps travelling around Japan performing (juggling and giving public lectures on various topics). Frankl won a gold medal at the International Mathematical Olympiad in 1971. He has seven joint papers with Paul Erdős, and eleven joint papers with Ronald Graham. His research is in combinatorics, especially in extremal combinatorics. He is the author of the union-closed sets conjecture.
Personality
Both of his parents were survivors of concentration camps and taught him "The only things you own are in your heart and brain". So he became a mathematician. Frankl often lectures about racial discrimination.
Adolescence and abilities
He could multiply two-digit numbers when he was four years old. Frankl speaks 12 languages (Hungarian, English, Russian, Swedish, French, Spanish, Polish, German, Japanese, Chinese, Thai, Korean) and has lectured on mathematics in many countries in these languages. He has travelled to more than 100 countries.
Activities
Frankl learnt juggling from Ronald Graham. He and Vojtěch Rödl solved a $1000 problem of Paul Erdős. Zsolt Baranyai helped Frankl to get a scholarship in France, where he became a CNRS research fellow.
From 1984 to 1990, Frankl and Akiyama worked to organize a Japanese mathematical Olympiad team; as a consequence, the Japanese team is now a regular participant in the International Mathematical Olympiad.
Since 1998, he has been an external member of the Hungarian Academy of Sciences.
He authored more than thirty books in Japanese, and with László Babai, he wrote the manuscript of a book on "Linear Algebra Methods in Combinatorics". With Norihide Tokushige he is the coauthor of the book Extremal Problems For Finite Sets (American Mathematical Society, 2018).
Frankl conjecture
For any finite union-closed family of finite sets, other than the family consisting only of the empty set, there exists an element that belongs to at least half of the sets in the family.
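Both properties can be checked by brute force for a small family of sets, as in the short Python sketch below; the family used is chosen only for illustration.

```python
def is_union_closed(family):
    """True if the union of any two member sets is also in the family."""
    fam = [frozenset(s) for s in family]
    return all(frozenset(a | b) in fam for a in fam for b in fam)

def frankl_condition_holds(family):
    """Check that some element lies in at least half of the sets."""
    fam = [frozenset(s) for s in family]
    universe = set().union(*fam) if fam else set()
    return any(sum(x in s for s in fam) * 2 >= len(fam) for x in universe)

family = [{1}, {2}, {1, 2}, {1, 2, 3}]
print(is_union_closed(family))         # True: every pairwise union is in the family
print(frankl_condition_holds(family))  # True: element 1 is in 3 of the 4 sets
```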
See also
Frankl–Rödl graph
References
External links
Timeline in Japanese
1953 births
Living people
20th-century Hungarian mathematicians
20th-century Hungarian people
21st-century Hungarian mathematicians
21st-century Hungarian people
Graph theorists
Jugglers
Members of the Hungarian Academy of Sciences
Hungarian Jews
Expatriate television personalities in Japan
Hungarian expatriates in Japan
People from Kaposvár
International Mathematical Olympiad participants | Péter Frankl | Mathematics | 573 |
45,676,473 | https://en.wikipedia.org/wiki/FOCAL%20%28spacecraft%29 | FOCAL (an acronym for Fast Outgoing Cyclopean Astronomical Lens) is a proposed space telescope that would use the Sun as a gravity lens. The gravitational lens effect was first derived by Albert Einstein, and the concept of a mission to the solar gravitational lens was first suggested by professor Von Eshleman, and analyzed further by Italian astronomer Claudio Maccone and others.
In order to use the Sun as a gravity lens, it would be necessary to send the telescope to a minimum distance of 550 astronomical units away from the Sun, enabling very high signal amplifications: for example, about 1.3·10^15 at a frequency of 203 GHz. Maccone suggests that this should be enough to obtain detailed images of the surfaces of extrasolar planets.
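The 550 AU figure follows from the general-relativistic deflection angle for light grazing the solar limb; the short Python calculation below reproduces it. The physical constants are standard reference values and are not taken from the article.

```python
# Minimum focal distance of the solar gravitational lens:
# light grazing the Sun at radius R is bent by theta = 4GM/(c^2 R),
# so parallel rays converge at F = R / theta = R^2 c^2 / (4 G M).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
c = 2.998e8          # speed of light, m/s
AU = 1.496e11        # astronomical unit, m

F = R_sun**2 * c**2 / (4 * G * M_sun)
print(f"minimum focal distance ≈ {F / AU:.0f} AU")   # ≈ 548 AU, consistent with the quoted 550 AU
```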
Other uses of the mission
Even without using the Sun as the lens, FOCAL could perform various, otherwise impossible measurements: a separate telescope could be used to measure stellar distances by parallax, which would, using the baseline of 550 AU, measure the precise position of every star in the Milky Way, enabling various further scientific discoveries. It could also study the interstellar medium, the heliosphere, observe gravitational waves, check for the possible variation of the gravitational constant, observe the cosmic infrared background, characterise interplanetary dust within the Solar System, more precisely measure the mass of the Solar System and similar.
Limitations
FOCAL does not require any technology that does not yet exist; however, it has various limitations. A space mission of this duration and distance has never been attempted; for comparison, the Voyager 1 and Voyager 2 probes were at distances of 147 AU and 122 AU in 2019. A gravitational lens distorts the images of objects behind it, so images from the telescope would be difficult to interpret. FOCAL would only be able to observe objects that are directly behind the Sun from its point of view, which means that a separate telescope would have to be sent for every observed target.
A critique of the technology of the gravity lens telescope was given by Landis. Problems Landis points out include interference from the solar corona, which will give the telescope a poor signal-to-noise ratio; the high magnification of the target, which will make the design of the mission focal plane difficult; and the inherent spherical aberration of the lens, which will limit the possible resolution.
References
External links
Space telescopes
Proposed spacecraft
Gravitational lensing | FOCAL (spacecraft) | Astronomy | 489 |
44,896,201 | https://en.wikipedia.org/wiki/Ana%20Caraiani | Ana Caraiani (born 1985) is a Romanian-American mathematician, who is a Royal Society University Research Fellow and Hausdorff Chair at the University of Bonn. Her research interests include algebraic number theory and the Langlands program.
Education
She was born in Bucharest and studied at Mihai Viteazul High School. In 2001, Caraiani became the first Romanian female competitor in 15 years at the International Mathematical Olympiad, where she won a silver medal. In the following two years, she won two gold medals.
After graduating high school in 2003, she pursued her studies in the United States. As an undergraduate student at Princeton University, Caraiani was a two-time Putnam Fellow (the only female competitor at the William Lowell Putnam Mathematical Competition to win more than once) and Elizabeth Lowell Putnam Award winner. Caraiani graduated summa cum laude from Princeton in 2007, with an undergraduate thesis on Galois representations supervised by Andrew Wiles.
Caraiani did her graduate studies at Harvard University under the supervision of Wiles' student Richard Taylor, earning her Ph.D. in 2012 with a dissertation concerning local-global compatibility in the Langlands correspondence.
Career
After spending a year as an L.E. Dickson Instructor at the University of Chicago, she returned to Princeton and the Institute for Advanced Study as a Veblen Instructor and NSF Postdoctoral Fellow. In 2016, she moved to the Hausdorff Center for Mathematics as a Bonn Junior Fellow. She moved to Imperial College London in 2017 as a Royal Society University Research Fellow and Senior Lecturer. In 2019, she became a Royal Society University Research Fellow and Reader at Imperial College London. As of 2021, Caraiani is a full professor at Imperial College London. She rejoined the University of Bonn in 2022 as Hausdorff Chair.
Research
Caraiani's research work includes the papers "Patching and the p-adic local Langlands correspondence" (2016), "On the generic part of the cohomology of compact unitary Shimura varieties" (2017) with Peter Scholze, and "Potential automorphy over CM fields" (2023). All three papers are directly related to the Langlands program, though her research extends to other topics as well.
Caraiani discusses the Langlands program from a more general perspective in the survey article "New frontiers in Langlands reciprocity".
Recognition
In 2007, the Association for Women in Mathematics awarded Caraiani their Alice T. Schafer Prize. In 2018, she was one of the winners of the Whitehead Prize of the London Mathematical Society.
She was elected as a Fellow of the American Mathematical Society in the 2020 Class, for "contributions to arithmetic geometry and number theory, in particular the p-adic Langlands program". She is one of the 2020 winners of the EMS Prize. In September 2022 she was awarded the 2023 New Horizons in Mathematics Prize. She was elected to the Academia Europaea in 2024.
References
External links
Caraiani's scores at the IMO
Professional home page
Personal home page
Interview with Caraiani (in Romanian)
1985 births
Living people
Scientists from Bucharest
Number theorists
21st-century Romanian mathematicians
21st-century American mathematicians
21st-century American women mathematicians
Romanian women mathematicians
Romanian emigrants to the United States
International Mathematical Olympiad participants
Putnam Fellows
Princeton University alumni
Harvard Graduate School of Arts and Sciences alumni
Academics of Imperial College London
Whitehead Prize winners
Fellows of the American Mathematical Society
Institute for Advanced Study people
Mihai Viteazul National College (Bucharest) alumni
Members of Academia Europaea | Ana Caraiani | Mathematics | 709 |
17,469,697 | https://en.wikipedia.org/wiki/List%20of%20welding%20codes | This page lists published welding codes, procedures, and specifications.
American Society of Mechanical Engineers (ASME) Codes
The American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (BPVC) covers all aspects of design and manufacture of boilers and pressure vessels. All sections contain welding specifications, however most relevant information is contained in the following:
American Welding Society (AWS) Standards
The American Welding Society (AWS) publishes over 240 AWS-developed codes, recommended practices and guides which are written in accordance with American National Standards Institute (ANSI) practices. The following is a partial list of the more common publications:
American Petroleum Institute (API) Standards
One of the American Petroleum Institute's (API) oldest and most successful programs is the development of API standards, which started with its first standard in 1924. API maintains over 500 standards covering the oil and gas field. The following is a partial list specific to welding:
Australian / New Zealand (AS/NZS) Standards
Standards Australia is the body responsible for the development, maintenance and publication of Australian Standards. The following is a partial list specific to welding:
Canadian Standards Association (CSA) Standards
The Canadian Standards Association (CSA) is responsible for the development, maintenance and publication of CSA standards. The following is a partial list specific to welding:
British Standards (BS)
British Standards are developed, maintained and published by BSI Standards which is UK's National Standards Body. The following is a partial list of standards specific to welding:
International Organization for Standardization (ISO) Standards
International Organization for Standardization (ISO) has developed over 18500 standards and over 1100 new standards are published every year. The following is a partial list of the standards specific to welding:
European Union (CEN) standards
The European Committee for Standardization (CEN) had issued numerous standards covering welding processes, which unified and replaced former national standards. Of the former national standards, those issued by BSI and DIN were widely used outside their countries of origin. After the Vienna Agreement with ISO, CEN has replaced most of them with equivalent ISO standards (EN ISO series).
Additional requirements for welding exist in CEN codes and standards for specific products, like EN 12952, EN 12953, EN 13445, EN 13480, etc.
German Standards (DIN and others)
NA 092 is the Standards Committee for welding and allied processes (NAS) at DIN Deutsches Institut für Normung e. V. The following is a partial list of DIN welding standards:
Japanese Standards (JIS and others)
Japanese Industrial Standards (JIS)
Japanese Industrial Standards are the standards used for industrial activities in Japan, coordinated by the Japanese Industrial Standards Committee (JISC) and published by the Japanese Standards Association (JSA).
JIS Z 3001-1 Welding and allied processes-Vocabulary-Part 1: General
JIS Z 3001-2 Welding and allied processes-Vocabulary-Part 2: Welding processes
JIS Z 3001-3 Welding and allied processes-Vocabulary-Part 3: Soldering and brazing
JIS Z 3001-4 Welding and allied processes-Vocabulary-Part 4: Imperfections in welding
JIS Z 3001-5 Welding and allied processes-Vocabulary-Part 5: Laser welding
JIS Z 3001-6 Welding and allied processes-Vocabulary-Part 6: Resistance welding
JIS Z 3001-7 Welding and allied processes-Vocabulary-Part 7: Arc welding
JIS Z 3011 Welding positions defined by means of angles of slope and rotation
JIS Z 3021 Welding and allied processes -- Symbolic representation
Japan Welding Society Standard (WES)
WES standards are organized into the following classifications:
Fundamentals
Tests, inspections and their equipment
Base material
Welding material
Welding and cutting equipment and accessories
Welding design and construction
Welding-related certifications and certifications
Safety, health and environment
See also
Welding
List of welding processes
Welder certification
Welding Procedure Specification
Welder
Notes
References
Structural steel
Further reading and external links
Overview poster of CEN & ISO welding standards
Welding codes
Codes | List of welding codes | Engineering | 808 |
58,057,192 | https://en.wikipedia.org/wiki/HMA%20%28VPN%29 | HMA (formerly HideMyAss!) is a VPN service founded in 2005 in the United Kingdom. It has been a subsidiary of the Czech cybersecurity company Avast since 2016.
History
HMA was created in 2005 in Norfolk, England by Jack Cator. At the time, Cator was sixteen years old. He created HMA in order to circumvent restrictions his school had on accessing games or music from its network. According to Cator, the first HMA service was created in just a few hours using open-source code. The first product was a free proxy website where users typed in a URL and it delivered the website in the user's web browser.
Cator promoted the tool in online forums and it was featured on the front page of digg. After attracting more than one thousand users, Cator incorporated ads. HMA did not take any venture capital funding. It generated about $1,000 - $2,000 per month while the founder went to college to pursue a degree in computer science. In 2009, Cator dropped out of college to focus on HMA and added a paid VPN service. Most early HMA employees were freelancers found on oDesk. In 2012, one of the freelancers set up a competing business. HMA responded by hiring its contractors as full-time employees and establishing physical offices in London.
In 2012, the United Kingdom's government sent HMA a court order demanding it provide information about Cody Andrew Kretsinger's use of HMA's service to hack Sony as a member of the LulzSec hacking group. HMA provided the information to authorities. HMA said it was a violation of the company's terms of use to use its software for illegal activities.
In 2013, HMA added software to anonymize internet traffic from mobile devices. In 2014, the company introduced HideMyPhone! service, which allowed mobile phone users to make their calls appear to come from a different location.
By 2014, the service had 10 million users and 215,000 paying subscribers of its VPN service. It made £11 million in revenue that year. HMA had 100 staff and established international offices in Belgrade and Kyiv.
By 2015, HMA became one of the largest VPN providers. In May 2015, it was acquired by AVG Technologies for $40 million with a $20 million earn-out upon achievement of milestones, and became part of Avast after its 2016 acquisition of AVG Technologies.
In 2017, a security vulnerability was discovered that allowed hackers with access to a user's laptop to obtain elevated privileges on the device. HMA corrected the vulnerability days later.
In 2019, it was reported that HMA received a directive from Russian authorities to join a state sponsored registry of banned websites, which would prevent Russian HMA users from circumventing Russian state censorship. HMA was reportedly given one month to comply, or face blocking by Russian authorities.
In 2020, HMA introduced a no-log policy for their VPN service. Under the policy HMA will not log a user’s original IP address, DNS queries, online activity, amount of data transferred or VPN connection timestamps. Following HMA’s introduction of a no-log policy, HMA’s VPN was awarded a low risk user privacy impact rating for its no-logging policy, after it was independently audited by third-party cybersecurity firm VerSprite.
Software
HMA provides digital software and services intended to help users remain anonymous online and encrypt their online traffic. Its software is used to access websites that may be blocked in the user's country, to anonymize information that could otherwise be used by hackers, and to do something unscrupulous without being identified. HMA's privacy policy and terms of use prohibit using it for illegal activity.
HMA hides the user's IP address and other identifying information by routing the user's internet traffic through a remote server. However, experts note that the company does log some connection data including the originating IP address, the duration of each VPN session, and the amount of bandwidth used.
As of May 2018, the company had 830 servers in 280 locations across the globe and provided over 3000 IP addresses. The software also includes a kill switch across all platforms.
Privacy
According to Invisibler, HMA VPN appears to have cooperated with US authorities in handing over logs in a hacking case. This led to the arrest of a hacker in what is known as the "LulzSec fiasco".
Reception
In 2015, a review in Tom's Hardware said HMA was easy to use, had good customer service, and a large number of server locations to choose from, but criticized it for slowing internet speeds. In contrast, Digital Trends said HMA had strong speeds and good server selection, but wasn't fool-proof at ensuring anonymity, because it stored user activity logs (in 2020, HMA announced that it would no longer log user activity). In 2017, PC World noted that it was difficult to measure the effect a VPN service has on internet speed, because of variables like location, internet service speeds, and hardware.
A 2016 review in PCMag gave the HMA Android app 3 out of 5 stars. It praised HMA for its server selection and user interface, but criticized it for price, speed, and the lack of advanced features. In 2018, PCMag gave similar feedback on the HMA VPN service. PC World’s 2017 review also praised HMA's simple user interface, but criticized the lack of advanced features, saying the software was ideal for casual users that do not need advanced configuration options.
References
External links
Avast
Virtual private network services
Computer companies established in 2005
Computer security software
Proxy servers
2005 establishments in England
British companies established in 2005
2016 mergers and acquisitions
British subsidiaries of foreign companies
Gen Digital software | HMA (VPN) | Engineering | 1,222 |
6,057,496 | https://en.wikipedia.org/wiki/Wireless%20supplicant | A Wireless Supplicant is a program that runs on a computer and is responsible for making login requests to a wireless network. It handles passing the login and encryption credentials to the authentication server. It also handles roaming from one wireless access point to another, in order to maintain connectivity.
See also
Supplicant
wpa_supplicant
Xsupplicant
References
Wireless networking | Wireless supplicant | Technology,Engineering | 82 |
52,745,718 | https://en.wikipedia.org/wiki/Johann%20Gasteiger | Johann Gasteiger (27 October 1941 in Dachau) is a German Chemist and a Chemoinformatician on which he wrote and edited various books.
Life
Johann Gasteiger studied Chemistry at Ludwig Maximilian University of Munich, ETH Zurich and University of Zurich. He obtained his PhD in Organic Chemistry at Ludwig Maximilian University of Munich in 1971 with Professor Rolf Huisgen. After Postdoc at the University of California, Berkeley until 1972, he was an assistant professor at Technical University of Munich and received his Habilitation in 1979 under the mentorship of Professor Ivar Ugi. From 1994 until 2007 he was a professor at University of Erlangen–Nuremberg in the "Computer-Chemie-Centrum", which he cofounded. In 1997, Johann Gasteiger founded the company Molecular Networks, which distributes software developed at the Computer-Chemie-Centrum.
Johann Gasteiger is one of the pioneers of cheminformatics. His main research interests are the development of software for drug design (for example via QSAR), the simulation of chemical reactions, synthesis planning in organic chemistry, machine learning for spectroscopy, and the application of neural networks and genetic algorithms in chemistry.
Career
In 1979, Johann Gasteiger and Mario Marsili published a method for the iterative calculation of atomic partial charges in molecules. This work is his most-cited publication.
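The Gasteiger-Marsili partial-charge scheme is implemented in several open-source cheminformatics toolkits. As an illustration, a minimal example using the RDKit toolkit (assuming RDKit is installed) is sketched below; ethanol is used purely as an example molecule.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CCO")          # ethanol
AllChem.ComputeGasteigerCharges(mol)     # iterative Gasteiger-Marsili partial charges
for atom in mol.GetAtoms():
    charge = float(atom.GetProp("_GasteigerCharge"))
    print(atom.GetSymbol(), round(charge, 3))
```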
Between 1987 and 1991 Johann Gasteiger was a project manager for the development of the ChemInform RX database.
The 3D structure generator CORINA has been developed in his group since 1985.
Johann Gasteiger has pioneered the use of neural networks in chemistry. It is largely due to his contributions that neural networks are among the standard methods in cheminformatics today.
Awards
1991 Gmelin-Beilstein Denkmünze of the Society of German Chemists for contributions to Computational Chemistry
1997 Herman Skolnik Award of the Division of Chemical Information of the American Chemical Society
2005 Mike Lynch Award of the Chemical Structure Association
2006 ACS Award for Computers in Chemical and Pharmaceutical Research
2006 The 2nd German Conference on Chemoinformatics (also the 20th CIC-Workshop of the Fachgruppe Chemie-Information-Computer of the GDCh) was dedicated to Johann Gasteiger.
References
External links
website at University of Erlangen–Nuremberg
website of Molecular Networks GmbH
1941 births
20th-century German chemists
German organic chemists
Ludwig Maximilian University of Munich alumni
Academic staff of the Technical University of Munich
Living people
21st-century German chemists
ETH Zurich alumni | Johann Gasteiger | Chemistry | 518 |
1,362,163 | https://en.wikipedia.org/wiki/Mepacrine | Mepacrine, also called quinacrine or by the trade names Atabrine or Atebrin, is a medication with several uses. It is related to chloroquine and mefloquine. Although available from compounding pharmacies, as of August 2020 approved formulations are not available in the United States.
Medical uses
The main uses of mepacrine are as an antiprotozoal, antirheumatic, and an intrapleural sclerosing agent.
Mepacrine is used off label as a primary antimicrobial agent for patients with metronidazole-resistant giardiasis and patients who should not receive or cannot tolerate metronidazole. Giardiasis with a high level of drug resistance may even require a combination of mepacrine and metronidazole to cure.
Mepacrine is also used off-label for the treatment of systemic lupus erythematosus, indicated in the treatment of discoid and subcutaneous lupus manifestations, particularly in patients who are unable to take hydroxychloroquine.
As a sclerosing agent, it is used for pneumothorax prophylaxis in patients at high risk of recurrence, e.g., in those with cystic fibrosis.
Mepacrine is not the drug of choice because side effects are common, including toxic psychosis, and may cause permanent damage. See mefloquine for more information.
In addition to medical applications, mepacrine is an effective in vitro research tool for the epifluorescent visualization of cells, especially platelets. Mepacrine is a green fluorescent dye taken up by most cells. Platelets store mepacrine in dense granules.
Mechanism
Its mechanism of action against protozoa is uncertain, but it is thought to act against the protozoan's cell membrane.
It is known to act as a histamine N-methyltransferase inhibitor.
It also inhibits NF-κB and activates p53.
History
Antiprotozoal
Mepacrine was initially approved in the 1930s as an antimalarial drug. It was used extensively during the Second World War by Allied forces fighting in North Africa and the Far East to prevent malaria.
This antiprotozoal is also approved for the treatment of giardiasis (an intestinal parasite), and has been researched as an inhibitor of phospholipase A2.
Scientists at Bayer in Germany first synthesised mepacrine in 1931. The product was one of the first synthetic substitutes for quinine although later superseded by chloroquine.
Anthelmintics
In addition it has been used for treating tapeworm infections.
Creutzfeldt–Jakob disease
Mepacrine has been shown to bind to the prion protein and prevent the formation of prion aggregates in vitro,
and full clinical trials of its use as a treatment for Creutzfeldt–Jakob disease are under way in the United Kingdom and the United States. Small trials in Japan have reported improvement in the condition of patients with the disease,
although other reports have shown no significant effect,
and treatment of scrapie in mice and sheep has also shown no effect. Possible reasons for the lack of an in vivo effect include inefficient penetration of the blood–brain barrier, as well as the existence of drug-resistant prion proteins that increase in number when selected for by treatment with mepacrine.
Non-surgical sterilization for women
The use of mepacrine for non-surgical sterilization for women has also been studied. The first report of this method claimed a first year failure rate of 3.1%. However, despite a multitude of clinical studies on the use of mepacrine and female sterilization, no randomized, controlled trials have been reported to date and there is some controversy over its use.
Pellets of mepacrine are inserted through the cervix into a woman's uterine cavity using a preloaded inserter device, similar in manner to IUCD insertion. The procedure is undertaken twice, first in the proliferative phase, 6 to 12 days following the first day of the menstrual cycle and again one month later. The sclerosing effects of the drugs at the utero-tubal junctions (where the Fallopian tubes enter the uterus) results in scar tissue forming over a six-week interval to close off the tubes permanently.
In the United States, this method has undergone Phase I clinical testing. The FDA has waived the necessity for Phase II clinical trials because of the extensive data pertaining to other uses of mepacrine. The next step in the FDA approval process in the United States is a Phase III large multi-center clinical trial. The method is currently used off-label.
Many peer reviewed studies suggest that mepacrine sterilization (QS) is potentially safer than surgical sterilization. Nevertheless, in 1998 the Supreme Court of India banned the import or use of the drug, allegedly based on reports that it could cause cancer or ectopic pregnancies.
Skin dye
During World War II, Caucasian American operatives involved in Sino-American Cooperative Organization activities during the Second Sino-Japanese War yellowed their skin using mepacrine tablets in order to better blend in with the native Chinese population.
See also
Chloroquine
Amodiaquine
Pamaquine
Mefloquine
References
External links
National Institute on Aging (NIA) trial
Disulfiram-like drugs
Antiprotozoal agents
Antimalarial agents
Sterilization (medicine)
Experimental methods of birth control
Acridines
Chloroarenes
Phenol ethers
Aromatic amines
Diethylamino compounds | Mepacrine | Biology | 1,190 |
9,544,219 | https://en.wikipedia.org/wiki/Common%20Public%20Radio%20Interface | The Common Public Radio Interface (CPRI) standard defines an interface between Radio Equipment Control (REC) and Radio Equipment (RE). Oftentimes, CPRI links are used to carry data between cell sites/remote radio heads and base stations/baseband units.
The purpose of CPRI is to allow replacement of a copper or coax cable connection between a radio transceiver (used, for example, for mobile-telephone communication and typically located in a tower) and a base station/baseband unit (typically located on the ground nearby), so the connection can be made to a remote and more convenient location. This connection (often referred to as the fronthaul network) can be a fiber to an installation where multiple remote base stations may be served. The fiber supports both single-mode and multi-mode communication. The fiber end is connected with a Small Form-factor Pluggable (SFP) transceiver device.
The companies working to define the specification include Ericsson AB, Huawei Technologies Co. Ltd, NEC Corporation and Nokia.
See also
Open Base Station Architecture Initiative (OBSAI)
Remote radio head (RRH)
References
External links
CPRI Homepage
CPRI specification (free) at CPRI homepage
Radio technology | Common Public Radio Interface | Technology,Engineering | 249 |
1,514,469 | https://en.wikipedia.org/wiki/Column%20chromatography | Column chromatography in chemistry is a chromatography method used to isolate a single chemical compound from a mixture. Chromatography is able to separate substances based on differential absorption of compounds to the adsorbent; compounds move through the column at different rates, allowing them to be separated into fractions. The technique is widely applicable, as many different adsorbents (normal phase, reversed phase, or otherwise) can be used with a wide range of solvents. The technique can be used on scales from micrograms up to kilograms. The main advantage of column chromatography is the relatively low cost and disposability of the stationary phase used in the process. The latter prevents cross-contamination and stationary phase degradation due to recycling. Column chromatography can be done using gravity to move the solvent, or using compressed gas to push the solvent through the column.
A thin-layer chromatogram can show how a mixture of compounds will behave when purified by column chromatography. The separation is first optimised using thin-layer chromatography before performing column chromatography.
Column preparation
A column is prepared by packing a solid adsorbent into a cylindrical glass or plastic tube. The size will depend on the amount of compound being isolated. The base of the tube contains a filter, either a cotton or glass wool plug, or glass frit to hold the solid phase in place. A solvent reservoir may be attached at the top of the column.
Two methods are generally used to prepare a column: the dry method and the wet method. For the dry method, the column is first filled with dry stationary phase powder, followed by the addition of mobile phase, which is flushed through the column until it is completely wet, and from this point is never allowed to run dry. For the wet method, a slurry is prepared of the eluent with the stationary phase powder and then carefully poured into the column. The top of the silica should be flat, and the top of the silica can be protected by a layer of sand. Eluent is slowly passed through the column to advance the organic material.
The individual components are retained by the stationary phase differently and separate from each other while they are running at different speeds through the column with the eluent. At the end of the column they elute one at a time. During the entire chromatography process the eluent is collected in a series of fractions. Fractions can be collected automatically by means of fraction collectors. The productivity of chromatography can be increased by running several columns at a time. In this case multi stream collectors are used. The composition of the eluent flow can be monitored and each fraction is analyzed for dissolved compounds, e.g. by analytical chromatography, UV absorption spectra, or fluorescence. Colored compounds (or fluorescent compounds with the aid of a UV lamp) can be seen through the glass wall as moving bands.
Stationary phase
The stationary phase or adsorbent in column chromatography is a solid. The most common stationary phase for column chromatography is silica gel, the next most common being alumina. Cellulose powder has often been used in the past. A wide range of stationary phases are available in order to perform ion exchange chromatography, reversed-phase chromatography (RP), affinity chromatography or expanded bed adsorption (EBA). The stationary phases are usually finely ground powders or gels and/or are microporous for an increased surface, though in EBA a fluidized bed is used. There is an important ratio between the stationary phase weight and the dry weight of the analyte mixture that can be applied onto the column. For silica column chromatography, this ratio lies within 20:1 to 100:1, depending on how close to each other the analyte components are being eluted.
Mobile phase (eluent)
The mobile phase or eluent is a solvent or a mixture of solvents used to move the compounds through the column. It is chosen so that the retention factor value of the compound of interest is roughly around 0.2 - 0.3 in order to minimize the time and the amount of eluent needed to run the chromatography. The eluent is also chosen so that the different compounds can be separated effectively. The eluent is optimized in small-scale pretests, often using thin layer chromatography (TLC) with the same stationary phase, using solvents of different polarity until a suitable solvent system is found. Common mobile phase solvents, in order of increasing polarity, include hexane, dichloromethane, ethyl acetate, acetone, and methanol. A common solvent system is a mixture of hexane and ethyl acetate, with proportions adjusted until the target compound has a retention factor of 0.2 - 0.3. Contrary to common misconception, methanol alone can be used as an eluent for highly polar compounds, and does not dissolve silica gel.
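As a small illustration, the retention factor from a TLC pretest is simply the ratio of the distance travelled by the compound to the distance travelled by the solvent front; the distances below are hypothetical values chosen only to show the calculation.

```python
def retention_factor(spot_distance_cm, solvent_front_cm):
    """Rf = distance moved by the compound / distance moved by the solvent front."""
    return spot_distance_cm / solvent_front_cm

# A TLC plate developed to 8.0 cm, with the compound of interest at 2.0 cm,
# gives Rf = 0.25 -- inside the 0.2 - 0.3 window recommended above.
print(retention_factor(2.0, 8.0))
```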
There is an optimum flow rate for each particular separation. A faster flow rate of the eluent minimizes the time required to run a column and thereby minimizes diffusion, resulting in a better separation. However, the maximum flow rate is limited because a finite time is required for the analyte to equilibrate between the stationary phase and mobile phase (see the Van Deemter equation). A simple laboratory column runs by gravity flow. The flow rate of such a column can be increased by extending the column of fresh eluent above the top of the stationary phase, or decreased using the tap controls. Faster flow rates can be achieved by using a pump or by using compressed gas (e.g. air, nitrogen, or argon) to push the solvent through the column (flash column chromatography).
The particle size of the stationary phase is generally finer in flash column chromatography than in gravity column chromatography. For example, one of the most widely used silica gel grades in the former technique is mesh 230 – 400 (40 – 63 μm), while the latter technique typically requires mesh 70 – 230 (63 – 200 μm) silica gel.
A spreadsheet that assists in the successful development of flash columns has been developed. The spreadsheet estimates the retention volume and band volume of analytes, the fraction numbers expected to contain each analyte, and the resolution between adjacent peaks. This information allows users to select optimal parameters for preparative-scale separations before the flash column itself is attempted.
Automated systems
Column chromatography is an extremely time-consuming stage in any lab and can quickly become the bottleneck for any process lab. Many manufacturers such as Biotage, Buchi, Interchim and Teledyne Isco have developed automated flash chromatography systems (typically referred to as LPLC, low pressure liquid chromatography) that minimize human involvement in the purification process. Automated systems include components normally found on more expensive high performance liquid chromatography (HPLC) systems, such as a gradient pump, sample injection ports, a UV detector and a fraction collector to collect the eluent. Typically these automated systems can separate samples from a few milligrams up to an industrial scale of many kilograms, and offer a much cheaper and quicker alternative to performing multiple injections on prep-HPLC systems.
The resolution (or the ability to separate a mixture) on an LPLC system will always be lower than on HPLC, as the packing material in an HPLC column can be much smaller, typically only 5 micrometres, thus increasing the stationary phase surface area, increasing surface interactions and giving better separation. However, the use of this small packing media causes high back pressure, which is why the technique is termed high pressure liquid chromatography. The LPLC columns are typically packed with silica of around 50 micrometres, which reduces back pressure and resolution but also removes the need for expensive high-pressure pumps. Manufacturers are now starting to move into higher pressure flash chromatography systems, termed medium pressure liquid chromatography (MPLC) systems, which operate at higher pressures.
Column chromatogram resolution calculation
Typically, column chromatography is set up with peristaltic pumps, flowing buffers and the solution sample through the top of the column. The solutions and buffers pass through the column where a fraction collector at the end of the column setup collects the eluted samples. Prior to the fraction collection, the samples that are eluted from the column pass through a detector such as a spectrophotometer or mass spectrometer so that the concentration of the separated samples in the sample solution mixture can be determined.
For example, to separate two different proteins with different binding capacities to the column from a solution sample, a good type of detector would be a spectrophotometer using a wavelength of 280 nm. The higher the concentration of protein in the solution eluting from the column, the higher the absorbance at that wavelength.
Because the column chromatography has a constant flow of eluted solution passing through the detector at varying concentrations, the detector must plot the concentration of the eluted sample over a course of time. This plot of sample concentration versus time is called a chromatogram.
The ultimate goal of chromatography is to separate different components from a solution mixture. The resolution expresses the extent of separation between the components from the mixture. The higher the resolution of the chromatogram, the better the extent of separation of the samples the column gives. This data is a good way of determining the column's separation properties of that particular sample. The resolution can be calculated from the chromatogram.
The separate curves in the diagram represent different sample elution concentration profiles over time based on their affinity to the column resin. To calculate resolution, the retention time and curve width are required.
Retention time is the time from the start of signal detection by the detector to the peak height of the elution concentration profile of each different sample.
Curve width is the width of the concentration profile curve of the different samples in the chromatogram in units of time.
A simplified method of calculating chromatogram resolution is to use the plate model. The plate model assumes that the column can be divided into a certain number of sections, or plates and the mass balance can be calculated for each individual plate. This approach approximates a typical chromatogram curve as a Gaussian distribution curve. By doing this, the curve width is estimated as 4 times the standard deviation of the curve, 4σ. The retention time is the time from the start of signal detection to the time of the peak height of the Gaussian curve.
From the variables in the figure above, the resolution, plate number, and plate height of the column plate model can be calculated using the equations:
Resolution (Rs):
Rs = 2(tRB – tRA)/(wB + wA),
where:
tRB = retention time of solute B
tRA = retention time of solute A
wB = Gaussian curve width of solute B
wA = Gaussian curve width of solute A
Plate Number (N):
N = (tR)^2/(w/4)^2
Plate Height (H):
H = L/N
where L is the length of the column.
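As an illustration, the three plate-model formulas above can be evaluated directly; the retention times, peak widths and column length in the sketch below are hypothetical values chosen only to show the calculation.

```python
def resolution(t_ra, t_rb, w_a, w_b):
    """Rs = 2(tRB - tRA) / (wB + wA), using the Gaussian peak widths."""
    return 2 * (t_rb - t_ra) / (w_b + w_a)

def plate_number(t_r, w):
    """N = (tR)^2 / (w/4)^2, i.e. 16 (tR / w)^2 under the plate model."""
    return (t_r / (w / 4)) ** 2

def plate_height(length, n):
    """H = L / N."""
    return length / n

# Hypothetical elution data: solute A at 10 min (width 2 min), solute B at 14 min (width 2.4 min)
rs = resolution(10.0, 14.0, 2.0, 2.4)
n = plate_number(14.0, 2.4)
print(round(rs, 2), round(n), round(plate_height(30.0, n), 3))  # 1.82, 544, 0.055 for a 30 cm column
```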
Column adsorption equilibrium
For an adsorption column, the column resin (the stationary phase) is composed of microbeads. Even smaller particles such as proteins, carbohydrates, metal ions, or other chemical compounds are conjugated onto the microbeads. Each binding particle that is attached to the microbead can be assumed to bind in a 1:1 ratio with the solute sample sent through the column that needs to be purified or separated.
Binding between the target molecule to be separated and the binding molecule on the column beads can be modeled using a simple equilibrium reaction Keq = [CS]/([C][S]) where Keq is the equilibrium constant, [C] and [S] are the concentrations of the target molecule and the binding molecule on the column resin, respectively. [CS] is the concentration of the complex of the target molecule bound to the column resin.
Using this as a basis, three different isotherms can be used to describe the binding dynamics of a column chromatography: linear, Langmuir, and Freundlich.
The linear isotherm occurs when the solute concentration needed to be purified is very small relative to the binding molecule. Thus, the equilibrium can be defined as:
[CS] = Keq[C].
For industrial scale uses, the total binding molecules on the column resin beads must be factored in because unoccupied sites must be taken into account. The Langmuir isotherm and Freundlich isotherm are useful in describing this equilibrium. The Langmuir isotherm is given by:
[CS] = (KeqStot[C])/(1 + Keq[C]), where Stot is the total binding molecules on the beads.
The Freundlich isotherm is given by:
[CS] = Keq[C]^(1/n)
The Freundlich isotherm is used when the column can bind to many different samples in the solution that needs to be purified. Because the many different samples have different binding constants to the beads, there are many different Keqs. Therefore, the Langmuir isotherm is not a good model for binding in this case.
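The three isotherms can be compared numerically for a single set of hypothetical parameter values, as in the short Python sketch below; the numbers are illustrative only.

```python
def linear_isotherm(c, k_eq):
    """[CS] = Keq [C]; valid when the solute is dilute relative to the binding sites."""
    return k_eq * c

def langmuir_isotherm(c, k_eq, s_tot):
    """[CS] = Keq Stot [C] / (1 + Keq [C]); accounts for a finite number of sites."""
    return k_eq * s_tot * c / (1 + k_eq * c)

def freundlich_isotherm(c, k_eq, n):
    """[CS] = Keq [C]^(1/n); empirical form for heterogeneous binding."""
    return k_eq * c ** (1.0 / n)

# Hypothetical values: Keq = 2.0, Stot = 1.5, n = 2, [C] = 0.5 (arbitrary consistent units)
for f, args in [(linear_isotherm, (0.5, 2.0)),
                (langmuir_isotherm, (0.5, 2.0, 1.5)),
                (freundlich_isotherm, (0.5, 2.0, 2))]:
    print(f.__name__, round(f(*args), 3))   # 1.0, 0.75, 1.414
```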
See also
Fast protein liquid chromatography (FPLC) – separation of proteins using column chromatography
High-performance liquid chromatography (HPLC) – column chromatography using high pressure
References
External links
Flash Column Chromatography Guide (pdf)
Radial Flow Chromatography
Chromatography
Laboratory techniques | Column chromatography | Chemistry | 2,916 |
75,271,665 | https://en.wikipedia.org/wiki/L-H%20mode%20transition | Low to High Confinement Mode Transition, more commonly referred to as L-H transition, is a phenomenon in the fields of plasma physics and magnetic confinement fusion, signifying the transition from less efficient plasma confinement to highly efficient modes. The L-H transition, a milestone in the development of nuclear fusion, enables the confinement of high-temperature plasmas (ionized gases at extremely high temperatures). The transition is dependent on many factors such as density, magnetic field strength, heating method, plasma fueling, and edge plasma control, and is made possible through mechanisms such as edge turbulence, E×B shear, edge electric field, and edge current and plasma flow. Researchers studying this field use tools such as Electron Cyclotron Emission, Thomson Scattering, magnetic diagnostics, and Langmuir probes to gauge the PLH (energy needed for the transition) and seek to lower this value. This confinement is a necessary condition for sustaining the fusion reactions, which involve the combination of atomic nuclei, leading to the release of vast amounts of energy.
Background
Key terms and concepts needed to comprehend L-H Transition include understanding plasma and fusion.
Plasma
Plasma is one of the four fundamental states of matter, alongside solid, liquid, and gas. In contrast to the other states, plasma is composed of ionized gas: its electrons are separated from the atoms or molecules, resulting in an electrically conductive medium. It occurs in phenomena such as lightning, stars, and fusion plasma.
Fusion
Fusion is a nuclear process in which two atomic nuclei combine to form a single, heavier nucleus. This phenomenon releases a substantial amount of energy and is the process that powers stars. On Earth, controlled nuclear fusion is being pursued as a clean and virtually limitless energy source. It involves the fusion of isotopes such as deuterium (a hydrogen atom with 1 neutron) and tritium (a hydrogen atom with 2 neutrons), and generates energy in the form of kinetic energy of released particles, such as neutrons, and intense heat. The principle is based on Einstein's equation E=mc^2: the resulting helium is marginally lighter than the two original hydrogen nuclei, and this difference in mass, known as the mass defect, is converted into energy. It is this energy that can be converted into clean electricity without producing waste.
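As an illustration of the mass defect, the deuterium-tritium reaction can be worked through with standard atomic masses; the masses and the conversion factor below are textbook values, not taken from this article.

```python
# D + T -> He-4 + n: the lost mass reappears as roughly 17.6 MeV of kinetic energy.
u_to_MeV = 931.494          # energy equivalent of one atomic mass unit, MeV
m_deuterium = 2.014102      # atomic mass units
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
print(f"mass defect = {mass_defect:.6f} u")
print(f"energy released ≈ {mass_defect * u_to_MeV:.1f} MeV")   # ≈ 17.6 MeV
```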
Overview of Confinement Modes
Plasma in L-Mode and H-Mode exhibits distinct characteristics related to turbulence, control, power thresholds, energy efficiency, and confinement durations.
PLH (H-Mode Power Threshold)
PLH
PLH (H-mode power threshold) is an essential parameter in nuclear fusion. It represents the minimum power input required to trigger the transition from a low-confinement mode (L-Mode) to a high-confinement mode (H-Mode) in plasma confinement devices, such as tokamaks or stellarators. The PLH signifies the point at which the plasma attains the conditions necessary for enhanced energy confinement, reduced turbulence, and improved stability characteristic of H-Mode. Controlled nuclear fusion requires understanding and precise control of the PLH in order to facilitate the continuous generation of energy from the fusion process.
Factors Influencing PLH
Plasma Density and Magnetic Field Strength
The H-Mode Power Threshold (PLH) in experimental controlled fusion is highly dependent on both plasma density and magnetic field strength. Higher plasma densities and stronger magnetic fields correlate with an elevated PLH.
In the scaling relations used to characterize plasma confinement, τ is the confinement time, n is the plasma density, V is the volume of the plasma, and B is the magnetic field strength.
Higher plasma densities result in increased particle collisions, enhancing the confinement of energy and increasing the plasma's stability. The greater the density, the higher the threshold of power (PLH) required to transition from L-Mode to H-Mode. The increased particle density allows for improved plasma confinement, which is vital for sustaining fusion reactions efficiently.
Similarly, stronger magnetic fields serve to contain and shape the plasma, mitigating its loss and preventing contact with the reactor's walls, which would ultimately lead to the reaction's failure. This magnetic confinement is essential for preventing energy losses and ensuring that the plasma reaches the conditions necessary for the L-Mode to H-Mode transition.
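The dependence of the power threshold on density and magnetic field is usually expressed through empirical scaling laws. The sketch below uses the widely cited scaling of Martin et al. (2008), P_LH ≈ 0.049 n^0.72 B^0.80 S^0.94, with density in units of 10^20 m^-3, field in tesla, plasma surface area in m^2, and power in MW; this particular formula and the example parameters are assumptions added here for illustration and are not part of the article.

```python
def p_lh_megawatts(n_e20, b_t, surface_m2):
    """Approximate H-mode power threshold from the Martin et al. (2008) scaling.

    n_e20      -- line-averaged electron density in units of 1e20 m^-3
    b_t        -- toroidal magnetic field in tesla
    surface_m2 -- plasma surface area in m^2
    """
    return 0.049 * n_e20**0.72 * b_t**0.80 * surface_m2**0.94

# Illustrative ITER-like parameters: n = 0.5e20 m^-3, B = 5.3 T, S = 680 m^2
print(round(p_lh_megawatts(0.5, 5.3, 680.0), 1), "MW")   # roughly 50 MW
```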
Heating Method
The heating methods used in fusion devices significantly impact the PLH. Various techniques, such as neutral beam injection (injection of high-energy neutral particles to increase the plasma temperature), radio frequency heating (the use of radio-frequency waves to increase the kinetic energy of particles), and magnetic confinement (the use of magnetic fields to control extremely hot plasma), are employed to heat the plasma to the temperatures required for H-Mode. The choice of heating method and the effectiveness of energy transfer to the plasma are key factors in determining the PLH.
Plasma Fueling
Plasma fueling, which involves introducing additional fuel into the plasma, is another factor influencing the PLH. By injecting fuel, researchers can alter the plasma's density and temperature. An efficient and well-calibrated fueling system can elevate the plasma density, increasing the number of particles within the plasma, which is essential for enhancing confinement and stability. Additionally, effective fueling contributes to the rise in plasma temperature, a vital factor in achieving the conditions required for the L-Mode to H-Mode transition.
Edge Plasma Control
Edge plasma control is an important aspect of achieving and maintaining H-Mode in fusion devices. The edge plasma region, located at the outer boundary of the plasma confinement area, is susceptible to instabilities and turbulence.
The edge plasma is sensitive to disturbances because it is close to the magnetic confinement boundary, where the plasma interacts with the walls of the containment vessel. These disturbances can lead to issues such as uneven heat and particle transport or localized turbulence, which affect the transition to H-Mode.
To tackle this techniques such as magnetic shaping and advanced tools can control the edge plasma. The aim is to reduce these disturbances and make the edge plasma more stable. By regulating factors such as temperature, density, and impurities in the edge plasma, researchers can influence the PLH (H-Mode Power Threshold). Effective control of these factors ensures that the conditions for transitioning from L-Mode to H-Mode are met and maintained.
Methods for Measuring and Determining PLH
Electron Cyclotron Emissions (ECE)
Electron Cyclotron Emission (ECE) diagnostics involve observing the radiation emitted by electrons as they undergo cyclotron motion (the circular or helical gyration of a charged particle around magnetic field lines). This technique provides valuable insights into plasma parameters, including electron temperature and density. By analyzing the spectral characteristics of the emitted radiation, researchers can precisely measure these properties, aiding in the determination of PLH.
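For orientation (a standard textbook relation rather than a measured result), the emission occurs at harmonics of the electron cyclotron frequency

$f_{ce} = \frac{eB}{2\pi m_e}$

where $e$ is the electron charge, $m_e$ the electron mass, and $B$ the local magnetic field strength. Because the magnetic field in a tokamak varies with major radius, each emission frequency corresponds to a particular location in the plasma, which is what allows ECE to provide spatially resolved temperature measurements.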
Thomson Scattering
Thomson scattering employs laser beams that scatter off plasma electrons. The spectrum of the scattered light carries information about the velocity distribution and temperature of these electrons, providing critical information about the plasma's thermal energy.
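In simplified terms (a sketch of incoherent Thomson scattering, neglecting relativistic corrections), the scattered spectrum is Doppler-broadened by the electrons' thermal motion, with a characteristic width set by the electron thermal speed

$v_{\mathrm{th}} = \sqrt{2 k_B T_e / m_e}$

so the spectral width of the scattered light yields the electron temperature $T_e$, while the total scattered intensity is proportional to the electron density.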
Magnetic Diagnostics
Magnetic sensors and probes are employed to map the magnetic fields within the plasma confinement device. Knowledge of the magnetic field's strength and configuration is fundamental for determining PLH, as it directly affects plasma stability and confinement.
Langmuir Probes
Langmuir probes are small electrodes inserted into the plasma to measure its properties, including electron temperature, density, and plasma potential. These measurements are critical for evaluating PLH and understanding the behavior of the plasma.
Transition Mechanisms
A few key processes make the L-H transition possible and account for the improved stability of H-mode: edge turbulence, E×B shear, the edge electric field, edge current, and plasma flow.
Mechanisms Driving L-H Transition
Edge Turbulence
The behavior of edge turbulence, a common feature in plasmas, is closely linked to the L-H transition. Researchers study how turbulence responds to changes in parameters like E×B shear, Er gradients, and other variables.
E×B Shear
One of the mechanisms thought to be responsible for triggering the L-H transition is E×B shear stabilization of turbulence. This refers to the rotation of the plasma resulting from the interaction between the electric field (E) and the magnetic field (B). As the plasma approaches the transition point, the E×B shear increases, creating a shearing motion within the plasma. This shearing motion tears apart turbulent structures, such as eddies and vortices, thereby suppressing the turbulent transport of particles, heat, and energy and promoting the stability and improved confinement characteristic of H-mode.
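For reference (a standard plasma-physics relation, stated here only to make the mechanism concrete), the underlying motion is the E×B drift, with velocity

$\mathbf{v}_{E\times B} = \frac{\mathbf{E}\times\mathbf{B}}{B^{2}}$

which is perpendicular to both fields. It is the radial variation, or shear, of this drift velocity across the plasma edge that tears apart turbulent eddies and suppresses turbulent transport.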
Edge Electric Field (Er)
The behavior of the plasma at its edge, specifically the edge electric field (Er), plays a role in the L-H transition. As the transition approaches, there is the emergence of increasingly steep Er gradients near the plasma's edge. These gradient changes are closely associated with the suppression of turbulent transport, which refers to the erratic movement of particles and heat within the plasma. This suppression marks the shift to the H-mode, a state of plasma confinement that is significantly more efficient and stable, making it a key goal in nuclear fusion research.
Edge Current and Plasma Flow
The L-H transition's characteristics are further influenced by edge current and the toroidal flow of plasma. The complex interactions between these two elements can introduce variability in the threshold conditions for the transition to the more efficient H-mode.
Future Implications
If the L-H transition in nuclear fusion is understood and exploited correctly, it could help make fusion a practical source of clean energy and sustainable power plants.
Importance of Understanding L-H Transition in Nuclear Fusion
Enhanced Confinement
The transition to H-Mode brings about an improvement in plasma confinement. This leads to increased energy production and more efficient fusion reactions.
Pedestal Formation
H-Mode is associated with the development of a "pedestal" in the plasma profile. This pedestal acts as a protective barrier, preventing the plasma from contacting the reactor walls. The pedestal enhances stability and enables the plasma to reach the conditions necessary for sustained fusion reactions.
PLH Optimization
Achieving and maintaining H-Mode requires reaching the PLH (H-Mode Power Threshold). Understanding the factors that influence PLH, such as plasma density, magnetic field strength, heating methods, and edge plasma control, is essential for ensuring a smooth transition and sustained H-Mode operation.
Future Energy Solutions
Controlled nuclear fusion has the potential to revolutionize the energy sector. It offers a clean and virtually limitless energy source, significantly reducing greenhouse gas emissions and addressing energy demands. The L-H transition is a critical step towards harnessing the immense energy release of fusion reactions.
References
Plasma phenomena | L-H mode transition | Physics | 2,163 |
611,954 | https://en.wikipedia.org/wiki/Rescue%20of%20Jews%20during%20the%20Holocaust | During World War II, some individuals and groups helped Jews and others escape the Holocaust conducted by Nazi Germany.
The support, or at least absence of active opposition, of the local population was essential to Jews attempting to hide but often lacking in Eastern Europe. Those in hiding depended on the assistance of non-Jews. Having money, social connections with non-Jews, a non-Jewish appearance, perfect command of the local language, determination, and luck played a major role in determining survival. Jews in hiding were hunted down with the assistance of local collaborators and rewards offered for their denunciation. The death penalty was sometimes enforced on people hiding them, especially in eastern Europe, including Poland. Rescuers' motivations varied on a spectrum from altruism to expecting sex or material gain; it was not uncommon for helpers to betray or murder Jews if their money ran out.
Jews were hidden or saved by non-Jews throughout Nazi-occupied Europe. The Catholic Church and Vatican opposed the systemic murder of Jews, and in Italy the Mussolini government refused to deport Jews or participate in their mass murder. Many diplomats were involved in efforts to help Jews escape, such as by providing documents that allowed safe transit.
Since 1953, Israel's Holocaust memorial, Yad Vashem, has recognized 26,973 people as Righteous among the Nations. Yad Vashem's Holocaust Martyrs' and Heroes' Remembrance Authority, headed by an Israeli Supreme Court justice, recognizes rescuers of Jews as Righteous among the Nations to honor non-Jews who risked their lives during the Holocaust to save Jews from extermination by Nazi Germany.
By country
Poland
Poland had a very large Jewish population, and, according to Norman Davies, more Jews were both killed and rescued in Poland than in any other nation, the rescue figure usually being put at between 100,000 and 150,000. The memorial at Bełżec extermination camp commemorates 600,000 murdered Jews and 1,500 Poles who tried to save Jews. 6,532 Polish men and women have been recognized as rescuers by Yad Vashem in Israel, more than from any other country in the world, constituting the largest national contingent. Martin Gilbert wrote that "Poles who risked their own lives to save the Jews were indeed the exception. But they could be found throughout Poland, in every town and village."
Poland during the Holocaust of World War II was under total enemy control: initially, half of Poland was occupied by the Germans, as the General Government and Reichskommissariat; the other half by the Soviets, along with the territories of today's Belarus and Ukraine. Individuals hiding Jews, together with their families, were threatened with the death penalty. The list of Polish citizens officially recognized as Righteous includes 700 names of those who lost their lives while trying to help their Jewish neighbors. There were also groups, such as the Polish Żegota organization, that took drastic and dangerous steps to rescue victims. Witold Pilecki, a member of Armia Krajowa, the Polish Home Army, organized a resistance movement in Auschwitz from 1940, and Jan Karski tried to spread the word of the Holocaust.
When AK Home Army Intelligence discovered the true fate of transports leaving the Jewish Ghetto, the Council to Aid Jews – Rada Pomocy Żydom (codename Żegota) – was established in late 1942 in co-operation with church groups. The organization saved thousands. Emphasis was placed on protecting children, as it was nearly impossible to intervene directly against the heavily guarded transports. False papers were prepared, and children were distributed among safe houses and church networks. Two women founded the movement: the Catholic writer and activist Zofia Kossak-Szczucka and the socialist Wanda Filipowicz. Some of its members had been involved in Polish nationalist movements, which were themselves anti-Jewish, but were appalled by the barbarity of the Nazi mass murders. In an emotional protest prior to the foundation of the council, Kossak wrote that Hitler's race murders were a crime about which it was not possible to remain silent. While Polish Catholics might still feel Jews were "enemies of Poland", Kossak wrote that protest was required: "God requires this protest from us... It is required of a Catholic conscience... The blood of the innocent calls for vengeance to the heavens."
In the 1948–49 Zegota Case, the Stalin-backed regime established in Poland after the war secretly tried and imprisoned the leading survivors of Zegota as part of a campaign to eliminate and besmirch resistance heroes who might threaten the new regime.
Jews were also aided by diplomats outside Poland. The Ładoś Group was a group of Polish diplomats and Jewish activists who created in Switzerland a system for the illegal production of Latin American passports aimed at saving European Jews from the Holocaust. About 10,000 Jews received such passports, of whom over 3,000 were saved. The group's efforts are documented in the Eiss Archive. Jews were also helped by Henryk Sławik in Hungary, who helped save over 30,000 Polish refugees, including 5,000 Polish Jews, by giving them false Polish passports with a Catholic designation, and by Tadeusz Romer in Japan.
Greece
The Foundation for the Advancement of Sephardic Studies and Culture writes "One cannot forget the repeated initiatives of the head of the Greek Christian Orthodox Metropolitan See of Thessaloniki, Gennadios, against the deportations, and most of all, the official letter of protest signed in Athens on March 23, 1943, by Archbishop Damaskinos of the Greek Orthodox Church, along with 27 prominent leaders of cultural, academic and professional organizations. The document, written in a very sharp language, refers to unbreakable bonds between Christian Orthodox and Jews, identifying them jointly as Greeks, without differentiation. It is noteworthy that such a document is unique in the whole of occupied Europe, in character, content and purpose".
The 275 Jews of the island of Zakynthos, however, survived the Holocaust. When the island's mayor, Loukas Karrer (Λουκάς Καρρέρ), was presented with the German order to hand over a list of Jews, Bishop Chrysostomos returned to the amazed Germans with a list of two names; his and the mayor's. Moreover, the Bishop wrote a letter to Hitler himself stating that the Jews of the island were under his supervision. In the meantime the island's population hid every member of the Jewish community. When the island was almost levelled by the great earthquake of 1953, the first relief came from the state of Israel, with a message that read "The Jews of Zakynthos have never forgotten their Mayor or their beloved Bishop and what they did for us."
The Jewish community of Volos, one of the most ancient in Greece, suffered fewer losses than any other Jewish community in Greece thanks to the timely and dynamic intervention and mobilization of the massive communist-leftist partisan movement EAM-ELAS (National Liberation Front (Greece) – Greek People's Liberation Army) and the successful cooperation of Joachim, head of the Greek Christian Orthodox Metropolitan See of Demetrias, with the chief rabbi of Volos, Moses Pesach, in evacuating the Jewish people from Volos after the events in Thessaloniki (the deportation of that city's Jews to concentration camps).
Princess Alice of Battenberg and Greece, who was the wife of Prince Andrew of Greece and Denmark and the mother of Prince Philip, Duke of Edinburgh, and mother-in-law of Queen Elizabeth II of the United Kingdom, stayed in occupied Athens during the Second World War, sheltering Jewish refugees, for which she is recognized as "Righteous Among the Nations" at Yad Vashem.
Although the Germans and Bulgarians deported a great number of Greek Jews, others were successfully hidden by their Greek neighbors.
82-year-old Simon Danieli traveled from Israel to his birthplace in Veria to thank the descendants of the people who helped him and his family escape Nazi persecution during World War II. Danieli was 13 in 1942 when his family – father Joseph, a grain merchant, mother Buena, and nine siblings – fled Veria to escape the increasingly frequent atrocities committed by Nazi forces against the city's Jews. They ended up in the small nearby village of Sykies, where the family was taken in by Giorgos and Panayiota Lanara, who offered them shelter, food and a hiding place in the woods, helped also by a priest, Nestoras Karamitsopoulos. The Nazis, however, soon stormed Sykies, where around 50 more Jews from Veria had also taken refuge. They questioned the priest about the whereabouts of the Jews, but when Karamitsopoulos refused to answer, they began raiding people's homes. They found Jews hidden in eight homes, and promptly set fire to the houses. They also turned their wrath on the priest, torturing him and pulling out his beard, according to Danieli.
France
Père Marie-Benoît was a French Capuchin priest who helped smuggle approximately 4,000 Jews to safety from Nazi-occupied Southern France and was subsequently recognized by Yad Vashem as a Righteous among the Nations in 1966. The French town of Le Chambon-sur-Lignon sheltered several thousand Jews. The Brazilian diplomat Luis Martins de Souza Dantas illegally issued Brazilian diplomatic visas to hundreds of Jews in France during the Vichy Government, saving them from almost certain death. Si Kaddour Benghabrit, the religious head of the Islamic Center of France, helped more than a thousand Jews by providing forged identity papers to the Jews of Paris during the German occupation of France. He also managed to hide many Jewish families in the rooms of the Paris Mosque as well as in its residences and women's prayer areas.
Belgium
In April 1943, members of the Belgian resistance held up the twentieth convoy train to Auschwitz, and freed 231 people. Several local governments did all they could to slow down or block the registration processes for Jews they were obliged to perform by the Nazis. Many people saved children by hiding them away in private houses and boarding schools. Of the approximately 50,000 Jews in Belgium in 1940, about 25,000 were deported, of whom only about 1,250 survived. Marie and Emile Taquet sheltered Jewish boys in a residential school or home. Bruno Reynders was a Belgian monk who defied the Nazis; implementing the directive of Pope Pius XII to save the Jews, he worked with local orphanages, Catholic nuns and the Belgian underground to forge false identities for Jewish children whose parents, faced with deportation to the death camps, willingly gave them up in an attempt to spare their lives. Père Bruno risked his life for his values and to save the lives of an estimated 400 Jewish children, and he is honored as a Righteous Gentile at Yad Vashem.
L'abbé Joseph André is another Catholic priest who secured safe hiding places with Belgian families, orphanages and other institutions for Jewish children and adults.
Denmark
The Jewish community in Denmark remained relatively unaffected by Germany's occupation of Denmark on 9 April 1940. The Germans allowed the Danish government to remain in office, and this cabinet rejected the notion that any "Jewish question" should exist in Denmark. No legislation was passed against Jews, and the yellow badge was not introduced in Denmark. In August 1943, this situation collapsed when the Danish government refused to introduce the death penalty as demanded by the Germans following a series of strikes and popular protests, and the German authorities forced the Danish government to shut down. During these events, German diplomat Georg Ferdinand Duckwitz tipped off Danish politician Hans Hedtoft that the Danish Jews would be deported to Germany following the collapse of the Danish government. Hedtoft alerted the Danish resistance and the Jewish community leader C.B. Henriques, who informed the acting Chief Rabbi Marcus Melchior (the Chief Rabbi, Max Friediger, had already been arrested as a hostage on 29 August 1943); Melchior urged the community to go into hiding during the service on 29 September 1943. During the following weeks, more than 7,200 members of Denmark's 8,000-strong Jewish community were ferried to neutral Sweden hidden in fishing boats. A small number of Jews, some 450 in all, were captured by the Germans and shipped to Theresienstadt. Danish officials were able to ensure that these prisoners were not shipped on to extermination camps, and Danish Red Cross inspections and food packages kept attention focused on the Danish Jews. Swedish Count Folke Bernadotte ensured their release and transport to Denmark in the final days of the war.
Netherlands
Relative to its 1940 population of 9 million, the Netherlands has the largest per capita number of recognized rescuers: 5,516 people, roughly 1 in 1,700 Dutch citizens, have been awarded the Righteous Among the Nations medal. Notable rescuers include:
Willem Arondeus, Dutch artist and resistance fighter who helped forge documents allowing Jewish families to flee the country
Gertruida Wijsmuller-Meijer, who helped save about 10,000 Jewish children from Germany and Austria just before the outbreak of the war (Kindertransport) and on the last transport ship leaving the Netherlands to the UK in May 1940.
Jan Zwartendijk, who as a Dutch consular representative in Kaunas, Lithuania, issued exit visas used by between 6,000 and 10,000 Jewish refugees.
Those who hid and helped Anne Frank and her family, like Miep Gies.
Caecilia Loots, a teacher and antifascist resistance member, who saved Jewish children during the war.
Marion van Binsbergen helped save approximately 150 Dutch Jews, most of them children, throughout the German occupation of the Netherlands.
Tina Strobos, rescued over 100 Jews by hiding them in her house and providing them with forged paperwork to escape the country.
Jan van Hulst (18 December 1903 – 1 August 1975), instrumental in preventing Jews from being deported and murdered during the Holocaust.
The participants of the so-called "Amsterdam dock strike" (better known as the February strike, about 300,000 to 500,000 people who on 25 and 26 February 1941 took part in the first strike against persecution of the Jews in Nazi-occupied Europe).
The village of Nieuwlande (117 inhabitants) that set up a quota for residents to rescue Jews.
Serbia
After the Invasion of Yugoslavia, the country was occupied by Germany and some regions were occupied by Italy, Hungary, Bulgaria and Albania. A joint German-Italian puppet state called Independent State of Croatia was installed. After a bombing campaign on major Serbian cities, a German puppet regime Nedić’s Serbia led by Milan Nedić was installed. In collaboration with the German Army, Serbian Chetnik collaborators along with the Serbian Volunteer Corps as well as the Serbian State Guard assisted in the persecution of Jews in Serbia proper, in Hungarian-occupied Vojvodina region, and in the territory held by the Croatian Ustashas. Serbian Jews who were not transported to concentration camps in Germany were either murdered in Nazi concentration camps within Serbia (Sajmište and Banjica), Banjica being jointly controlled by Nedic's Government and the German Army, or transported to Ustasha-controlled concentration camp Jasenovac and murdered there. Jews living in Hungarian-occupied regions faced mass executions, the most notorious being the Novi Sad raid in 1942.
Serbian civilians were involved in saving thousands of Yugoslavian Jews during this period. Miriam Steiner-Aviezer, a researcher into Yugoslavian Jewry and a member of Yad Vashem's Righteous Gentiles committee states: "The Serbs saved many Jews. Contrary to their present image in the world, the Serbs are a friendly, loyal people who will not abandon their neighbors." As of 2017 Yad Vashem recognizes 135 Serbians as Righteous Among Nations, the highest of any Balkan country.
Bulgaria
Bulgaria joined the Axis powers in March 1941 and took part in the invasion of Yugoslavia and Greece. The Nazi-allied government of Bulgaria, led by Bogdan Filov, fully and actively assisted in the Holocaust in occupied areas. On Passover 1943, Bulgaria rounded up the great majority of Jews in Greece and Yugoslavia, transported them through Bulgaria, and handed them off to German transport to Treblinka, where almost all were murdered. The Nazi-allied government of Bulgaria deported a higher percentage of Jews (from the areas of Greece and the Republic of Macedonia) than did the German occupiers in the region. In Bulgarian-occupied Greece, the Bulgarian authorities arrested the majority of the Jewish population on Passover 1943. The territories of Greece, Macedonia and other nations occupied by Bulgaria during World War II were not considered Bulgarian—they were only administered by Bulgaria, but Bulgaria had no say as to the affairs of these lands.
The active participation of Bulgaria in the Holocaust did not, however, extend to its pre-war territory, and after various protests by Archbishop Stefan of Sofia and the intervention of Dimitar Peshev, the planned deportation of the Bulgarian Jews (about 50,000) was stopped and deportation to the concentration camps was averted. Bulgaria was officially thanked by the government of Israel despite having been an ally of Nazi Germany.
Dimitar Peshev was the Deputy Speaker of the National Assembly of Bulgaria and Minister of Justice during World War II. He rebelled against the pro-Nazi cabinet and prevented the deportation of Bulgaria's 48,000 Jews. He was aided by the strong opposition of the Bulgarian Orthodox Church. Although Peshev had been involved in various anti-Semitic legislation that was passed in Bulgaria during the early years of the War, the government's decision to deport Bulgaria's 48,000 Jews on 8 March 1943 was too much for Peshev. After being informed of the deportation, Peshev tried several times to see Prime Minister Bogdan Filov, but the prime minister refused. Next, he went to see Interior Minister Petar Gabrovski, insisting that he cancel the deportations. After much persuasion, Gabrovski finally called the governor of Kyustendil and instructed him to stop preparations for the Jewish deportations. By 5:30 p.m. on 9 March, the order was cancelled. After the war, Peshev was charged with anti-Semitism and anti-Communism by the Soviet-backed courts and sentenced to death. However, after an outcry from the Jewish community, his sentence was commuted to 15 years' imprisonment, and he was released after just one year. His deeds went unrecognized after the war, as he lived in poverty in Bulgaria. It was not until 1973 that he was awarded the title of Righteous Among the Nations. He died the same year.
Portugal
Historians have estimated that up to one million refugees fled from the Nazis through Portugal during World War II, an impressive number considering the size of the country's population at that time (circa 6 million). Portugal remained neutral within the overall objectives of the Anglo-Portuguese Alliance; and that astute policy under precarious conditions, made it possible for Portugal to contribute to the rescue of a large number of refugees. Portuguese Prime Minister António de Oliveira Salazar allowed all international Jewish organizations—HIAS, HICEM, the American Jewish Joint Distribution Committee, World Jewish Congress, and Portuguese Jewish relief committees—to establish themselves in Lisbon. In 1944, in Hungary, risking their lives, the diplomats Carlos Sampaio Garrido and Carlos de Liz-Texeira Branquinho, coordinating with Salazar, also helped many Jews escape Nazis and their Hungarian allies. In June 1940, when Germany invaded France, Portuguese consul in Bordeaux, Aristides de Sousa Mendes issued visas, indiscriminately, to a population in panic, without asking previous authorizations from Lisbon, as he was supposed to. On 20 June, the British Embassy in Lisbon accused the Consul in Bordeaux of improperly charging money for issuing visas and Sousa Mendes was called to Lisbon. The number of visas issued by Sousa Mendes cannot be determined; a 1999 study by the Yad Vashem historian Dr. Avraham Milgram published by the Shoah Resource Center, International School for Holocaust Studies, asserts that there is a great difference between reality and the myth created by the generally cited numbers. Sousa Mendes never lost his title as he kept on being listed in the Portuguese Diplomatic Yearbook until 1954 and kept on receiving his full Consul salary, $1,593 Portuguese Escudos, until the day he died. Other Portuguese credited for saving Jews during the war are Professor Francisco Paula Leite Pinto and Moisés Bensabat Amzalak. A devoted Jew, and a Salazar supporter, Amzalak headed the Lisbon Jewish community for more than fifty years (from 1926 until 1978). Leite Pinto, General Manager of the Portuguese railways, together with Amzalak, organized several trains, coming from Berlin and other cities, loaded with refugees.
Spain
In Franco's Spain, several diplomats contributed very actively to rescue Jews during the Holocaust. The two most prominent ones were Ángel Sanz Briz (the Angel of Budapest), who saved around five thousand Hungarian Jews by providing them Spanish passports, and Eduardo Propper de Callejón, who helped thousands of Jews to escape from France to Spain. Other diplomats with a relevant role were Bernardo Rolland de Miota (consul of Spain at Paris), José Rojas Moreno (ambassador at Bucharest), Miguel Ángel de Muguiro (diplomat at the embassy in Budapest), Sebastián Romero Radigales (consul at Athens), Julio Palencia Tubau, (diplomat at the embassy in Sofía), Juan Schwartz Díaz-Flores (consul at Vienna) and José Ruiz Santaella (diplomat at the embassy in Berlin).
Lithuania
According to the data available at Yad Vashem, by 1 January 2019, 904 rescuers of Jews in Lithuania were identified, whereas in the catalogue compiled by the Vilna Gaon State Jewish Museum, 2300 Lithuanians who rescued Jews are indicated, among them 159 members of clergy.
Following the occupation of Poland by Nazi Germany and the Soviet Union in September 1939, the Republic of Lithuania accepted and accommodated large numbers of Polish and Jewish refugees, as well as soldiers of the defeated Polish army. Some of these refugees were later saved from the Soviets (and eventually from the Nazis) by the Japanese consul-general Chiune Sugihara and by Jan Zwartendijk, director of the Philips plants in Lithuania and part-time acting consul of the Netherlands, after the occupation of Lithuania by the Soviet Union on June 15, 1940.
Chiune Sempo Sugihara, Japanese Consul-General in Kaunas, Lithuania, 1939–1940, issued thousands of visas to Jews fleeing Kaunas after occupation of Lithuania by the Soviet Union in defiance of explicit orders from the Japanese foreign ministry. The last foreign diplomat to leave Kaunas, Sugihara continued stamping visas from the open window of his departing train. After the war, Sugihara was fired from the Japanese foreign service, ostensibly due to downsizing.
As in other countries, rescuers in Lithuania came from different layers of society. The most iconic figures are the librarian Ona Šimaitė, doctor Petras Baublys, writer Kazys Binkis and his wife, journalist Sofija Binkienė, musician Vladas Varčikas, writer and translator Danutė Zubovienė (Čiurlionytė) and her husband Vladimiras Zubovas, doctor Elena Kutorgienė, aviator Vladas Drupas, doctor Pranas Mažylis, Catholic priest Juozapas Stakauskas, teacher Vladas Žemaitis, Catholic nun Maria Mikulska and others. In the village of Šarnelė (Plungė district), the Straupiai family (Jonas and Bronislava Straupiai together with their neighbours Adolfina and Juozas Karpauskai) saved 26 people (9 families).
Citizens of Lithuania and of foreign countries who rescued people on the territory of Lithuania, as well as citizens of Lithuania who did so abroad, are awarded the Life Saving Cross. The President of Lithuania honors rescuers of Jews every year on the occasion of the National Memorial Day for the Genocide of Lithuanian Jews, which is marked on September 23 to commemorate the liquidation of the Vilna Ghetto on that day in 1943.
Albania
Unlike many other Eastern European countries under Nazi occupation, Albania—which has a mixed Muslim and Christian population and a tradition of tolerance—became a safe haven for Jews. At the end of 1938, Albania was the only remaining country in Europe that still issued visas to Jews through its embassy in Berlin. Following the Nazi occupation of Albania, the country refused to hand over its small Jewish population to the Germans, sometimes even providing Jewish families with forged documents. During the war, about 2,000 Jews sought refuge in Albania, and many of them took shelter in rural parts of the country where they were protected by the local population. At the end of the war, Albania's Jewish population was greater than it was prior to the war, making it the only country in Europe where the Jewish population increased during World War II. Out of two thousand Jews in total, only five Albanian Jews perished at the hands of the Nazis. They were discovered by the Germans and subsequently deported to Pristina.
Between February and March in 1939, King Zog I of Albania granted asylum to 300 Jewish refugees before being overthrown by the Italian fascists in April the same year. When the Italians requisitioned the Albanian puppet government to expel its Jewish refugees, the Albanian leaders refused, and in the following years, 400 more Jewish refugees found sanctuary in Albania.
Refik Veseli was the first Albanian to be awarded the title Righteous Among the Nations, having declared afterwards that betraying the Jews "would have disgraced his village and his family. At minimum his home would be destroyed and his family banished". On 21 July 1992, Mihal Lekatari, an Albanian partisan from Kavajë, was recognized as Righteous Among the Nations. Lekatari is noted for stealing blank identity papers from the municipality of Harizaj and distributing identity papers with Muslim names on them to Jewish refugees. In 1997, Albanian Shyqyri Myrto was honored for rescuing Jews, with the Anti-Defamation League's Courage to Care Award presented to his son, Arian Myrto. In 2006, a plaque honoring the compassion and courage of Albania during the Holocaust was dedicated in The Holocaust Memorial Park in Sheepshead Bay in Brooklyn, New York, with the Albanian ambassador to the United Nations in attendance.
During the war, some parts of Kosovo and Macedonia which were occupied by the Axis powers were annexed to Albania, and an estimated 600 Jews were captured in these territories, and consequently killed.
Finland
The government of Finland generally refused to deport Finnish Jews to Germany. It has been said that Finnish government officials told German envoys that "Finland has no Jewish Problem". However, the Secret Police ValPo deported 8 Jews in 1942 who were refugees seeking asylum in Finland. Moreover, it seems highly likely that Finland deported Soviet POWs, among them a number of Jews. The majority of Finnish Jews, however, were protected by the government's co-belligerence with Germany. Their men joined the Finnish army and fought on the front.
The most notable Finnish individual involved in aiding the Jews was Algoth Niska (1888–1954). Niska was a smuggler during the Finnish prohibition but had run into financial troubles after its end in 1932, so when Albert Amtmann, an Austrian-Jewish acquaintance, expressed his concerns over his people's position in Europe, Niska quickly saw a business opportunity in smuggling Jews out of Germany. The modus operandi was quickly established: Niska would forge Finnish passports and Amtmann would acquire the customers, who with their new passports would be able to cross the border out of Germany. All in all, Niska falsified passports for 48 Jews during 1938 and earned 2.5 million Finnish marks ($890,000 or £600,000 in today's money) selling them. Only three of the Jews are known to have survived the Holocaust, while twenty were certainly caught. The fates of the other twenty-five are not known. Involved in the operation with Niska and Amtmann were Major Rafael Johannes Kajander, Axel Belewicz and Belewicz's girlfriend Kerttu Ollikainen, whose job was to steal the forms on which the passports were forged.
Italy
Despite Benito Mussolini's close alliance with Hitler, Italy did not adopt Nazism's genocidal ideology towards the Jews. The Nazis were frustrated by the Italian forces' refusal to co-operate in the roundups of Jews, and no Jews were deported from Italy prior to the Nazi occupation of the country following the Italian capitulation in September 1943. In Italian-occupied Croatia, the Nazi envoy Siegfried Kasche advised Berlin that Italian forces had "apparently been influenced" by Vatican opposition to German anti-Semitism. As anti-Axis feeling grew in Italy, the use of Vatican Radio to broadcast papal disapproval of race murder and anti-Semitism angered the Nazis. Mussolini was overthrown in July 1943, and the Nazis moved to occupy Italy, commencing a round-up of Jews. Although thousands were caught, the great majority of Italy's Jews were saved. As in other nations, Catholic networks were heavily engaged in rescue efforts.
In Fiume (northern Italy, today Croatian Rijeka), Giovanni Palatucci, after the promulgation of racial laws against Jews in 1938 and at the beginning of war in 1940, as chief of the Foreigners' Office, forged documents and visas to Jews threatened by deportation. He managed to destroy all documented records of some 5,000 Jewish refugees living in Fiume, issuing them false papers and providing them with funds. Palatucci then sent the refugees to a large internment camp in southern Italy protected by his uncle, Giuseppe Maria Palatucci, the Catholic Bishop of Campagna. Following the 1943 capitulation of Italy, Fiume was occupied by the Nazis. Palatucci remained as head of the police administration without real powers. He continued to clandestinely help Jews and maintain contact with the Resistance, until his activities were discovered by the Gestapo. The Swiss Consul to Trieste, a close friend of his, offered him a safe pass to Switzerland, but Giovanni Palatucci sent his young Jewish fiancée instead. Palatucci was arrested on 13 September 1944. He was condemned to death, but the sentence was later commuted to deportation to Dachau, where he died.
On 19 July 1944, the Gestapo rounded up the nearly 2000 Jewish inhabitants of the island of Rhodes, which had been governed by Italy since 1912. Of the approximately 2,000 Rhodesli Jews who were deported to Auschwitz and elsewhere, only 104 survived.
Giorgio Perlasca, who posed as the consul-general of Spain under the Spanish ambassador in Budapest, was able to put under his protection thousands of Jews and non-Jews destined to concentration camps.
The cycling champion Gino Bartali had hidden a Jewish family in his cellar and, according to one of the survivors, saved their lives in doing so. He also used his fame to carry messages and documents to the Italian Resistance and fugitive Jews. Bartali cycled from Florence through Tuscany, Umbria and Marche, many times traveling as far afield as Assisi, all the while wearing the racing jersey emblazoned with his name.
Calogero Marrone was the chief of the Civil Registry office in the municipality of Varese and issued hundreds of fake identity cards in order to save Jews and anti-fascists. He was arrested after an anonymous tip-off and died in the Dachau concentration camp.
Martin Gilbert wrote that, in October 1943, with the SS occupying Rome and determined to deport the city's 5000 Jews, the Vatican clergy had opened the sanctuaries of the Vatican to all "non-Aryans" in need of rescue in an attempt to forestall the deportation. "Catholic clergy in the city acted with alacrity", wrote Gilbert. "At the Capuchin convent on the Via Siciliano, Father Benoit saved a large number of Jews by providing them with false identification papers [...] by the morning of October 16, a total of 4,238 Jews had been given sanctuary in the many monasteries and convents of Rome. A further 477 Jews had been given shelter in the Vatican and its enclaves." Gilbert credited the rapid rescue efforts of the Church with saving over four-fifths of Roman Jews.
Other Righteous Catholic rescuers in Italy included Elisabeth Hesselblad. She and two British women, Mother Riccarda Beauchamp Hambrough and Sister Katherine Flanagan have been beatified for reviving the Swedish Bridgettine Order of nuns and hiding scores of Jewish families in their convent. The churches, monasteries and convents of Assisi formed the Assisi Network and served as a safe haven for Jews. Gilbert credits the network established by Bishop Giuseppe Placido Nicolini and Abbott Rufino Niccaci of the Franciscan Monastery, with saving 300 people. Other Italian clerics honored by Yad Vashem include the theology professor Fr Giuseppe Girotti of Dominican Seminary of Turin, who saved many Jews before being arrested and sent to Dachau where he died in 1945; Fr Arrigo Beccari who protected around 100 Jewish children in his seminary and among local farmers in the village of Nonantola in Central Italy; and Don Gaetano Tantalo, a parish priest who sheltered a large Jewish family. Of Italy's 44,500 Jews, some 7,680 were murdered in the Nazi Holocaust.
Vatican City State
In the 1930s, Pope Pius XI urged Mussolini to ask Hitler to restrain the anti-Semitic actions taking place in Germany. In 1937, the Pope issued the encyclical Mit brennender Sorge ("With Burning Concern"), in which he asserted the inviolability of human rights.
Pius XII
Pope Pius XII succeeded Pius XI on the eve of war in 1939. He used diplomacy to aid the victims of the Holocaust, and directed the Church to provide discreet aid. His encyclicals such as Summi Pontificatus and Mystici corporis preached against racism—with specific reference to Jews: "there is neither Gentile nor Jew, circumcision nor uncircumcision". His 1942 Christmas radio address denounced the murder of "hundreds of thousands" of "faultless" people because of their "nationality or race". The Nazis were furious and The Reich Security Main Office, responsible for the deportation of Jews, called him the "mouthpiece of the Jewish war criminals". Pius XII intervened to attempt to block Nazi deportations of Jews in various countries.
Following the capitulation of Italy, Nazi deportations of Jews to death camps began. Pius XII protested at diplomatic levels, while several thousand Jews found refuge in Catholic networks. On 27 June 1943, Vatican Radio broadcast a papal injunction: "He who makes a distinction between Jews and other men is being unfaithful to God and is in conflict with God's commands".
When the Nazis came to Rome in search of Jews, the Pope had already, days earlier, ordered the sanctuaries of the Vatican City to be opened to all "non-Aryans" in need of refuge and, according to Martin Gilbert, by the morning of 16 October, "a total of 477 Jews had been given shelter in the Vatican and its enclaves, while another 4,238 had been given sanctuary in the many monasteries and convents of Rome. Only 1,015 of Rome's 6,730 Jews were seized that morning". Upon receiving news of the roundups on the morning of 16 October, the Pope immediately instructed Cardinal Secretary of State Maglione to make a protest to the German ambassador. After the meeting, the ambassador gave orders for a halt to the arrests. Earlier, the Pope had helped the Jews of Rome by offering gold towards the 50 kg ransom demanded by the Nazis.
Other noted rescuers assisted by Pius were Pietro Palazzini, Giovanni Ferrofino, Giovanni Palatucci, Pierre-Marie Benoit and others. When Archbishop Giovanni Montini (later Pope Paul VI) was offered an award for his rescue work by Israel, he said he had only been acting on the orders of Pius XII.
Pius' diplomatic representatives lobbied on behalf of Jews across Europe, including in Vichy France, Hungary, Romania, Bulgaria, Croatia and Slovakia, Germany itself and elsewhere. Many papal nuncios played important roles in the rescue of Jews, among them Giuseppe Burzio, the Vatican Chargé d'Affaires in Slovakia; Filippo Bernardini, Nuncio to Switzerland; and Angelo Roncalli, the Nuncio to Turkey. Angelo Rotta, the wartime Nuncio to Budapest and Andrea Cassulo, the Nuncio to Bucharest have been recognized as Righteous Among the Nations.
Pius directly protested the deportations of Slovakian Jews to the Bratislava government from 1942. He made a direct intervention in Hungary to lobby for an end to Jewish deportations in 1944, and on 4 July, the Hungarian leader, Admiral Horthy, told Berlin that deportations of Jews must cease, citing protests by the Vatican, the King of Sweden and the Red Cross. The pro-Nazi, anti-Semitic Arrow Cross Party seized power in October, and a campaign of murder of the Jews commenced. The neutral powers led a major rescue effort and Pius' representative, Angelo Rotta, took the lead in establishing an "international Ghetto", marked by the emblems of the Swiss, Swedish, Portuguese, Spanish and Vatican legations, and providing shelter for some 25,000 Jews.
In Rome, some 4,000 Italian Jews and escaped prisoners of war avoided deportation, many of them hidden in safe houses or evacuated from Italy by a resistance group organized by the Irish-born priest and Vatican official Hugh O'Flaherty. Msgr. O'Flaherty used his political connections to help secure sanctuary for dispossessed Jews. The wife of the Irish ambassador, Delia Murphy, assisted him.
Norway
During the occupation of Norway by Nazi Germany, its Jewish community was subject to persecution and deported to extermination camps. Although at least 764 Jews in Norway were killed, over 1,000 were rescued with the help of non-Jewish Norwegians who risked their lives to smuggle the refugees out, typically to Sweden. To date, 67 of these individuals have been recognized by Yad Vashem as being Righteous Among the Nations. Yad Vashem has also recognized the Norwegian resistance movement collectively.
China
Ho Feng Shan, the Chinese Consul in Vienna, began issuing visas to Jews for Shanghai, part of which was at that time still under the control of the Republic of China, for humanitarian reasons. Between 1933 and 1941, the Chinese city of Shanghai, under Japanese occupation, unconditionally accepted over 18,000 Jewish refugees escaping the Holocaust in Europe, a number greater than those taken in by Canada, New Zealand, South Africa and British India combined during World War II. After 1943, the occupying Nazi-aligned Japanese ghettoised the Jewish refugees of Shanghai into an area known as the Shanghai ghetto. Many of the Jewish refugees in Shanghai migrated to the United States and Israel after 1948 due to the Chinese Civil War (1946–1950).
Japan
The Japanese government ensured Jewish safety in China, Japan and Manchuria. Japanese Army General Hideki Tōjō received Jewish refugees in accordance with Japanese national policy and rejected German protest. Chiune Sugihara, Kiichiro Higuchi, and Fumimaro Konoe helped thousands of Jews escape the Holocaust from occupied Europe.
Bolivia
Between 1938 and 1941, around 20,000 Jews were given visas for Bolivia under an agricultural visa program. Although most moved on to the neighboring countries of Argentina, Uruguay and Chile, some stayed and created a Jewish Community in Bolivia.
The Philippines
In a notable humanitarian act, Manuel L. Quezon, the first president of the Commonwealth of the Philippines, in cooperation with United States High Commissioner Paul V. McNutt, facilitated the entry into the Philippines of Jewish refugees fleeing fascist regimes in Europe, while taking on critics who were convinced by fascist propaganda that Jewish settlement was a threat to the country. Quezon and McNutt proposed to settle 30,000 refugee families on Mindanao and 40,000–50,000 refugees on Polillo. Quezon gave, as a 10-year loan to Manila's Jewish Refugee Committee, land beside Quezon's family home in Marikina. The land would house homeless refugees in Marikina Hall, dedicated on 23 April 1940.
Leaders and diplomats
Per Anger – Swedish diplomat in Budapest who originated the idea of issuing provisional passports to Hungarian Jews to protect them from arrest and deportation to camps. Anger collaborated with Raoul Wallenberg to save the lives of thousands of Jews.
Władysław Bartoszewski – Polish Żegota activist.
Count Folke Bernadotte of Wisborg – Swedish diplomat, who negotiated the release of 27,000 people (a significant number of whom were Jews) to hospitals in Sweden.
Jacob (Jack) Benardout – British diplomat to Dominican Republic before and during World War II. Issued numerous Dominican Republic visas to Jews in Germany. Only 16 Jewish families arrived in the Dominican Republic (the other Jews dispersed to countries along the way, e.g. Britain, America) and so created the Jewish community of the Dominican Republic.
Hiram Bingham IV – American Vice Consul in Marseilles, France, 1940–1941.
José Castellanos Contreras – a Salvadorean army colonel and diplomat who, while working as El Salvador's Consul General in Geneva from 1942 to 1945, and in conjunction with George Mantello, helped save at least 13,000 Central European Jews from Nazi persecution by providing them with false papers of Salvadorean nationality.
Georg Ferdinand Duckwitz – German diplomatic attaché in Denmark. Alerted Danish politician Hans Hedtoft to the imminent German plans to deport Denmark's Jewish community, thus enabling the subsequent rescue of the Danish Jews.
Harald Edelstam – Swedish diplomat in Norway who helped to protect and smuggle hundreds of Jews and Norwegian resistance fighters to Sweden.
Gisi Fleischmann led the Bratislava Working Group, one of the most important rescue groups, in partnership with Rabbi Chaim Michael Dov Weissmandl. They successfully negotiated with the Nazis in early 1942 to stop the transports from Slovakia and a few months later, via the Europa plan, to try to stop transports from other parts of Europe. They demanded bombing of the rail lines to Auschwitz and authored/distributed the Auschwitz Report in 1944.
Frank Foley – British MI6 agent undercover as a passport officer in Berlin, saved around 10,000 people by issuing forged passports to Britain and the British Mandate of Palestine.
Rafael Leónidas Trujillo – the Dominican dictator promised to receive 100,000 Jewish refugees into the Dominican Republic in 1938 when Franklin D. Roosevelt organized an international conference in Evian to discuss the persecution of the Jews. The Dominican Republic was the only nation to accept Jewish immigrants after the conference. The DORSA (Dominican Republic Settlement Association) was formed to settle Jews on the northern coast. 5,000 visas were issued, but only 645 European Jews reached the settlement. The refugees were assigned land and cattle, and the town of Sosúa was founded. 5,000 dollars in gold from Jewish International in New York were paid for each person taken in by Trujillo. Other refugees settled in the capital Santo Domingo.
Albert Göring – German businessman (and younger brother of leading Nazi Hermann Göring) who helped Jews and dissidents survive in Germany.
Paul Grüninger – Swiss commander of police who provided falsely dated papers to over 3,000 refugees so they could escape Austria following the Anschluss.
Carlos María Gurméndez - Uruguayan ambassador to the Netherlands who sheltered German and Dutch Jews in the Uruguayan embassy and assisted with their travel to Uruguay and the United States.
Kiichiro Higuchi – Japanese lieutenant general who saved 20,000 Jewish refugees.
Wilm Hosenfeld – German officer who helped pianist Wladyslaw Szpilman, a Polish Jew, among many others.
Seishirō Itagaki – Japanese Army Minister who proposed and adopted a Japanese national policy to receive Jewish refugees.
Lyndon B. Johnson – Future President of the United States who, as a member of the United States House of Representatives in 1938, helped Austrian conductor Erich Leinsdorf gain permanent residency in the United States. Johnson later helped Jews enter the U.S. through Latin America and become workers on National Youth Administration projects in Texas.
Prince Constantin Karadja – Romanian diplomat, who saved over 51,000 Jews from deportation and extermination, as credited by Yad Vashem in 2005.
Jan Karski – Polish emissary of Armia Krajowa to Western Allies and eye-witness of the Holocaust.
Necdet Kent – Turkish Consul General at Marseille, who granted Turkish citizenship to hundreds of Jews. At one point, he entered an Auschwitz-bound train at enormous personal risk to save from deportation 70 Jews, to whom he had granted Turkish citizenship.
Fumimaro Konoe – Japanese Prime Minister who adopted a Japanese national policy to receive Jewish refugees.
Zofia Kossak-Szczucka – Polish founder of Zegota.
Hillel Kook (aka Peter Bergson) established a US-based rescue group, which had considerable support in the Congress and Senate. The group's activism was the major factor forcing President Roosevelt to establish the War Refugee Board in January 1944. One of the WRB's important actions was initiation and sponsoring of the Wallenberg mission to Budapest.
Carl Lutz – Swiss consul in Budapest, protected tens of thousands of Jews in Hungary.
Luis Martins de Souza Dantas – Brazilian in charge of the Brazilian diplomatic mission in France. He granted Brazilian visas to several Jews and other minorities persecuted by the Nazis. He was proclaimed as Righteous among the Nations in 2003.
George Mantello (b. Mandl Gyorgy) – El Salvador's honorary consul for Hungary, Romania, and Czechoslovakia – provided Salvadoran protection papers for thousands of Jews. He spearheaded an unprecedented Swiss grassroots protest and press campaign. It led to Roosevelt, Churchill and other world leaders threatening Hungary's ruler, regent Miklos Horthy, with post-war retribution if the transports did not stop. That ended the deportation of Jews from Hungary to Auschwitz.
Boris III of Bulgaria – King of Bulgaria from 1918 to 1943. He resisted demands from Hitler to deport the Jews, resulting in all 50,000 being spared; Boris died in 1943 shortly after meeting with Hitler.
Paul V. McNutt – United States High Commissioner of the Philippines, 1937–1939, who facilitated the entry of Jewish refugees into the Philippines.
Helmuth James Graf von Moltke – adviser to Nazi Germany on international law; active in Kreisau Circle resistance group, sent Jews to safe-haven countries.
Delia Murphy – wife of Dr. Thomas J. Kiernan, Irish minister in Rome 1941–1946, who worked with Hugh O'Flaherty and was part of the network that saved the lives of POWs and Jews in the hands of the Gestapo.
Jean-Marie Musy – toward the end of the war, negotiated with Himmler on behalf of Recha Sternbuch to rescue large numbers of Jews from the concentration camps.
Giovanni Palatucci – Italian police official who saved several thousand.
Giorgio Perlasca – Italian. When Ángel Sanz Briz was ordered to leave Hungary, he falsely claimed to be his substitute and saved some thousands more Jews.
Dimitar Peshev – Deputy Speaker of the Bulgarian Parliament, played a major role in rescuing Bulgaria's 48,000 Jews, the entire Jewish population in Bulgaria at the time.
Frits Philips – Dutch industrialist who saved 382 Jews by insisting to the Nazis that they were indispensable employees of Philips.
Witold Pilecki – the only person who volunteered to be imprisoned in Auschwitz, organized a resistance inside the camp and as a member of Armia Krajowa sent the first reports on the camp atrocities to the Polish Government in Exile, from where they were passed to the rest of the Western Allies.
Karl Plagge – a major in the Wehrmacht Heer who issued work permits in order to save almost 1,000 Jews (see The Search for Major Plagge: The Nazi Who Saved Jews, by Michael Good)
Enver Hoxha – led the resistance against the Germans and Italians in Albania. Hoxha refused to allow the Germans or collaborationists to deport a single Jew; as a result, Albania was the only country in Europe to have a larger Jewish population after the war than before it.
Mehmet Shehu – a resistance fighter in Albania who allowed Jews to enter Albania and refused to hand the Jews over to the Germans during the occupation.
Eduardo Propper de Callejón – First Secretary in the Spanish embassy in Paris who stamped and signed passports almost non-stop for four days in 1940 to let Jewish refugees escape to Spain and Portugal.
Traian Popovici – Romanian mayor of Cernăuţi (Chernivtsi) who saved 20,000 Jews of Bukovina.
Manuel L. Quezon – President of the Commonwealth of the Philippines, 1935–1941, assisted in resettling Jewish refugees on the island of Mindanao.
Florencio Rivas – Consul General of Uruguay in Germany, who allegedly hid one hundred and fifty Jews during Kristallnacht and later provided them with passports.
Gilberto Bosques Saldívar – General Consul of Mexico in Marseilles, France. For two years, he issued Mexican visas to around 40,000 Jews, Spaniards and political refugees, allowing them to escape to Mexico and other countries. He was imprisoned by the Nazis in 1943 and released to Mexico in 1944.
Ángel Sanz Briz – Spanish consul in Hungary. Together with Giorgio Perlasca, he saved more than 5,000 Jews in Budapest by issuing Spanish passports to them.
Abdol-Hossein Sardari – Head of Consular affairs at the Iranian Embassy in Paris. He saved many Iranian Jews and gave 500 blank Iranian passports to an acquaintance of his, to be used by non-Iranian Jews in France.
Oskar Schindler – German businessman whose efforts to save his 1,200 Jewish workers were recounted in the book Schindler's Ark and the film Schindler's List.
Rabbi Solomon Schonfeld – set up a UK-based rescue committee and rescued many thousands of Jews.
Eduard Schulte – German industrialist, the first to inform the Allies about the mass extermination of Jews.
Irena Sendler – Polish head of Zegota children's department who saved 2,500 Jewish children.
Ho Feng Shan – Chinese Consul in Vienna who freely issued visas to Jews.
Henryk Slawik – Polish diplomat who saved 5,000–10,000 people in Budapest, Hungary.
Aristides de Sousa Mendes – Portuguese diplomat in Bordeaux, who signed about 30,000 visas to help Jews and persecuted minorities to escape the Nazis and The Holocaust.
Recha Sternbuch rescued large numbers of Jews with the help of her husband Yitzchak by smuggling them into Switzerland from Austria, by distributing protection papers, by negotiating with Himmler with the help of Jean-Marie Musy to save Jews in the concentration camps as the Germans were retreating, and by rescuing the Jews who arrived at Bergen-Belsen by train from Hungary.
Chiune Sugihara – Japanese consul to Lithuania, 2,140 (mostly Polish) Jews and an unknown number of additional family members were saved by passports, many unauthorized, provided by him in 1940.
Hideki Tōjō – General and Prime Minister of Japan who received Jewish refugees in Manchuria and rejected German protest.
Selâhattin Ülkümen – Turkish diplomat who saved the lives of some 42 Jewish Turkish families, more than 200 persons, among a Jewish community of some 2000 after the Germans occupied the island of Rhodes in 1944.
Raoul Wallenberg – Swedish diplomat. Wallenberg saved the lives of tens of thousands of Jews condemned to certain death by the Nazis during World War II. In January 1945, Wallenberg was imprisoned at the headquarters of Rodion Malinovsky in Debrecen and disappeared. He is believed to have been poisoned in the Lubyanka Building by the NKVD torturer Grigory Mairanovsky.
Sir Nicholas Winton – British stockbroker who organized the Czech Kindertransport which sent 669 children (most of them Jewish) to foster parents in England and Sweden from Czechoslovakia and Austria after Kristallnacht. Sir Nicholas was nominated for the 2008 Nobel Peace Prize.
Namik Kemal Yolga – A Vice-Consul at the Turkish Embassy in Paris who saved numerous Turkish Jews from deportation.
Guelfo Zamboni – Consul General at Thessaloniki who gave false papers to save the lives of over 300 Jews residing there.
Raymond Geist – Consul General at the American embassy in Berlin. While posted in Berlin from 1929 to 1939, he personally intervened with Nazi officials to save people – German Jews as well as opponents of the Nazi regime – who were under threat of imprisonment in concentration camps, and he issued more than 50,000 visas to save their lives. According to the TV series Genius, he was the one who issued visas to Albert Einstein and his family even though he was under orders from J. Edgar Hoover, then Director of the FBI, not to grant the visas until Einstein signed a declaration confirming that he was not a member of the Communist Party. He was awarded the Order of Merit by the German Federal Republic in 1954.
Religious figures
Catholic officials
Pope Pius XII – preached against racism in encyclicals such as Summi Pontificatus, used Vatican Radio to denounce race murders and anti-Semitism, directly lobbied Axis officials to stop Jewish deportations, and opened the sanctuaries of the Vatican to Rome's Jews during the Nazi roundup.
Monsignor Hugh O'Flaherty CBE – Irish Catholic priest who saved more than 6,500 Allied soldiers and Jews; known as the "Scarlet Pimpernel of the Vatican". Retold in the film The Scarlet and the Black.
Filippo Bernardini, papal nuncio to Switzerland.
Giuseppe Burzio, the Vatican Chargé d'Affaires in Slovakia. Protested the anti-Semitism and totalitarianism of the Tiso regime. Burzio advised Rome of the deteriorating situation for Jews in the Nazi puppet state, sparking Vatican protests on behalf of Jews.
Angelo Roncalli, the nuncio to Turkey, saved a number of Croatian, Bulgarian and Hungarian Jews by assisting their migration to Palestine. Roncalli succeeded Pius XII as Pope John XXIII, and always said that he had been acting on the orders of Pius XII in his actions to rescue Jews.
Andrea Cassulo, papal nuncio in Romania. Appealed directly to Marshal Antonescu to limit the deportations of Jews to Nazi concentration camps planned for the summer of 1942.
Cardinal Gerlier of France refused to hand over Jewish children being sheltered in Catholic homes. In September 1942, eight Jesuits were arrested for sheltering hundreds of children on Jesuit properties, and Pius XII's Secretary of State, Cardinal Maglione, protested to the Vichy Ambassador.
Giuseppe Marcone, apostolic visitor to Croatia, lobbied the Croat regime and saved 1,000 Jewish partners in mixed marriages.
Archbishop Aloysius Stepinac of Zagreb, condemned Croat atrocities against both Serbs and Jews, and himself saved a group of Jews. He declared publicly in the spring of 1942 that it was "forbidden to exterminate Gypsies and Jews because they are said to belong to an inferior race".
Bishop Pavel Gojdič protested the persecution of Slovak Jews. Gojdic was beatified by the Church and recognized as Righteous Among the Nations by Yad Vashem.
Angelo Rotta, papal nuncio to Hungary. Actively protested Hungary's mistreatment of the Jews, and helped persuade Pope Pius XII to lobby the Hungarian leader Admiral Horthy to stop their deportation. He issued protective passports for Jews and 15,000 safe conduct passes – the nunciature sheltered some 3000 Jews in safe houses. An "International Ghetto" was established, including more than 40 safe houses marked by the Vatican and other national emblems. 25,000 Jews found refuge in these safe houses. Elsewhere in the city, Catholic institutions hid several thousand more Jewish people.
Archbishop Johannes de Jong, later Cardinal, of Utrecht, Netherlands, who drew up together with Titus Brandsma O.Carm. († Dachau, 1942) a letter in which he called for all Catholics to assist persecuted Jews, and in which he openly condemned the Nazi German "deportation of our Jewish fellow citizens" (From: Herderlijk Schrijven, read from all pulpits on Sunday 26 January 1942).
Archbishop Jules-Géraud Saliège of Toulouse – led a number of French bishops (including Monseigneur Théas, Bishop of Montauban; Monseigneur Delay, Bishop of Marseilles; Cardinal Gerlier, Archbishop of Lyon; Monseigneur Vansteenberghe of Bayonne; and Monseigneur Moussaron, Archbishop of Albi) in denouncing roundups and mistreatment of Jews in France, spurring greater resistance.
Père Marie-Benoît, Capuchin priest who saved many Jews in Marseille and later in Rome where he became known among the Jewish community as "father of the Jews".
Mother Matylda Getter's Franciscan Sisters of the Family of Mary sheltered Jewish children escaping the Warsaw Ghetto. Getter's convent rescued more than 750.
Alfred Delp S.J., a Jesuit priest who helped Jews escape to Switzerland while rector of St. Georg Church in suburban Munich; also involved with the Kreisau Circle. Executed 2 February 1945 in Berlin.
Rufino Niccacci, a Franciscan friar and priest who sheltered Jewish refugees in Assisi, Italy, from September 1943 through June 1944.
Maximilian Kolbe – Polish Conventual Franciscan friar. During the Second World War, in the friary, Kolbe provided shelter to people from Greater Poland, including 2,000 Jews. He was also active as a radio amateur, vilifying Nazi activities through his reports.
Bernhard Lichtenberg – German Catholic priest at Berlin's Cathedral. Sent to Dachau because he prayed for Jews at Evening Prayer.
Sára Salkaházi – a Hungarian Roman Catholic nun who sheltered approximately 100 Jews in Budapest.
Margit Slachta, of the Hungarian Social Service Sisterhood, went to Rome to encourage papal action against the Jewish persecutions. In Hungary, she had sheltered the persecuted and protested forced labour and antisemitism. In 1944, Pius appealed directly to the Hungarian government to halt the deportation of the Jews of Hungary. The Sisters of Social Service, whose nuns saved thousands of Hungarian Jews, included Sister Sára Salkaházi, who was recognized by Yad Vashem and beatified.
Others
Archbishop Damaskinos – Archbishop of Athens during the German occupation. He formally protested the deportation of Jews and quietly ordered churches under his jurisdiction to issue fake Christian baptismal certificates to Jews fleeing the Nazis. Thousands of Greek Jews in and around Athens were thus able to claim that they were Christian and were thus saved.
Archbishop Stefan of Sofia – Bishop of Sofia and Exarch of Bulgaria, actively supported Dimitar Peshev's pressure against the Bulgarian government to cancel the deportation of the 48,000 Bulgarian Jews.
Bishop George Bell - Bishop of Chichester, England and friend of Dietrich Bonhoeffer. In 1936 Bell received the chair of the International Christian Committee for German Refugees, and in that role he especially supported Jewish Christians, who at that time were supported by neither Jewish nor Christian organizations. He provided a temporary home for exiled Jewish children in his own official residence.
Dietrich Bonhoeffer – a German Lutheran pastor who joined the Abwehr (a German military intelligence organization) which was also the center of the anti-Hitler resistance, and was involved in operations to help German Jews escape to Switzerland. Arrested by the Nazis, he was hanged on 5 April 1945, not long before the war ended.
Metropolitan Bishop Chrysostomos of Zakynthos, who, when ordered by the Axis occupying forces to submit a list of all Jews on the island, submitted a document bearing just two names: his own and the mayor's. Consequently, all 275 Zante Jews were saved.
Omelyan Kovch – Ukrainian Greek Catholic priest who was deported to Majdanek for helping thousands of Jews. He was canonized by Pope John Paul II
Dimitar Peshev was the Deputy Speaker of the National Assembly of Bulgaria and Minister of Justice (1935–1936), before World War II. He rebelled against the pro-Nazi cabinet and prevented the deportation of Bulgaria's 48,000 Jews, and was bestowed the title of "Righteous Among the Nations".
Leopold Socha was a Polish sewage inspector in the city of Lwów (now Lviv, Ukraine). During the Holocaust, Socha used his knowledge of the city's sewage system to shelter a group of Jews from Nazi Germans and their supporters of different nationalities. In 1978, he was recognized by the State of Israel as Righteous Among the Nations.
Andrey Sheptytsky – Metropolitan Archbishop of the Ukrainian Greek Catholic Church, harbored hundreds of Jews in his residence and in Greek Catholic monasteries. He also issued the pastoral letter, "Thou Shalt Not Kill", to protest Nazi atrocities.
André and Magda Trocmé – A French Reformed pastor and his wife who led the Le Chambon-sur-Lignon village movement that saved 3,000–5,000 Jews.
Maria Skobtsova – Russian Orthodox nun who ran a shelter for alcoholics, drug addicts and homeless people; the shelter was also open to refugees who had fled from the Soviet Union. During the first three years of the war she also took in several hundred Jewish people fearing persecution. She died in Ravensbrück concentration camp near the end of the war, after almost two years in the camp. Canonized by the Eastern Orthodox Church as a saint, she is also named a Righteous Among the Nations by Yad Vashem.
Quakers
The Religious Society of Friends, known as Quakers, from 1933 played a major role in assisting and saving Jews through their international network of centres (Berlin, Paris, Vienna) and organizations. In 1947, the Nobel Peace Prize was awarded to the Friends Service Council and to the American Friends Service Committee. Individual Friends also did rescue work.
Bertha Bracey – As secretary of the Germany Emergency Commission, set up 7 April 1933, in Britain, she raised awareness for the dangers of the Nazi philosophy. With voluntary workers, she handled appeals for assistance from Germany, Austria and Czechoslovakia and contributed substantially to the Kindertransport which brought 10,000 children to England.
Elisabeth Abegg – On 23 May 1967, Yad Vashem recognized German Quaker Elisabeth Abegg as Righteous Among the Nations. She helped many Jewish people by offering them accommodation in her home or directing them to hiding places elsewhere.
Kees Boeke and Betty Boeke-Cadbury – On 4 July 1991, Yad Vashem recognized Cornelis Boeke and his wife Beatrice Boeke-Cadbury as Righteous Among the Nations for hiding Jewish children in Bilthoven.
Laura van den Hoek Ostende – On 29 September 1994, Yad Vashem recognized Dutch Quaker Laura van den Hoek Ostende-van Honk as Righteous Among the Nations for hiding Jews in Putten, Hilversum and Amsterdam.
Mary Elmes – On 23 January 2013, Yad Vashem recognized Irish Quaker Mary Elisabeth Elmes as Righteous Among the Nations for rescuing Jewish children in France.
Auguste Fuchs-Bucholz and Fritz Fuchs – On 11 August 2009, Yad Vashem recognized German Quakers Auguste Fuchs-Bucholz and Fritz Fuchs as Righteous Among the Nations.
Carl Hermann and Eva Hermann-Lueddecke – On 19 January 1976, Yad Vashem recognized German Quakers Carl Hermann and Eva Hermann-Lueddecke as Righteous Among the Nations.
Gilbert Lesage – On 14 January 1985, Yad Vashem recognized French Quaker Gilbert Lesage as Righteous Among the Nations.
Gertrud Luckner – On 15 February 1966, Yad Vashem recognized German Quaker Gertrud Luckner as Righteous Among the Nations.
Ernst Lusebrink and Elfriede Lusebrink-Bokenkruger – On 11 August 2009, Yad Vashem recognized German Quakers Ernst Lusebrink and Elfriede Lusebrink-Bokenkruger as Righteous Among the Nations.
Geertruida Pel and Trijntje Pfann – On 15 August 2012, Yad Vashem recognized Dutch Quaker Geertruida Pel and her daughter Trijntje Pfann as Righteous Among the Nations.
Lili Pollatz-Engelsmann and Manfred Pollatz – On 3 December 2013, Yad Vashem recognized German Quakers Lili Louise Pollatz-Engelsmann and Erwin Herbert Manfred Pollatz as Righteous Among the Nations for hiding German and Dutch Jewish children in their home in Haarlem, Netherlands. Wijnberg, I., Hollaender, A., 'Er wacht nog een kind...', De quakers Lili en Manfred Pollatz, hun school en kindertehuis in Haarlem 1934–1945, AMB Diemen, 2014.
Ilse Schwersensky-Zimmermann and Gerhard Schwersensky – On 2 May 1985, Yad Vashem recognized German Quakers Gerhard Schwersensky and Ilse Schwersensky-Zimmermann as Righteous Among the Nations for hiding Jews in Berlin.
Villages helping Jews
Yaruga, Ukraine
Le Chambon-sur-Lignon, in the Haute-Loire département in France, which saved up to 5,000 Jews.
In occupied Poland, among the hundreds of villages involved, some of the most notable included Głuchów near Łańcut, where everyone was engaged, as well as the villages of Główne, Ozorków, Borkowo near Sierpc, Dąbrowica near Ulanów, Głupianka near Otwock, and Teresin near Chełm. In Cisie near Warsaw, 25 Poles were caught hiding Jews; all were killed and the village was burned to the ground as punishment. In Gołąbki, Jerzy and Irena Krępeć provided a hiding place for as many as 30 Jews on their farm and set up homeschooling for all children, Christian and Jewish together; their actions were "an open secret in the village." Other villagers helped "if only to provide a meal." Another farm couple, Alfreda and Bolesław Pietraszek, provided shelter for Jewish families consisting of 18 people in Ceranów near Sokołów Podlaski, and their neighbors brought food to those being rescued. In Markowa, where 17 Jews survived the war in hiding with their Christian neighbors, the entire Polish family of Józef and Wiktoria Ulma, including six children and an unborn child, was shot dead by the Germans for hiding the Szall and Goldman families. Dorota and Antoni Szylar hid seven members of the Weltz family, Julia and Józef Bar hid five members of the Reisenbach family, Michal Bar hid Jakub Lorbenfeld, and Jan and Weronika Przybylak hid Jakub Einhorn.
Tršice, Czech Republic, many people from this village helped hide a Jewish family; six of them were given the honorific of Righteous Among the Nations.
Nieuwlande, Netherlands – during the war, this small village contained 117 inhabitants. Most households in the village and surrounding area cooperated to shelter Jews, thus making it difficult for anyone in the small village to betray their neighbors. Dozens of Jews were thus saved. Over 200 inhabitants have been honored by Yad Vashem.
Moissac, France – There was a Jewish boarding home and orphanage in this town. When the mayor was told that the Nazis were coming, the older students would go camping for several days, while the younger students were boarded with families in the area, who were asked to treat them as members of their immediate families; the oldest students hid in the house. When it became too dangerous for the students to stay there any longer, the residents made sure that every student had a safe place to go to. If the students had to move again, the counsellors from the boarding house arranged for a new place and even escorted them to the new housing.
The Portuguese cities of Figueira da Foz, Porto, Coimbra, Curia, Ericeira and Caldas da Rainha were assigned to house refugees. They were pleasant resorts with many available hotels. The refugees led largely ordinary lives. They were allowed to circulate freely within town limits, practice their religions, and enroll their children in local schools. "Here we were given freedom of movement; we were allowed to go on outings and live as we wished", said Ben-Zwi Kalischer. Those times were captured on films that can be found at the Steven Spielberg Film and Video Archive.
Oľšavica, Slovakia
Others
The American Jewish Joint Distribution Committee
The Jewish Labor Committee
See also
Arab rescue efforts during the Holocaust
British Hero of the Holocaust
Jewish settlement in the Japanese Empire
Rescue of Roma during the Porajmos
Rescuer (genocide)
Footnotes
Citations
Sources
Further reading
External links
The Jewish Foundation for the Righteous: Stories of Moral Courage
About the "Righteous Among the Nations" Program at Yad Vashem
Lists of people by activity
People of the Holocaust
Responses to genocide
The Holocaust-related lists | Rescue of Jews during the Holocaust | Biology | 14,328 |
2,903,320 | https://en.wikipedia.org/wiki/Theta%20Bo%C3%B6tis | Theta Boötis, Latinized from θ Boötis, is a star in the northern constellation of Boötes the herdsman, forming a corner of the upraised left hand of this asterism. It has the traditional name Asellus Primus (; Latin for "first donkey colt") and the Flamsteed designation 23 Boötis. Faintly visible to the naked eye, this star has a yellow-white hue with an apparent visual magnitude of 4.05. It is located at a distance of 47.2 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −10.6 km/s.
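The quoted distance is the direct inverse of the measured parallax. As a rough check (assuming a parallax of about 69 milliarcseconds, a value not stated in this text but consistent with the figures given here):

\[ d \approx \frac{1}{\pi[\text{arcsec}]} = \frac{1}{0.069} \approx 14.5\ \text{pc} \approx 47\ \text{ly}, \]

in line with the 47.2 light-year distance quoted for the star.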
Properties
The stellar classification of Theta Boötis is F7 V, matching an F-type main-sequence star. It is a solar-type star that may be near the end of its main-sequence lifetime, based on a high luminosity for a star of its type. Theta Boötis is a suspected variable star and a source of X-ray emission. There is evidence for low-amplitude radial velocity variation of about 5 km/s. Estimates of the star's mass run from 24% to 41% greater than the Sun's, and its radius is 40% larger. It is about 3–4 billion years old and is spinning with a projected rotational velocity of 29 km/s. The star is radiating 4.1 times the luminosity of the Sun from its photosphere at an effective temperature of 6,294 K.
There is a nearby 11th magnitude optical companion star about 70 arcseconds away. This is a class M2.5 red dwarf that is separated by a minimum of 1,000 AUs. It is uncertain whether they are gravitationally bound, but they do have a common motion through space and so the two stars probably share a common origin.
Nomenclature
θ Boötis, along with the other Aselli (ι Boo and κ Boo) and λ Boo, were Aulād al Dhiʼbah (أولاد الضّباع - awlād al-ḍibā‘), "the Whelps of the Hyenas".
In Chinese, the asterism known as the Celestial Spear consists of θ Boötis, κ2 Boötis and ι Boötis; the Chinese name for θ Boötis itself derives from its membership of this asterism.
References
External links
CCDM J14252+5151
HR 5404
Image Theta Boötis
F-type main-sequence stars
Bootis, Theta
Suspected variables
M-type main-sequence stars
Binary stars
Boötes
Bootis, Theta
BD+52 1804
Bootis, 23
0549
126660
087379
5404
Asellus Primus | Theta Boötis | Astronomy | 550 |
65,455,145 | https://en.wikipedia.org/wiki/Sachs%20subgraph | In graph theory, a Sachs subgraph of a given graph is a subgraph in which all connected components are either single edges or cycles. These subgraphs are named after Horst Sachs, who used them in an expansion of the characteristic polynomial of the adjacency matrix of graphs. A similar expansion using Sachs subgraphs is also possible for permanental polynomials of graphs. Sachs subgraphs and the polynomials calculated with their aid have been applied in chemical graph theory, for instance as part of a test for the existence of non-bonding orbitals in hydrocarbon structures.
A spanning Sachs subgraph, also called a {1,2}-factor, is a Sachs subgraph in which every vertex of the given graph is incident to an edge of the subgraph. The union of two perfect matchings is always a bipartite spanning Sachs subgraph, but in general Sachs subgraphs are not restricted to being bipartite. Some authors use the term "Sachs subgraph" to mean only spanning Sachs subgraphs.
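To make the definition concrete, here is a minimal Python sketch (an illustration only) that tests whether a graph given as an edge list is a Sachs subgraph, i.e. whether every connected component is either a single edge or a cycle:

```python
from collections import defaultdict

def is_sachs_subgraph(edges):
    """True if every connected component of the graph defined by `edges`
    is either a single edge (two vertices, one edge) or a cycle
    (a connected component in which every vertex has degree 2)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    seen = set()
    for start in list(adj):
        if start in seen:
            continue
        # Collect the connected component containing `start`.
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(adj[node] - component)
        seen |= component

        degrees = [len(adj[v]) for v in component]
        single_edge = len(component) == 2 and sum(degrees) // 2 == 1
        cycle = all(d == 2 for d in degrees)
        if not (single_edge or cycle):
            return False
    return True

# A triangle plus a disjoint edge is a Sachs subgraph; a 3-vertex path is not.
print(is_sachs_subgraph([(1, 2), (2, 3), (3, 1), (4, 5)]))  # True
print(is_sachs_subgraph([(1, 2), (2, 3)]))                  # False
```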
References
Graph theory objects | Sachs subgraph | Mathematics | 210 |
16,589,498 | https://en.wikipedia.org/wiki/Vector%20spherical%20harmonics | In mathematics, vector spherical harmonics (VSH) are an extension of the scalar spherical harmonics for use with vector fields. The components of the VSH are complex-valued functions expressed in the spherical coordinate basis vectors.
Definition
Several conventions have been used to define the VSH.
We follow that of Barrera et al. Given a scalar spherical harmonic, we define three VSH:
with being the unit vector along the radial direction in spherical coordinates and the vector along the radial direction with the same norm as the radius, i.e., . The radial factors are included to guarantee that the dimensions of the VSH are the same as those of the ordinary spherical harmonics and that the VSH do not depend on the radial spherical coordinate.
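For reference, in the Barrera et al. convention the three harmonics defined above take the standard form

\[ \mathbf{Y}_{\ell m} = Y_{\ell m}\,\hat{\mathbf{r}}, \qquad \boldsymbol{\Psi}_{\ell m} = r\,\nabla Y_{\ell m}, \qquad \boldsymbol{\Phi}_{\ell m} = \mathbf{r}\times\nabla Y_{\ell m}, \]

where \(\hat{\mathbf{r}}\) is the radial unit vector and \(\mathbf{r} = r\,\hat{\mathbf{r}}\).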
The interest of these new vector fields is to separate the radial dependence from the angular one when using spherical coordinates, so that a vector field admits a multipole expansion
The labels on the components reflect that is the radial component of the vector field, while and are transverse components (with respect to the radius vector ).
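Written out, the expansion of a vector field \(\mathbf{E}\) in this basis reads

\[ \mathbf{E} = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\left( E^{r}_{\ell m}(r)\,\mathbf{Y}_{\ell m} + E^{(1)}_{\ell m}(r)\,\boldsymbol{\Psi}_{\ell m} + E^{(2)}_{\ell m}(r)\,\boldsymbol{\Phi}_{\ell m} \right), \]

with \(E^{r}_{\ell m}\) the radial component and \(E^{(1)}_{\ell m}\), \(E^{(2)}_{\ell m}\) the transverse components.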
Main properties
Symmetry
Like the scalar spherical harmonics, the VSH satisfy
which cuts the number of independent functions roughly in half. The star indicates complex conjugation.
Orthogonality
The VSH are orthogonal in the usual three-dimensional way at each point :
They are also orthogonal in Hilbert space:
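In the (unnormalized) convention used here, these orthogonality relations take the standard form

\[ \oint \mathbf{Y}_{\ell m}\cdot\mathbf{Y}^{*}_{\ell' m'}\,d\Omega = \delta_{\ell\ell'}\delta_{mm'}, \qquad \oint \boldsymbol{\Psi}_{\ell m}\cdot\boldsymbol{\Psi}^{*}_{\ell' m'}\,d\Omega = \oint \boldsymbol{\Phi}_{\ell m}\cdot\boldsymbol{\Phi}^{*}_{\ell' m'}\,d\Omega = \ell(\ell+1)\,\delta_{\ell\ell'}\delta_{mm'}, \]

while pointwise \(\mathbf{Y}_{\ell m}\cdot\boldsymbol{\Psi}_{\ell m} = \mathbf{Y}_{\ell m}\cdot\boldsymbol{\Phi}_{\ell m} = \boldsymbol{\Psi}_{\ell m}\cdot\boldsymbol{\Phi}_{\ell m} = 0\), and all mixed angular integrals between the different families vanish.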
An additional result at a single point (not reported in Barrera et al, 1985) is, for all ,
Vector multipole moments
The orthogonality relations allow one to compute the spherical multipole moments of a vector field as
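With the above normalization, these moments follow by projection:

\[ E^{r}_{\ell m}(r) = \oint \mathbf{E}\cdot\mathbf{Y}^{*}_{\ell m}\,d\Omega, \qquad E^{(1)}_{\ell m}(r) = \frac{1}{\ell(\ell+1)}\oint \mathbf{E}\cdot\boldsymbol{\Psi}^{*}_{\ell m}\,d\Omega, \qquad E^{(2)}_{\ell m}(r) = \frac{1}{\ell(\ell+1)}\oint \mathbf{E}\cdot\boldsymbol{\Phi}^{*}_{\ell m}\,d\Omega. \]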
The gradient of a scalar field
Given the multipole expansion of a scalar field
we can express its gradient in terms of the VSH as
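For a scalar field expanded as \(\phi = \sum_{\ell m}\phi_{\ell m}(r)\,Y_{\ell m}\), the standard result in this convention is

\[ \nabla\phi = \sum_{\ell m}\left( \frac{d\phi_{\ell m}}{dr}\,\mathbf{Y}_{\ell m} + \frac{\phi_{\ell m}}{r}\,\boldsymbol{\Psi}_{\ell m} \right), \]

so the gradient has no component along \(\boldsymbol{\Phi}_{\ell m}\).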
Divergence
For any multipole field we have
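For reference, the three basic divergence identities in this convention are

\[ \nabla\cdot\bigl(f(r)\,\mathbf{Y}_{\ell m}\bigr) = \left(\frac{df}{dr} + \frac{2f}{r}\right) Y_{\ell m}, \qquad \nabla\cdot\bigl(f(r)\,\boldsymbol{\Psi}_{\ell m}\bigr) = -\frac{\ell(\ell+1)}{r}\,f\,Y_{\ell m}, \qquad \nabla\cdot\bigl(f(r)\,\boldsymbol{\Phi}_{\ell m}\bigr) = 0. \]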
By superposition we obtain the divergence of any vector field:
We see that the component on is always solenoidal.
Curl
For any multipole field we have
By superposition we obtain the curl of any vector field:
Laplacian
The action of the Laplace operator separates as follows:
where and
Also note that this action becomes symmetric, i.e. the off-diagonal coefficients are equal to , for properly normalized VSH.
Examples
First vector spherical harmonics
Expressions for negative values of m are obtained by applying the symmetry relations.
Applications
Electrodynamics
The VSH are especially useful in the study of multipole radiation fields. For instance, a magnetic multipole is due to an oscillating current with angular frequency and complex amplitude
and the corresponding electric and magnetic fields, can be written as
Substituting into Maxwell's equations, one finds that Gauss's law is automatically satisfied
while Faraday's law decouples as
Gauss' law for the magnetic field implies
and Ampère–Maxwell's equation gives
In this way, the partial differential equations have been transformed into a set of ordinary differential equations.
Alternative definition
In many applications, vector spherical harmonics are defined as a fundamental set of solutions of the vector Helmholtz equation in spherical coordinates.
In this case, the vector spherical harmonics are generated by scalar functions, which are solutions of the scalar Helmholtz equation with the given wavevector.
here the angular dependence is carried by the associated Legendre polynomials, and the radial dependence by any of the spherical Bessel functions.
Vector spherical harmonics are defined as:
longitudinal harmonics
magnetic harmonics
electric harmonics
Here we use harmonics with a real-valued angular part, but complex functions can be introduced in the same way.
Introducing appropriate shorthand notation, the vector spherical harmonics are written in component form as:
There is no radial part for the magnetic harmonics. For the electric harmonics, the radial part decreases faster than the angular part, and at large distances it can be neglected. We can also see that for the electric and magnetic harmonics the angular parts are the same up to a permutation of the polar and azimuthal unit vectors, so at large distances the electric and magnetic harmonic vectors are equal in magnitude and perpendicular to each other.
Longitudinal harmonics:
Orthogonality
The solutions of the Helmholtz vector equation obey the following orthogonality relations:
All other integrals over the angles between different functions or functions with different indices are equal to zero.
Rotation and inversion
Under rotation, vector spherical harmonics are transformed through each other in the same way as the corresponding scalar spherical functions, which are generating for a specific type of vector harmonics. For example, if the generating functions are the usual spherical harmonics, then the vector harmonics will also be transformed through the Wigner D-matrices
The behavior under rotations is the same for electrical, magnetic and longitudinal harmonics.
Under inversion, electric and longitudinal spherical harmonics behave in the same way as scalar spherical functions, i.e.
and magnetic ones have the opposite parity:
Fluid dynamics
In the calculation of Stokes' law for the drag that a viscous fluid exerts on a small spherical particle, the velocity distribution obeys the Navier–Stokes equations with inertia neglected, i.e.,
with the boundary conditions
where U is the relative velocity of the particle to the fluid far from the particle. In spherical coordinates this velocity at infinity can be written as
The last expression suggests an expansion in spherical harmonics for the liquid velocity and the pressure
Substitution in the Navier–Stokes equations produces a set of ordinary differential equations for the coefficients.
Integral relations
Here the following definitions are used:
In the case when spherical Bessel functions are used for the radial dependence, one can obtain the following integral relations with the help of the plane-wave expansion:
In the case when spherical Hankel functions are used instead, different formulae apply. For vector spherical harmonics the following relations are obtained:
where the index indicates that spherical Hankel functions are used.
See also
Spherical harmonics
Spinor spherical harmonics
Spin-weighted spherical harmonics
Electromagnetic radiation
Spherical basis
References
External links
Vector Spherical Harmonics at Eric Weisstein's Mathworld
Vector calculus
Special functions
Differential equations
Applied mathematics
Theoretical physics | Vector spherical harmonics | Physics,Mathematics | 1,199 |
2,526,862 | https://en.wikipedia.org/wiki/Isotopes%20of%20actinium | Actinium (89Ac) has no stable isotopes and no characteristic terrestrial isotopic composition, thus a standard atomic weight cannot be given. There are 34 known isotopes, from 203Ac to 236Ac, and 7 isomers. Three isotopes are found in nature, 225Ac, 227Ac and 228Ac, as intermediate decay products of, respectively, 237Np, 235U, and 232Th. 228Ac and 225Ac are extremely rare, so almost all natural actinium is 227Ac.
The most stable isotopes are 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days, and 226Ac with a half-life of 29.37 hours. All other isotopes have half-lives under 10 hours, and most under a minute. The shortest-lived known isotope is 217Ac with a half-life of 69 ns.
Purified 227Ac comes into equilibrium with its decay products (227Th and 223Fr) after 185 days.
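The 185-day figure is consistent with simple decay-chain build-up: the dominant daughter, 227Th, has a half-life of about 18.7 days (a value taken from standard decay tables, not from the list below), and its activity approaches that of the parent as 1 − exp(−λt). A minimal sketch of the estimate:

```python
import math

HALF_LIFE_TH227_DAYS = 18.7   # assumed half-life of the 227Th daughter
TARGET_FRACTION = 0.999       # treat "equilibrium" as 99.9% of the asymptotic activity

# Daughter activity builds up toward the parent activity as 1 - exp(-lambda * t),
# so the time needed to reach a given fraction of equilibrium is:
lam = math.log(2) / HALF_LIFE_TH227_DAYS
t_days = -math.log(1 - TARGET_FRACTION) / lam
print(f"~{t_days:.0f} days to reach {TARGET_FRACTION:.1%} of equilibrium")
# prints roughly 186 days, close to the ~185 days quoted above
```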
List of isotopes
|-id=Actinium-203
| 203Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 114
|
|
| α
| 199Fr
| (1/2+)
|
|-id=Actinium-204
|204Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 115
|
|
| α
| 200Fr
|
|
|-id=Actinium-205
|205Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 116
|
|
| α
| 201Fr
| 9/2−?
|
|-id=Actinium-206
| 206Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 117
| 206.01450(8)
| 25(7) ms
| α
| 202Fr
| (3+)
|
|-id=Actinium-206m1
| style="text-indent:1em" | 206m1Ac
|
| colspan="3" style="text-indent:2em" | 80(50) keV
| 15(6) ms
| α
| 202Fr
|
|
|-id=Actinium-206m2
| style="text-indent:1em" | 206m2Ac
|
| colspan="3" style="text-indent:2em" | 290(110)# keV
| 41(16) ms
| α
| 202mFr
| (10−)
|
|-id=Actinium-207
| 207Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 118
| 207.01195(6)
| 31(8) ms[27(+11−6) ms]
| α
| 203Fr
| 9/2−#
|
|-id=Actinium-208
| rowspan=2|208Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 119
| rowspan=2|208.01155(6)
| rowspan=2|97(16) ms[95(+24−16) ms]
| α (99%)
| 204Fr
| rowspan=2|(3+)
| rowspan=2|
|-
| β+ (1%)
| 208Ra
|-id=Actinium-208m
| rowspan=3 style="text-indent:1em" | 208mAc
| rowspan=3|
| rowspan=3 colspan="3" style="text-indent:2em" | 506(26) keV
| rowspan=3|28(7) ms[25(+9−5) ms]
| α (89%)
| 204Fr
| rowspan=3|(10−)
| rowspan=3|
|-
| IT (10%)
| 208Ac
|-
| β+ (1%)
| 208Ra
|-id=Actinium-209
| rowspan=2|209Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 120
| rowspan=2|209.00949(5)
| rowspan=2|92(11) ms
| α (99%)
| 205Fr
| rowspan=2|(9/2−)
| rowspan=2|
|-
| β+ (1%)
| 209Ra
|-id=Actinium-210
| rowspan=2|210Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 121
| rowspan=2|210.00944(6)
| rowspan=2|350(40) ms
| α (96%)
| 206Fr
| rowspan=2|7+#
| rowspan=2|
|-
| β+ (4%)
| 210Ra
|-id=Actinium-211
| rowspan=2|211Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 122
| rowspan=2|211.00773(8)
| rowspan=2|213(25) ms
| α (99.8%)
| 207Fr
| rowspan=2|9/2−#
| rowspan=2|
|-
| β+ (.2%)
| 211Ra
|-id=Actinium-212
| rowspan=2|212Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 123
| rowspan=2|212.00781(7)
| rowspan=2|920(50) ms
| α (97%)
| 208Fr
| rowspan=2|6+#
| rowspan=2|
|-
| β+ (3%)
| 212Ra
|-id=Actinium-213
| rowspan=2|213Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 124
| rowspan=2|213.00661(6)
| rowspan=2|731(17) ms
| α
| 209Fr
| rowspan=2|(9/2−)#
| rowspan=2|
|-
| β+ (rare)
| 213Ra
|-id=Actinium-214
| rowspan=2|214Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 125
| rowspan=2|214.006902(24)
| rowspan=2|8.2(2) s
| α (89%)
| 210Fr
| rowspan=2|(5+)#
| rowspan=2|
|-
| β+ (11%)
| 214Ra
|-id=Actinium-215
| rowspan=2|215Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 126
| rowspan=2|215.006454(23)
| rowspan=2|0.17(1) s
| α (99.91%)
| 211Fr
| rowspan=2|9/2−
| rowspan=2|
|-
| β+ (.09%)
| 215Ra
|-id=Actinium-216
| 216Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 127
| 216.008720(29)
| 440(16) μs
| α
| 212Fr
| (1−)
|
|-id=Actinium-216m1
| style="text-indent:1em" | 216m1Ac
|
| colspan="3" style="text-indent:2em" | 38(5) keV
| 441(7) μs
| α
| 212Fr
| (9−)
|
|-id=Actinium-216m2
| style="text-indent:1em" | 216m2Ac
|
| colspan="3" style="text-indent:2em" | 422#(100#) keV
| ~300 ns
| IT
| 216Ac
|
|
|-id=Actinium-217
| 217Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 128
| 217.009347(14)
| 69(4) ns
| α
| 213Fr
| 9/2−
|
|-id=Actinium-217m
| style="text-indent:1em" | 217mAc
|
| colspan="3" style="text-indent:2em" | 2012(20) keV
| 740(40) ns
|
|
| (29/2)+
|
|-id=Actinium-218
| 218Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 129
| 218.01164(5)
| 1.08(9) μs
| α
| 214Fr
| (1−)#
|
|-id=Actinium-218m
| style="text-indent:1em" | 218mAc
|
| colspan="3" style="text-indent:2em" | 607(86)# keV
| 103(11) ns
| IT
| 218Ac
| (11+)
|
|-id=Actinium-219
| 219Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 130
| 219.01242(5)
| 11.8(15) μs
| α
| 215Fr
| 9/2−
|
|-id=Actinium-220
| 220Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 131
| 220.014763(16)
| 26.36(19) ms
| α
| 216Fr
| (3−)
|
|-id=Actinium-221
| 221Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 132
| 221.01559(5)
| 52(2) ms
| α
| 217Fr
| 9/2−#
|
|-id=Actinium-222
| rowspan=2|222Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 133
| rowspan=2|222.017844(6)
| rowspan=2|5.0(5) s
| α (99(1)%)
| 218Fr
| rowspan=2|1−
| rowspan=2|
|-
| β+ (1(1)%)
| 222Ra
|-id=Actinium-222m
| rowspan=3 style="text-indent:1em" | 222mAc
| rowspan=3|
| rowspan=3 colspan="3" style="text-indent:2em" | 78(21) keV
| rowspan=3|1.05(5) min
| α (98.6%)
| 218Fr
| rowspan=3|5+#
| rowspan=3|
|-
| β+ (1.4%)
| 222Ra
|-
| IT?
| 222Ac
|-id=Actinium-223
| rowspan=3|223Ac
| rowspan=3|
| rowspan=3 style="text-align:right" | 89
| rowspan=3 style="text-align:right" | 134
| rowspan=3|223.019137(8)
| rowspan=3|2.10(5) min
| α (99%)
| 219Fr
| rowspan=3|(5/2−)
| rowspan=3|
|-
| EC (1%)
| 223Ra
|-
| CD (3.2×10−9%)
| 209Bi14C
|-id=Actinium-224
| rowspan=3|224Ac
| rowspan=3|
| rowspan=3 style="text-align:right" | 89
| rowspan=3 style="text-align:right" | 135
| rowspan=3|224.021723(4)
| rowspan=3|2.78(17) h
| β+ (90.9%)
| 224Ra
| rowspan=3|0−
| rowspan=3|
|-
| α (9.1%)
| 220Fr
|-
| β− (1.6%)
| 224Th
|-
| rowspan=2|225Ac
| rowspan=2|
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 136
| rowspan=2|225.023230(5)
| rowspan=2|10.0(1) d
| α
| 221Fr
| rowspan=2|(3/2−)
| rowspan=2|Trace
|-
| CD (6×10−10%)
| 211Bi14C
|-
| rowspan=3|226Ac
| rowspan=3|
| rowspan=3 style="text-align:right" | 89
| rowspan=3 style="text-align:right" | 137
| rowspan=3|226.026098(4)
| rowspan=3|29.37(12) h
| β− (83%)
| 226Th
| rowspan=3|(1)(−#)
| rowspan=3|
|-
| EC (17%)
| 226Ra
|-
| α (.006%)
| 222Fr
|-
| rowspan=2|227Ac
| rowspan=2|Actinium
| rowspan=2 style="text-align:right" | 89
| rowspan=2 style="text-align:right" | 138
| rowspan=2|227.0277521(26)
| rowspan=2|21.772(3) y
| β− (98.62%)
| 227Th
| rowspan=2|3/2−
| rowspan=2|Trace
|-
| α (1.38%)
| 223Fr
|-id=Actinium-228
| 228Ac
| Mesothorium 2
| style="text-align:right" | 89
| style="text-align:right" | 139
| 228.0310211(27)
| 6.13(2) h
| β−
| 228Th
| 3+
| Trace
|-id=Actinium-229
| 229Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 140
| 229.03302(4)
| 62.7(5) min
| β−
| 229Th
| (3/2+)
|
|-id=Actinium-230
| 230Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 141
| 230.03629(32)
| 122(3) s
| β−
| 230Th
| (1+)
|
|-id=Actinium-231
| 231Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 142
| 231.03856(11)
| 7.5(1) min
| β−
| 231Th
| (1/2+)
|
|-id=Actinium-232
| 232Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 143
| 232.04203(11)
| 119(5) s
| β−
| 232Th
| (1+)
|
|-id=Actinium-233
| 233Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 144
| 233.04455(32)#
| 145(10) s
| β−
| 233Th
| (1/2+)
|
|-id=Actinium-234
| 234Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 145
| 234.04842(43)#
| 44(7) s
| β−
| 234Th
|
|
|-id=Actinium-235
| 235Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 146
| 235.05123(38)#
| 60(4) s
| β−
| 235Th
| 1/2+#
|
|-id=Actinium-236
| 236Ac
|
| style="text-align:right" | 89
| style="text-align:right" | 147
| 236.05530(54)#
|
| β−
| 236Th
|
|
Actinides vs fission products
Notable isotopes
Actinium-225
Actinium-225 is a highly radioactive isotope with 136 neutrons. It is an alpha emitter and has a half-life of 9.919 days. As of 2024, it is being researched as a possible alpha source in targeted alpha therapy. Actinium-225 undergoes a series of three alpha decays – via the short-lived francium-221 and astatine-217 – to 213Bi, which itself is used as an alpha source. Another benefit is that the decay chain of 225Ac ends in the nuclide 209Bi, which has a considerably shorter biological half-life than lead. However, a major factor limiting its usage is the difficulty in producing the short-lived isotope, as it is most commonly isolated from aging parent nuclides (such as 233U); it may also be produced in cyclotrons, linear accelerators, or fast breeder reactors.
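For orientation, the chain described above can be written out explicitly (standard decay data; the 213Bi step branches, proceeding mostly by beta decay through 213Po):

225Ac →(α) 221Fr →(α) 217At →(α) 213Bi →(β−) 213Po →(α) 209Pb →(β−) 209Bi (effectively stable)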
Actinium-226
Actinium-226 is an isotope of actinium with a half-life of 29.37 hours. It mainly (83%) undergoes beta decay, sometimes (17%) undergoes electron capture, and rarely (0.006%) undergoes alpha decay. The use of 226Ac in SPECT imaging has been investigated.
Actinium-227
Actinium-227 is the most stable isotope of actinium, with a half-life of 21.772 years. It mainly (98.62%) undergoes beta decay, but occasionally (1.38%) undergoes alpha decay instead. 227Ac is a member of the actinium series. It is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac. 227Ac is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor.
226Ra + n → 227Ra, which β−-decays (half-life 42.2 min) to 227Ac
227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source with the activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay. Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction:
9Be + 4He → 12C + n + γ
The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations.
The medium half-life of 227Ac makes it a very convenient radioactive isotope in modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (of the order 50 meters per year). However, evaluation of the concentration depth-profiles for different isotopes allows estimating the mixing rates. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of the mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior.
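As a rough illustration of the scales involved (not from the source; the vertical eddy diffusivity is an assumed, typical deep-ocean value), the excess 227Ac supplied from the seabed decays as it mixes upward, and balancing diffusion against decay gives an e-folding height of about sqrt(K/λ):

```python
import math

K_M2_PER_S = 1e-4               # assumed vertical eddy diffusivity, m^2/s
HALF_LIFE_AC227_YEARS = 21.772  # from the text above
SECONDS_PER_YEAR = 3.156e7

# Decay constant of 227Ac in 1/s.
lam = math.log(2) / (HALF_LIFE_AC227_YEARS * SECONDS_PER_YEAR)

# Steady-state diffusion-decay balance K * A'' = lam * A gives A ~ exp(-z / L)
# with scale height L = sqrt(K / lam).
scale_height_m = math.sqrt(K_M2_PER_S / lam)
print(f"excess 227Ac e-folding height ~ {scale_height_m:.0f} m")
# roughly a few hundred metres for this choice of K
```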
See also
Actinium series
Actinide
Notes
References
Isotope masses from:
Half-life, spin, and isomer data selected from the following sources.
Actinium
Actinium | Isotopes of actinium | Chemistry | 4,758 |
4,833,502 | https://en.wikipedia.org/wiki/Optic%20Nerve%20%28CD-ROM%29 | Optic Nerve is an interactive CD-ROM showcasing the life and work of multimedia artist David Wojnarowicz. The disc includes film, interviews, music, performance, painting and writing from the artist. The release is the first entry in the Red Hot AIDS Benefit Series with a non-musical focus. Production was handled by the Red Hot Organization (RHO) and Funny Garbage, in conjunction with the New Museum of Contemporary Art exhibit entitled "Fever: The Art of David Wojnarowicz."
The disc also features an interactive version of ITSOFOMO — the series of public performances, featuring readings from Wojnarowicz's work, along with multiple video images which the artist either created or selected.
Optic Nerve was originally available from New York City's New Museum bookstore. At that time, four dollars received from the sale of each disc was donated to the Hetrick Martin Institute — an entity which had Wojnarowicz as a patron. The HMI is a leading professional provider of social support and programming for all at-risk youth, particularly lesbian, gay, bisexual, transgender or questioning youth in the New York metropolitan area. The CD-ROM has since become available from the Red Hot Organization.
External links
Information page on Optic Nerve at the Red Hot Organization website
Fever: The Art of David Wojnarowicz
Red Hot Organization albums
1999 albums
1990s spoken word albums
Spoken word albums by American artists
Multimedia works | Optic Nerve (CD-ROM) | Technology | 286 |
385,661 | https://en.wikipedia.org/wiki/Inverse%20scattering%20problem | In mathematics and physics, the inverse scattering problem is the problem of determining characteristics of an object, based on data of how it scatters incoming radiation or particles. It is the inverse problem to the direct scattering problem, which is to determine how radiation or particles are scattered based on the properties of the scatterer.
Soliton equations are a class of partial differential equations which can be studied and solved by a method called the inverse scattering transform, which reduces the nonlinear PDEs to a linear inverse scattering problem. The nonlinear Schrödinger equation, the Korteweg–de Vries equation and the KP equation are examples of soliton equations. In one space dimension the inverse scattering problem is equivalent to a Riemann-Hilbert problem. Inverse scattering has been applied to many problems including radiolocation, echolocation, geophysical survey, nondestructive testing, medical imaging, and quantum field theory.
Citations
References
Reprint
Scattering theory
Scattering, absorption and radiative transfer (optics)
Inverse problems | Inverse scattering problem | Chemistry,Mathematics | 207 |
58,456,427 | https://en.wikipedia.org/wiki/Aspergillus%20lucknowensis | Aspergillus lucknowensis is a species of fungus in the genus Aspergillus. It is from the Usti section. The species was first described in 1968.
References
lucknowensis
Fungi described in 1968
Fungus species | Aspergillus lucknowensis | Biology | 46 |
324,828 | https://en.wikipedia.org/wiki/Bimonster%20group | In mathematics, the bimonster is a group that is the wreath product of the monster group M with Z2:
The Bimonster is also a quotient of the Coxeter group corresponding to the Dynkin diagram Y555, a Y-shaped graph with 16 nodes:
Actually, the 3 outermost nodes are redundant. This is because the subgroup Y124 is the E8 Coxeter group. It generates the remaining node of Y125. This pattern extends all the way to Y444: it automatically generates the 3 extra nodes of Y555.
John H. Conway conjectured that a presentation of the bimonster could be given by adding a certain extra relation to the presentation defined by the Y444 diagram. More specifically, the affine E6 Coxeter group is , which can be reduced to the finite group by adding a single relation called the spider relation. Once this relation is added, and the diagram is extended to Y444, the group generated is the bimonster. This was proved in 1990 by Simon P. Norton; the proof was simplified in 1999 by A. A. Ivanov.
Other Y-groups
Many subgroups of the (bi)monster can be defined by adjoining the spider relation to smaller Coxeter diagrams, most notably the Fischer groups and the baby monster group. The groups Yij0, Yij1, Y122, Y123, and Y124 are finite even without adjoining additional relations. They are the Coxeter groups Ai+j+1, Di+j, E6, E7, and E8, respectively. Other groups, which would be infinite without the spider relation, are summarized below:
See also
Triality - simple Lie group D4, Y111
Affine E_6 Y222
References
External links
(Note: incorrectly named here as [36,6,6])
Group theory | Bimonster group | Mathematics | 395 |
64,051,431 | https://en.wikipedia.org/wiki/Isovoacangine | Isovoacangine is a naturally occurring substance that has action on heart muscles in pigs.
Chemistry
Derivatives
3-Hydroxyisovoacangine and 3-(2'-oxopropyl)isovoacangine are derivatives of isovoacangine.
Natural occurrence
It occurs naturally in many Tabernaemontana (milkwood) species such as Tabernaemontana pachysiphon and Tabernaemontana divaricata.
See also
Voacangine
Tabernanthine
Ibogaline
Vinervine
References
Indole alkaloids
Heterocyclic compounds with 5 rings
Methyl esters
Methoxy compounds | Isovoacangine | Chemistry | 136 |
78,790,313 | https://en.wikipedia.org/wiki/WY%20Velorum | WY Velorum, also known as HD 81137, is a binary system between a variable red supergiant (RSG) and a blue giant companion in the constellation of Vela. It is located approximately distant. Its apparent magnitude slowly varies over the course of years between 8.84 and 10.22. As such, it has been described as an irregular variable, though a rough 550-day period and a more uncertain 370-day period have been detected. The primary star is among the largest stars discovered to date, with an estimated radius of 1,157 (). If it replaced the Sun, its surface would reach past Jupiter's orbit (5.20 AU).
Physical properties
Early publications in 1928 and 1939 classified the star as a possible R Coronae Borealis variable. Later authors were split on whether it was a symbiotic star or a VV Cephei-type star. The two differ in that the former consists of a red giant and a white dwarf or neutron star, while the latter is usually composed of a K- or M-type RSG and a massive early B-type star. The latter was confirmed to be the case in a 1988 paper, and the companion was identified as a giant star with the spectral type B2. This study also presented the absolute magnitudes of the two stars, −4.8 for the primary and −1.7 for the secondary, although these were calculated using a distance of 1,400 pc, smaller than modern estimates. With an updated value of 1,900 pc, its KS-band absolute magnitude is estimated at −11.3. No radial velocity variations have been detected, so the binary likely has a small orbital inclination.
Spectrum
The star has a peculiar spectrum, as indicated by the "pe" suffix in the spectral type (the "p" stands for peculiar, and the "e" stands for emission lines). It displays various strong emission lines, namely of hydrogen, nitrogen, oxygen, silicon, sulfur, iron, nickel, copper, and possibly chromium, many of them forbidden lines. Among them, the strong [Fe II] (forbidden line of singly ionized iron) emission is particularly unusual and recognizable. However, in the ultraviolet region, as observed by the International Ultraviolet Explorer, it only shows the emission lines for Mg II (Mg+). In this regard, it is similar to the symbiotic star CH Cygni, except that CH Cygni also has neutral oxygen lines.
Excess infrared emission signifies the existence of circumstellar dust at a temperature of . The spectrum does not appear to be reddened.
Historical observations
The star's variability was first discovered by Annie Jump Cannon. Between 1890 and 1901, the brightness gradually increased from magnitude 9.8 to 9.2, but the star then slowly dimmed from 1902 onward, reaching magnitude 10.1 by May 1922. Additional research on the light curves, published in 1947 by Cecilia Payne-Gaposchkin, indicates that the fading that began in 1902 halted around 1916, after which the star remained almost constant until 1933, when it began to brighten again.
No discernible changes occurred in the spectrum of the star between 1944 and 1948, but in 1952, the H-α line shifted from a single line to a double line, and previously unseen faint H-β features appeared. In 1956, it was reported that the calcium H and K lines swung from absorption to emission during two consecutive nights. By 1969, the RSG had likely become fainter than it was in the 1940s.
References
Vela (constellation)
081137
M-type supergiants
B-type giants
CD-52 03010
J09215913-5233514
Binary stars
Velorum, WY | WY Velorum | Astronomy | 768 |
44,830,594 | https://en.wikipedia.org/wiki/C11H10N2O2 | {{DISPLAYTITLE:C11H10N2O2}}
The molecular formula C11H10N2O2 (molar mass: 202.21 g/mol, exact mass: 202.0742 u) may refer to:
Tolimidone
Vasicinone
Molecular formulas | C11H10N2O2 | Physics,Chemistry | 65 |
28,833,887 | https://en.wikipedia.org/wiki/Haworth%20Art%20Gallery | The Haworth Art Gallery is a public art gallery located in Accrington, Lancashire, northwest England, and is the home of the largest collection in Europe of Tiffany glass from the studio of Louis Comfort Tiffany. The museum, a Tudor-style house, was originally built in 1909 to be the home of William Haworth, a manufacturer of textiles. The house was designed by Walter Brierley (1862–1926), a York architect known as "the Yorkshire Lutyens". It was bequeathed to the people of Accrington in 1920, and stands in nine acres of parkland on the south side of Accrington Town Centre.
The Haworth's Tiffany collection is the largest outside the United States, comprising about 140 pieces that represent almost every type of Tiffany glass, including Favrile glass tiles, jewels, samples and mosaics. It was the gift of Joseph Briggs, a design apprentice who left Accrington at 17 to emigrate to the United States, where he worked for Tiffany for 40 years from about 1892. In 1933, he sent his Tiffany collection home.
The collection is on permanent public display in four themed-rooms: 'Tiffany and Interior Design', 'Tiffany and the Past', 'Tiffany and Nature', and 'The Tiffany Phenomenon'. Notable in the Gallery's Tiffany collection are over 70 vases, including a group of 'Millefiore Paperweight' and 'Intaglio' or cut-glass examples, 'flowerform' vases, vases shaped like vegetables, 'Cypriote' and 'Tel-El-Amarna' vases inspired by Roman and Egyptian examples. There are also samples relating to decorative schemes Briggs was involved with, and his 'Sulphur-crested Cockatoos' mosaic.
The museum also has a collection of mainly 19th-century oil paintings and watercolours including works by Frederic, Lord Leighton, Claude Joseph Vernet, John Frederick Herring and others.
See also
Listed buildings in Accrington
References
Notes
External links
Official website
Virtual Tour
Houses completed in 1909
Art museums and galleries established in 1921
Glass museums and galleries
Art museums and galleries in Lancashire
Museums in Lancashire
Accrington
Buildings and structures in Hyndburn
1920 establishments in England
Decorative arts museums in England
Tiffany Studios
Walter Brierley buildings | Haworth Art Gallery | Materials_science,Engineering | 458 |