| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
3,079,231 | https://en.wikipedia.org/wiki/Carothers%20equation | In step-growth polymerization, the Carothers equation (or Carothers' equation) gives the degree of polymerization, $\bar{X}_n$, for a given fractional monomer conversion, $p$.
There are several versions of this equation, proposed by Wallace Carothers, who invented nylon in 1935.
Linear polymers: two monomers in equimolar quantities
The simplest case refers to the formation of a strictly linear polymer by the reaction (usually by condensation) of two monomers in equimolar quantities. An example is the synthesis of nylon-6,6, whose formula is $[\mathrm{NH}(\mathrm{CH_2})_6\mathrm{NH{-}CO}(\mathrm{CH_2})_4\mathrm{CO}]_n$,
from one mole of hexamethylenediamine, $\mathrm{H_2N(CH_2)_6NH_2}$, and one mole of adipic acid, $\mathrm{HOOC(CH_2)_4COOH}$. For this case

$\bar{X}_n = \frac{1}{1-p}$
In this equation
$\bar{X}_n$ is the number-average value of the degree of polymerization, equal to the average number of monomer units in a polymer molecule. For the example of nylon-6,6, $\bar{X}_n = 2n$ ($n$ diamine units and $n$ diacid units).
$p$ is the extent of reaction (or conversion to polymer), defined by $p = \frac{N_0 - N}{N_0}$
$N_0$ is the number of molecules present initially as monomer
$N$ is the number of molecules present after time $t$. The total includes all degrees of polymerization: monomers, oligomers and polymers.
This equation shows that a high monomer conversion is required to achieve a high degree of polymerization. For example, a monomer conversion, $p$, of 98% is required for $\bar{X}_n = 50$, and $p = 99\%$ is required for $\bar{X}_n = 100$.
Linear polymers: one monomer in excess
If one monomer is present in stoichiometric excess, then the equation becomes

$\bar{X}_n = \frac{1+r}{1+r-2rp}$
where $r$ is the stoichiometric ratio of the two reactants; the excess reactant is conventionally placed in the denominator so that $r < 1$. If neither monomer is in excess, then $r = 1$ and the equation reduces to the equimolar case above.
The effect of the excess reactant is to reduce the degree of polymerization for a given value of $p$. In the limit of complete conversion of the limiting reagent monomer, $p \to 1$ and

$\bar{X}_n = \frac{1+r}{1-r}$
Thus for a 1% excess of one monomer, r = 0.99 and the limiting degree of polymerization is 199, compared to infinity for the equimolar case. An excess of one reactant can be used to control the degree of polymerization.
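As a numerical check of the two linear cases, the following minimal Python sketch (function names are illustrative, not from any standard library) reproduces the figures quoted above: p = 0.98 gives a degree of polymerization of 50, p = 0.99 gives 100, and a 1% stoichiometric excess (r = 0.99) caps the limiting degree of polymerization at 199 at full conversion.

```python
def dp_equimolar(p):
    """Number-average degree of polymerization for two monomers
    in equimolar quantities: X_n = 1 / (1 - p)."""
    return 1.0 / (1.0 - p)

def dp_excess(p, r):
    """Carothers equation with one monomer in excess:
    X_n = (1 + r) / (1 + r - 2*r*p), where r < 1 is the mole ratio
    of the limiting to the excess monomer."""
    return (1.0 + r) / (1.0 + r - 2.0 * r * p)

print(round(dp_equimolar(0.98)))    # 50  (98% conversion)
print(round(dp_equimolar(0.99)))    # 100 (99% conversion)
print(round(dp_excess(1.0, 0.99)))  # 199 (limit p -> 1 with a 1% excess)
```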
Branched polymers: multifunctional monomers
The functionality of a monomer molecule is the number of functional groups which participate in the polymerization. Monomers with functionality greater than two will introduce branching into a polymer, and the degree of polymerization will depend on the average functionality $f_{av}$ per monomer unit. For a system containing $N_0$ molecules initially and equivalent numbers of two functional groups A and B, the total number of functional groups is $N_0 f_{av}$.
The modified Carothers equation is

$\bar{X}_n = \frac{2}{2 - p f_{av}}$

where $p$ is given by

$p = \frac{2(N_0 - N)}{N_0 f_{av}}$
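A similar sketch (again with illustrative names) covers the multifunctional case. Setting the average functionality to 2 recovers the linear equimolar equation, and the equation predicts the degree of polymerization diverging (gelation) as p approaches 2/f_av, the Carothers gel-point criterion.

```python
def dp_branched(p, f_av):
    """Modified Carothers equation: X_n = 2 / (2 - p * f_av),
    where f_av is the average number of functional groups per monomer."""
    return 2.0 / (2.0 - p * f_av)

print(round(dp_branched(0.98, 2.0)))  # 50, same as the linear equimolar case
print(round(dp_branched(0.80, 2.4)))  # 25, nearing the gel point p = 2/2.4
```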
Related equations
Related to the Carothers equation are the following equations (for the simplest case of linear polymers formed from two monomers in equimolar quantities):

$\bar{X}_w = \frac{1+p}{1-p}$

$\bar{M}_n = M_o \frac{1}{1-p}$

$\bar{M}_w = M_o \frac{1+p}{1-p}$

$Đ = \bar{M}_w/\bar{M}_n = 1+p$

where:
$\bar{X}_w$ is the weight average degree of polymerization,
$\bar{M}_n$ is the number average molecular weight,
$\bar{M}_w$ is the weight average molecular weight,
$M_o$ is the molecular weight of the repeating monomer unit,
$Đ$ is the dispersity (formerly known as the polydispersity index, symbol PDI).
The last equation shows that the maximum value of Đ is 2, which occurs at a monomer conversion of 100% (p = 1). This is true for step-growth polymerization of linear polymers. For chain-growth polymerization or for branched polymers, Đ can be much higher.
In practice the average length of the polymer chain is limited by such things as the purity of the reactants, the absence of any side reactions (i.e. high yield), and the viscosity of the medium.
References
Polymer chemistry
Equations | Carothers equation | Chemistry,Materials_science,Mathematics,Engineering | 783 |
9,978,334 | https://en.wikipedia.org/wiki/AIDSVAX | AIDSVAX is an experimental HIV vaccine that was developed originally at Genentech in San Francisco, California, and later tested by the VaxGen company, a Genentech offshoot. The development and trials of the vaccine received significant coverage in the international media, but American trials proved inconclusive. The vaccine was then tested on a group of at-risk individuals in Thailand.
As originally formulated in 1991, AIDSVAX consisted of recombinant gp120, a glycoprotein unique to HIV's surface, from MN, a subtype B strain of the virus known at the time to infect people in the United States and Europe. The vaccine was designed to provoke the production of antibodies that would strip the gp120 protein off HIV viral particles, effectively disabling the virus so that it could not bind to or invade susceptible cells. In 1995, a second strain of HIV, A244, was discovered in another group of infected people, and a revised, bivalent version of the vaccine was produced that combined elements of both MN and A244. Phase I and Phase II tests of the first version were promising, showing excellent safety in chimpanzees and humans and provoking production of HIV MN and A244 antibodies in 99% of human volunteers.
VaxGen's leadership enthusiastically applied to the U.S. Food and Drug Administration (FDA) for permission to undertake Phase III studies in the U.S. on large numbers of at-risk volunteers. However, some of the Phase I and Phase II volunteers had become infected with HIV while taking the vaccine, showing that the vaccine was not 100% effective, and it had not been proven that the vaccine itself was not the cause of these infections. The FDA and other members of the medical community therefore hesitated and finally declined to approve Phase III testing "until more was learned about HIV immunity", even though early versions of successful vaccines have rarely been 100% effective; even the 1955 Salk polio vaccine was only 70% effective and was superseded seven years later by the Sabin vaccine. With human lives at stake, however, the FDA could not risk condoning further trials until it knew what had caused the infections.
Another problem with AIDSVAX was that it provoked an entirely humoral, antibody immune response in its subjects, unlike other HIV vaccines in development in Europe and elsewhere that were provoking balanced antibody and cellular defenses.
In response, VaxGen turned to the international community, seeking a place that would sanction clinical trials of AIDSVAX, and after negotiating with officials in AIDS-afflicted countries in Africa and Asia, settled on Thailand. Initial Phase II trials of AIDSVAX B/B alone in the US and AIDSVAX B/E alone in Thailand were unsuccessful, with both vaccines failing to either prevent or weaken HIV infection, so VaxGen instead began Thai trials of AIDSVAX B/E in combination with the Aventis-Pasteur vaccine ALVAC-HIV, which uses genetic elements of several different HIV strains encapsulated in a canarypox virus vector. AIDSVAX B/E, moreover, contained elements of subtype E, the HIV strain prevalent among Thailand's infected population, as well as subtype B, common in the US.
A report published in the December 2009 New England Journal of Medicine on this Thai clinical trial, also known as the RV144 trial, of the vaccine combination known as "ALVAC-AIDSVAX B/E" showed an efficacy ranging from 26.1% to 31.4%, which, though far from optimal, makes the combination AIDSVAX-Aventis vaccine one of the first important milestones in the world's struggle to produce a globally effective HIV vaccine. Seven participants were excluded from the analysis when they were found to have already had an HIV-1 infection at baseline; this exclusion changed the estimated efficacy from 26.1% to 31.4%.
References
HIV vaccine research | AIDSVAX | Chemistry | 819 |
5,068,993 | https://en.wikipedia.org/wiki/HD%2081101 | HD 81101 is a single star in the southern constellation of Carina. It has the Bayer designation k Carinae, while HD 81101 is the star's designation in the Henry Draper catalogue. The star has a yellow hue and is faintly visible to the naked eye with an apparent visual magnitude of 4.79. It is located at a distance of approximately 225 light years from the Sun based on parallax. This object is drifting further away with a radial velocity of +51 km/s, having made its closest approach to the Sun some 1.4 million years ago.
This is an aging giant star with a stellar classification of G6III, having exhausted the supply of hydrogen at its core then cooled and expanded away from the main sequence. It is two billion years old with 1.95 times the mass of the Sun and has expanded to 11 times the Sun's radius. The star is radiating 65 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,908 K. As the star is a member of the old disk population, the metallicity of its stellar atmosphere is much lower than solar.
References
G-type giants
Carina (constellation)
Carinae, k
Durchmusterung objects
081101
045856
3728 | HD 81101 | Astronomy | 265 |
3,340,052 | https://en.wikipedia.org/wiki/Noise%20measurement | In acoustics, noise measurement can be for the purpose of measuring environmental noise or measuring noise in the workplace. Applications include monitoring of construction sites, aircraft noise, road traffic noise, entertainment venues and neighborhood noise. One of the definitions of noise covers all "unwanted sounds". When sound levels reach a high enough intensity, the sound, whether it is wanted or unwanted, may be damaging to hearing. Environmental noise monitoring is the measurement of noise in an outdoor environment caused by transport (e.g. motor vehicles, aircraft, and trains), industry (e.g. machines) and recreational activities (e.g. music). The laws and limits governing environmental noise monitoring differ from country to country.
At the very least, noise may be annoying or displeasing or may disrupt the activity or balance of human or animal life, increasing levels of aggression, hypertension and stress. In the extreme, excessive levels or periods of noise can have long-term negative health effects such as hearing loss, tinnitus, sleep disturbances, a rise in blood pressure, an increase in stress and vasoconstriction, and an increased incidence of coronary artery disease. In animals, noise can increase the risk of death by altering predator or prey detection and avoidance, interfering with reproduction and navigation, and contributing to permanent tinnitus and hearing loss.
Various interventions are available to combat environmental noise. Roadway noise can be reduced by the use of noise barriers, limitation of vehicle speeds, alteration of roadway surface texture, limitation of heavy vehicles, use of traffic controls that smooth vehicle flow to reduce braking and acceleration, and tire design. Aircraft noise can be reduced by using quieter jet engines, altering flight paths and considering the time of day to benefit residents near airports. Industrial noise is addressed by redesign of industrial equipment, shock mounted assemblies and physical barriers in the workplace.
Noise may be measured using a sound level meter at the source of the noise. Alternatively, an organization or company may measure a person's exposure to environmental noise in a workplace via a noise dosimeter. The measurements taken using either of these methods will be evaluated according to the standards below.
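For instance, a series of sound pressure level readings is commonly reduced to a single equivalent continuous level (Leq) before comparison with regulatory limits. The following is a minimal sketch in Python (function and variable names are illustrative; it assumes equally spaced readings); because decibels are logarithmic, the readings must be averaged as acoustic energies rather than as dB values:

```python
import math

def leq(spl_samples_db):
    """Equivalent continuous sound level:
    Leq = 10 * log10(mean of 10**(L_i / 10)),
    i.e. the dB level of the time-averaged acoustic energy."""
    mean_energy = sum(10 ** (level / 10) for level in spl_samples_db) / len(spl_samples_db)
    return 10 * math.log10(mean_energy)

# A single loud event dominates the energy average:
print(round(leq([60, 60, 60, 90]), 1))  # 84.0 dB, not the arithmetic mean of 67.5
```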
Audio Systems and Broadcasting
Noise measurement can also be part of a test procedure using white noise, or some other specialized form of test signal. In audio systems and broadcasting, specific methods are used to obtain subjectively valid results in order that different devices and signal paths may be compared regardless of the inconsistent spectral distribution and temporal properties of the noise that they generate. In particular, the ITU-R 468 noise weighting was devised specifically for this purpose and is widely used for professional audio and broadcast measurements.
Standards
There are a number of standards for noise measurement, each with a different goal or focus, including:
ITU-R BS 468, widely used in broadcasting and professional audio.
IEC A-weighting, widely used in environmental noise measurement.
CCIR recommendation 468-4, now maintained as ITU-R BS 468.
CCITT 0.41, the 'psophometric weighting' used on telephone circuits.
CCITT P53, now continued as CCITT 0.41.
BS 6402:1983, which specifies personal sound exposure meters.
BS 3539:1968, which specifies sound level meters for motor vehicle noise.
BS EN 60651, which supersedes BS 5969:1981, Sound level meters.
See also
Sound power level LWA
Audio system measurements
Rumble measurement
Noise (environmental)
Noise pollution
Noise music
Noise dosimeter
Equal-loudness contour
A-weighting
C-weighting
Weighting filter
References
External links
Noise-Planet: app to make an open source noise map of environmental noise
Koopen: Indoor Noise Measurement Dataset
Noise
Noise pollution
Sound measurements
Acoustics
Sound
Occupational safety and health | Noise measurement | Physics,Mathematics | 771 |
76,592,348 | https://en.wikipedia.org/wiki/StackBlitz | StackBlitz is a collaborative online integrated development environment (IDE). The platform allows server-side software such as Node.js to be run entirely in the web browser, enabling fully online full-stack development. A number of web frameworks such as React, Next.js and Angular are supported.
History
StackBlitz was released to the public on August 2, 2017 by entrepreneur Eric Simons as an online integrated development environment for creating and sharing Angular and React projects. Prior to launching StackBlitz, Simons had attracted media attention by secretly living at AOL headquarters for two months in 2011 while working on a different startup company.
In May 2021, StackBlitz released WebContainers, a containerization solution that allowed server-side runtime environments such as Node.js to operate fully within web browsers. The company stated that the technology could boot development environments in less than a second, and was more secure than local environments due to running fully within the browser sandbox.
Features
StackBlitz offers an online integrated development environment that operates fully within a user's web browser as opposed to a more traditional local development environment. The software primarily emphasizes JavaScript development and has a large number of web framework templates readily available. Other Node.js, Python and PHP projects are also supported.
References
External links
2018 establishments in California
Online integrated development environments
2018 software
Web services | StackBlitz | Technology | 284 |
1,151,454 | https://en.wikipedia.org/wiki/Pregnancy%20over%20age%2050 | Pregnancy over the age of 50 has become possible for more women because of advances in assisted reproductive technology, in particular egg donation. Typically, a woman's fecundity ends with menopause, which, by definition, is 12 consecutive months without any menstrual flow at all. During perimenopause, the menstrual cycle and the periods become irregular and eventually stop altogether. The female biological clock can vary greatly from woman to woman. A woman's individual level of fertility can be tested through a variety of methods.
In the United States, between 1997 and 1999, 539 births were reported among mothers over age 50 (four per 100,000 births), with 194 being over 55.
The oldest recorded age at which a mother has conceived is 74 years. According to statistics from the Human Fertilisation and Embryology Authority, in the UK more than 20 babies are born to women over age 50 per year through in vitro fertilization with the use of donor oocytes (eggs).
Maria del Carmen Bousada de Lara formerly held the record of oldest verified mother; she was aged 66 years 358 days when she gave birth to twins, 130 days older than Adriana Iliescu, who gave birth in 2005 to a baby girl. In both cases, the children were conceived through IVF with donor eggs. The oldest verified mother to conceive naturally (listed currently in the Guinness Records) is Dawn Brooke (Guernsey); she conceived a son at the age of 59 in 1997.
Erramatti Mangamma, who gave birth at the age of 73 through in-vitro fertilisation via caesarean section in the city of Hyderabad, India, currently holds the record for being the oldest living mother. She delivered twin baby girls, making her also the oldest mother to give birth to twins.
The previous record for being the oldest living mother was held by Daljinder Kaur Gill from Amritsar, India, who gave birth to a baby boy at age 72 through in-vitro fertilisation.
Age considerations
Menopause typically occurs between 44 and 58 years of age. DNA testing is rarely carried out to confirm claims of maternity at advanced ages, but in one large study, among 12,549 African and Middle Eastern immigrant mothers, confirmed by DNA testing, only two mothers were found to be older than fifty; the oldest mother being 52.1 years at conception (and the youngest mother 10.7 years old).
Medical considerations
The risk of pregnancy complications increases as the mother's age increases. Risks associated with childbearing over the age of 50 include an increased incidence of gestational diabetes, hypertension, delivery by caesarean section, miscarriage, preeclampsia, and placenta previa. In comparison to mothers between 20 and 29 years of age, mothers over 50 are at almost three times the risk of low birth weight, premature birth, and extremely premature birth; their risk of extremely low birth weight, small size for gestational age, and fetal mortality was almost double.
Cases of pregnancy over age 50
Debate
Pregnancies among older women have been a subject of controversy and debate. Some argue against motherhood late in life on the basis of the health risks involved, or out of concern that an older mother might not be able to give proper care for a child as she ages, while others contend that having a child is a fundamental right and that it is commitment to a child's wellbeing, not the parents' ages, that matters.
A survey of attitudes towards pregnancy over age 50 among Australians found that 54.6% believed it was acceptable for a post-menopausal woman to have her own eggs transferred and that 37.9% believed it was acceptable for a post-menopausal woman to receive donated ova or embryos.
Governments have sometimes taken actions to regulate or restrict later-in-life childbearing. In the 1990s, France approved a bill which prohibited post-menopausal pregnancy, which the French Minister of Health at the time, Philippe Douste-Blazy, said was "... immoral as well as dangerous to the health of mother and child". In Italy, the Association of Medical Practitioners and Dentists prevented its members from providing women aged 50 and over with fertility treatment. Britain's then-Secretary of State for Health, Virginia Bottomley, stated, "Women do not have the right to have a child; the child has a right to a suitable home". However, in 2005, age restrictions on IVF in the United Kingdom were officially withdrawn.
Legal restrictions are only one of the barriers confronting women seeking IVF, as many fertility clinics and hospitals set age limits of their own.
See also
List of oldest fathers
List of people with the most children
List of multiple births
Mother
Pregnancy
Sexuality in older age
References
Biological records
Gerontology
50
Lists of superlatives
Lists of mothers
Sexuality and age | Pregnancy over age 50 | Biology | 1,013 |
2,239,651 | https://en.wikipedia.org/wiki/Betaretrovirus | Betaretrovirus is a genus of the Retroviridae family. It has type B or type D morphology. Type B morphology is found in a few exogenous, vertically transmitted and endogenous viruses of mice, while some primate and sheep viruses have type D morphology.
Examples are Mouse mammary tumor virus, enzootic nasal tumor virus (ENTV-1, ENTV-2), and simian retrovirus types 1, 2 and 3 (SRV-1, SRV-2, SRV-3).
References
External links
Viralzone: Betaretrovirus
Betaretroviruses
Virus genera | Betaretrovirus | Biology | 128 |
52,372,684 | https://en.wikipedia.org/wiki/Fusarium%20acutatum | Fusarium acutatum is a fungus species of the genus Fusarium. It can cause gangrenous necrosis of the feet in diabetic patients. Fusarium acutatum produces fumonisin B1, fumonisin B2, fumonisin B3 and 8-O-methyl-fusarubin.
References
Further reading
acaciae-mearnsii
Fungi described in 1998
Fungus species | Fusarium acutatum | Biology | 94 |
73,404,749 | https://en.wikipedia.org/wiki/Sea%20ice%20brine%20pocket | A sea ice brine pocket is an area of fluid sea water with a high salt concentration trapped in sea ice as it freezes. Due to the nature of their formation, brine pockets are most commonly found in regions where it is sufficiently cold for seawater to freeze and form sea ice. Though the high salinity and low light conditions of brine pockets create a challenging environment for marine mammals, brine pockets serve as a habitat for various microbes. Sampling and studying these pockets requires specialized equipment to accommodate the hypersaline conditions and subzero temperatures.
Formation
Brine pockets and channels are formed as seawater freezes, through a process called brine rejection. When sea ice forms, the water molecules form ice crystals, which have a regular lattice structure. The dissolved salts (such as NaCl) in the sea water cannot be incorporated into this lattice, resulting in the salt being rejected from the sea ice. As seawater freezes and more pure water ice forms, the salt becomes more highly concentrated in the remaining sea water, forming a brine. As the brine salinity increases, it becomes denser than the surrounding sea ice, and the brine sinks downward through the ice, forming brine pockets. As the brine pockets form, they begin to coalesce, forming pockets of dense and saline brine. As these larger pockets of brine become interconnected, they may form a network of brine channels within the ice.
Analysis of structure
The internal structure of sea ice can be analyzed using scanning electron microscopy and water-soluble resin. Brine can be drained from the sea ice using centrifugation at sufficiently cold temperatures to prevent melting and to maintain the structural integrity of the sea ice sample. Water-soluble resin is then injected to fill the brine pockets and channels and subsequently polymerized under ultraviolet light. The ice is sublimated by freeze drying, freeing the hardened casts, which can be examined using scanning electron microscopes to determine the structure of the brine pockets and channels and the volume of habitable space available to microbes.
Abiotic conditions
Variability
Sea ice brine pockets create diverse and unique microecosystems, with abiotic factors such as chemical composition and physical conditions varying from one pocket to the next. Snow cover and temperature play the most significant role in influencing the variation of conditions present in brine pockets and channels. Sea ice brine pockets in general are extreme environments, due to their subzero temperatures and high salinities, but they harbor a diverse ecosystem of microbial life. Conditions within a brine pocket can change drastically in a short time with a heavy snowfall or sudden temperature change, which means that microbial life within brine pockets must be able to tolerate rapid environmental change.
Hypersaline environment
As sea ice forms, the water freezes into a lattice structure; this process ejects many of the salts and microbes from the ice, concentrating them in the remaining water. This high-salinity seawater is known as brine, and as more salts accumulate within the brine pockets, the remaining brine becomes more resistant to freezing. This accumulation of salts, which keeps the brine liquid at subzero temperatures, provides a harsh but suitable environment for microorganisms to survive. These brine pockets maintain a very saline environment, have high concentrations of other dissolved minerals, and support a high density of microbial life. Brine salinity and concentration depend directly on the air temperature of the surrounding environment; as temperatures decrease, more salts become rejected from newly-formed ice, causing more salts to accumulate within the brine, and brine pockets decrease in size. This results in a hypersaline environment with dissolved salt contents which can reach up to 200 g/kg, in contrast to open seawater, which has a salinity of 33-37 g/kg.
Light limitation
Brine pockets can form deep within sea ice where there is very low irradiance. Since snow and ice block and reflect incoming light, deeper brine pockets experience more light limitation than shallower ones. When salts in seawater become rejected during ice formation, these salts can precipitate and accumulate within the ice, influencing the ability of light to pass through it. Given that more salts will precipitate at colder temperatures as brine becomes more concentrated, colder temperatures can result in a greater change to the optics of the ice as more salts accumulate. Lower light levels in brine pockets can impact the survivability of photosynthetic organisms such as cyanobacteria and diatoms. These organisms have developed adaptations so that they can survive in this extremely light-limited environment.
Microbial diversity and abundance
Bacteria
Brine pockets are home to a diverse and dynamic community of marine bacteria adapted to survive and thrive in the extreme cold, called psychrophiles. Psychrophiles are capable of synthesizing enzymes that remain active at low temperatures, allowing them to metabolize in the extremely cold conditions of brine pockets and channels. Bacteria in brine pockets must also be able to tolerate high salt concentrations, so these bacteria are also halophilic. Halophilic psychrophiles are found within Proteobacteria, Actinobacteria and Bacteroidetes.
Two classes of Proteobacteria found to be abundant in brine pockets are Gammaproteobacteria and Alphaproteobacteria. Many Gammaproteobacteria are capable of degrading organic matter, making them important for nutrient cycling and organic matter turnover within the brine pocket. For example, aerobic anoxygenic phototrophic (AAP) bacteria are found in marine environments and use bacteriochlorophyll to harvest light energy that supplements their electron transport chain. Alphaproteobacteria include species that are known to be important for nitrogen cycling and carbon cycling in marine environments. Some Alphaproteobacteria are capable of nitrogen fixation, which can provide an important source of nitrogen for other microorganisms within the pocket.
Actinobacteria are also halophilic psychrophiles that have been found in brine pockets, known for their ability to produce a wide range of secondary metabolites, including antibiotics and other bioactive compounds. Actinobacteria are often found in association with other microorganisms, where they may play a role in protecting their host from pathogens or other threats.
Lastly, Bacteroidetes are abundant in brine pockets, as they can degrade complex organic matter, including carbohydrates and proteins, such as algae-derived ocean polysaccharides. Compared to other bacteria, Bacteroidetes species have been shown to contain more genes associated with polysaccharide degradation, allowing them to play a major role in brine pocket carbon and nutrient cycling.
Viruses
Brine pockets can support a wide variety of bacteria, and they are also home to high concentrations of marine viruses. Marine viruses in brine pockets may play a major role in regulating the population dynamics of their hosts and influencing biogeochemical cycles within the pocket. As viruses are highly specific to their hosts, viruses in brine pockets include bacteriophages, which infect bacteria, and archaeal viruses, which infect archaea. Algal viruses and other eukaryotic viruses can also be present in brine pockets, which influences the productivity and diversity of these microorganisms. Marine viruses in brine pockets can also influence biogeochemical processes by releasing nutrients through the lysis of infected cells, and by facilitating horizontal gene transfer between hosts. Infections caused by viruses can also trigger changes in the host metabolism, leading to altered nutrient uptake and production of metabolites, which in turn can influence the surrounding environment.
The few studies on viral abundance and composition in brine pockets focus mainly on the diverse concentrations of viruses, separated by molecular size. Brine pockets in Antarctic lakes have been found to harbor three groups of viruses at different abundances. In Arctic waters, viral concentrations were found to vary from 1.6 to 82 × 10⁶ ml⁻¹, with the highest concentrations found in the coldest brine pockets (–24 to –31 °C).
Protists
Brine pockets harbor a diverse and abundant array of protists that are able to survive in extreme conditions. The most common protists in sea ice are pennate diatoms, which can accumulate in numbers so high that sea ice is visibly discolored brown. Sea ice pennate diatom populations can become very dense, reaching up to 1000 μg of chlorophyll per liter of seawater, compared to a typical maximum of 5 μg/L in the open ocean. Due to their high abundance in sea ice, pennate diatoms can profoundly impact the microecosystem within a brine pocket, for example through dimethylsulfoniopropionate (DMSP) production. Although diatoms themselves are not high producers of DMSP overall, because of their high abundance within sea ice, the amount of DMSP they produce as a cryoprotectant and osmoregulator can be significant.
In addition to pennate diatoms, brine pockets and channels house a variety of flagellates, amoebae, and ciliates. Protist abundance and diversity within a brine pocket or channel is primarily limited by its structure. Specifically, the size of pores and channels within the ice can limit or encourage the distribution of certain protists and metazoans: areas with larger pore sizes can have greater abundances of large predatory protists such as ciliates, while areas with smaller pore sizes have reduced populations of predatory protists. Brine pockets accessed by smaller pores can thus support a higher abundance of photoautotrophic protists as well as smaller heterotrophic protists, owing to the limited grazing pressure from large predators such as large ciliates and metazoans.
High population densities
Since sea ice pockets are confined and highly concentrated ecosystems, they can house population densities of bacteria and protists several orders of magnitude greater than those found in the open ocean (up to thousands of individuals per liter for protists). This high abundance of organisms can pose challenges, as different bacteria and protists compete for resources. A high density of microorganisms can result in the accumulation of metabolic byproducts, such as oxygen, dissolved organic matter, ammonia, and DMSP. Some organisms can gain a selective advantage within brine pockets, as the high population density and close proximity of organisms can increase rates of horizontal gene transfer. Horizontal gene transfer can allow certain organisms to obtain genes from bacteria that may be advantageous in a light-limited, extremely cold environment.
Microbial adaptations
Survival in sea ice brine pockets and channels, which are freezing, hypersaline, and light-limited environments, requires organisms to adapt well to these conditions. Photosynthetic protists and cyanobacteria need to be able to produce energy through alternate metabolic pathways when light is limited within brine pockets. Sea ice brine pockets in Arctic and Antarctic sea ice sheets will experience several weeks of no light at certain locations. In addition to sea ice and snow blocking light from entering brine pockets, seasonal variations of light levels can result in brine pockets being extremely light-limited at times. Sea ice diatoms can alter their metabolic and photosynthetic pathways to survive during periods of little-to-no light. Such adaptations include developing flexible photosystems and altering photosynthetic pigment compositions to allow diatoms to photoacclimate and maintain high photosynthetic efficiency when light levels are low. Sea ice diatoms also have the ability to upregulate and downregulate proteins required for photosynthesis rapidly as light levels change, which helps them survive the environmental stresses of becoming trapped in sea ice and being released back into the ocean as ice melts. Additionally, sea ice microalgae (photosynthetic protists) may be mixotrophic, allowing them to switch to heterotrophy when light is limited. Some research has shown that sea ice diatoms can use an ancient bacterial metabolic pathway known as the Entner−Doudoroff pathway (EDP) to maintain metabolism and energy production during light limitation.
The ability of diatoms to use light for energy also depends on air temperature. As it gets colder, the thylakoid membranes within the microalgae plastids can become dense and compact, which influences how certain photosynthetic proteins (such as the proteins necessary for Photosystems I & II) function and self-assemble. Sea ice diatoms can alter the saturation of the fatty acids that compose the thylakoid membranes as temperatures decrease, which can provide more fluidity to these membranes and result in proper folding of photosynthetic proteins at subzero temperatures.
As temperatures within brine pockets decrease, organisms that survive within brine pockets produce substances that can help prevent freezing. Some sea ice diatoms can produce specialized ice-binding proteins and extracellular polymeric substances, which can help increase the habitat space available within a brine pocket by preventing ice formation and reducing the freezing temperature of the brine. Decreased temperatures can also reduce the efficiency of important physiological processes within many microorganisms. Psychrophilic diatoms and bacteria have the ability to regulate their production of proteins, DNA, and enzymes required for metabolism to help maintain metabolic efficiency in colder temperatures. In the same way that diatoms can regulate the fatty acid composition within their plastid membranes, they can also regulate the plasma membranes surrounding each cell. As temperatures decrease, membranes become less fluid. Both bacteria and sea ice diatoms can alter the fatty acid composition within their membranes to include more unsaturated fatty acids, which allow the plasma membrane to maintain fluidity in extreme cold temperatures.
Sampling
Melted sample analysis
Methods used to study larger eukaryotes present in sea ice are also used to study other smaller microbes. Regardless of sea ice type, standard practice has been to melt the collected sea ice sample before analysis, for convenience. Analytical methods developed to investigate pelagic microbes can readily be applied to these melted sea ice samples. One drawback of this approach is that melting the sea ice exposes microbes accustomed to the hypersaline conditions of brine pockets and channels to significantly fresher water. The melting sea ice contains little-to-no salt, greatly diluting the salt concentration of the liquid phase of the sea ice sample. Osmotic shock and lysis may occur if the salinity decreases too much; additionally, careless warming of the sea ice sample may cause the microbes present to undergo thermal shock. One solution has been to melt the ice into a known volume of filtered seawater (free of pelagic microbes) kept at subzero temperatures. This minimizes the decrease in salinity and drop in temperature and subsequently minimizes the loss of live microbes in the sample. Ice samples colder than –10 °C, however, will still see the loss of over half of the microbial population in the sample when using this approach. Colder ice samples will have brine pools with microbe populations that are adapted to significantly greater salinity and much colder temperatures than the underlying seawater, requiring them to be melted into sterile brine solutions that match their further elevated salinity and even lower temperatures prior to analysis.
Unmelted sample analysis
Methods to analyze the microbe populations of colder, unmelted ice samples (cold enough to prevent brine drainage) under microscopes were developed using specialized equipment. Epifluorescence microscopes that can operate at subzero temperatures allow researchers to observe undisturbed brine pool microbe populations, with the DNA stain DAPI (4′,6-diamidino-2-phenylindole) mixed into an adequately salty and cold brine solution to highlight non-autofluorescing microbes. Alternatively, a microscope with a cold stage, commonly used to study glacial ice, may also be used to study unmelted sea ice with the right modifications.
Other stains such as Alcian Blue (which stains extracellular polysaccharide substances) and CTC (5-cyano-2,3-ditolyl tetrazolium, which stains oxygen-respiring bacteria) have also been used. Alcian Blue staining has revealed that extracellular polymeric substances (EPS) are ubiquitous throughout brine pools found in sea ice, even in brine pools with no visible microbes. Some EPS originates from seawater before freezing, but it is also produced in copious amounts within algal bands and, to a lesser extent, by bacteria throughout the entirety of the sea ice. CTC staining has indicated greater percentages of microbial activity within the sea ice than in the seawater below it, especially among bacteria associated with particulate matter.
CTC has also been applied to the staining of unmelted sections of sea ice sampled during spring and summer, which were subsequently returned to the ice core holes they were collected from for in situ incubation. After recollection, metabolic activity was halted by adding a fixative into the melting sea ice. DAPI and Alcian Blue were then used to stain subsamples of the resulting melted sea ice sample, bypassing the restrictive temperature requirement. It was found that gel-like particles of EPS associated with bacteria were in situ bacterial activity hotspots.
Extracellular enzyme activity has been detected down to as low as –18 °C in unmelted sea ice using a fluorescently-labeled protein substrate analogue. Relying on melted sea ice samples runs the risk of underestimating in situ activity due to the dilution of microbial populations.
Direct collection
A thick portion of sea ice is partially drilled into to create a hole that is covered and left to accumulate draining brine at the bottom before being collected later. This brine drainage occurs much more slowly as temperatures decrease, especially below –5 °C, which is the limit for bulk ice permeability. One limitation to this method is that the origins of the drained brine, as well as what proportion of microbes were left behind in the brine pool, cannot be known with certainty. Studies on these “sackhole” brines have illustrated that substantial bacteria and viruses can be found within brine pools.
References
Wikipedia Student Program
Sea ice | Sea ice brine pocket | Physics | 3,873 |
35,215,882 | https://en.wikipedia.org/wiki/Isothermal%20microcalorimetry | Isothermal microcalorimetry (IMC) is a laboratory method for real-time monitoring and dynamic analysis of chemical, physical and biological processes. Over a period of hours or days, IMC determines the onset, rate, extent and energetics of such processes for specimens in small ampoules (e.g. 3–20 ml) at a constant set temperature (c. 15 °C–150 °C).
IMC accomplishes this dynamic analysis by measuring and recording vs. elapsed time the net rate of heat flow (μJ/s = μW) to or from the specimen ampoule, and the cumulative amount of heat (J) consumed or produced.
IMC is a powerful and versatile analytical tool for four closely related reasons:
All chemical and physical processes are either exothermic or endothermic—produce or consume heat.
The rate of heat flow is proportional to the rate of the process taking place.
IMC is sensitive enough to detect and follow either slow processes (reactions proceeding at a few % per year) in a few grams of material, or processes which generate minuscule amounts of heat (e.g. metabolism of a few thousand living cells).
IMC instruments generally have a huge dynamic range—heat flows as low as ca. 1 μW and as high as ca. 50,000 μW can be measured by the same instrument.
The IMC method of studying rates of processes is thus broadly applicable, provides real-time continuous data, and is sensitive. The measurement is simple to make, takes place unattended and is non-interfering (e.g. no fluorescent or radioactive markers are needed).
However, there are two main caveats that must be heeded in use of IMC:
Missed data: If externally prepared specimen ampoules are used, it takes ca. 40 minutes to slowly introduce an ampoule into the instrument without significant disturbance of the set temperature in the measurement module. Thus any processes taking place during this time are not monitored.
Extraneous data: IMC records the aggregate net heat flow produced or consumed by all processes taking place within an ampoule. Therefore, in order to be sure what process or processes are producing the measured heat flow, great care must be taken in both experimental design and in the initial use of related chemical, physical and biologic assays.
In general, possible applications of IMC are only limited by the imagination of the person who chooses to employ it as an analytical tool and the physical constraints of the method. Besides the two general limitations (main caveats) described above, these constraints include specimen and ampoule size, and the temperatures at which measurements can be made. IMC is generally best suited to evaluating processes which take place over hours or days. IMC has been used in an extremely wide range of applications, and many examples are discussed in this article, supported by references to published literature. Applications discussed range from measurement of slow oxidative degradation of polymers and instability of hazardous industrial chemicals to detection of bacteria in urine and evaluation of the effects of drugs on parasitic worms. The present emphasis in this article is applications of the latter type—biology and medicine.
Overview
Definition, purpose, and scope
Calorimetry is the science of measuring the heat of chemical reactions or physical changes. Calorimetry is performed with a calorimeter.
Isothermal microcalorimetry (IMC) is a laboratory method for real-time, continuous measurement of the heat flow rate (μJ/s = μW) and cumulative amount of heat (J) consumed or produced at essentially constant temperature by a specimen placed in an IMC instrument. Such heat is due to chemical or physical changes taking place in the specimen. The heat flow is proportional to the aggregate rate of changes taking place at a given time. The aggregate heat produced during a given time interval is proportional to the cumulative amount of aggregate changes which have taken place.
IMC is thus a means for dynamic, quantitative evaluation of the rates and energetics of a broad range of rate processes, including biological processes. A rate process is defined here as a physical and/or chemical change whose progress over time can be described either empirically or by a mathematical model (Bibliography: Glasstone, et al. 1941 and Johnson, et al. 1974 and rate equation).
The simplest use of IMC is detecting that one or more rate processes are taking place in a specimen because heat is being produced or consumed at a rate that is greater than the detection limit of the instrument used. This can be useful, for example, as a general indicator that a solid or liquid material is not inert but instead is changing at a given temperature. In biological specimens containing a growth medium, the appearance over time of a detectable and rising heat flow signal is a simple general indicator of the presence of some type of replicating cells.
However, for most applications it is paramount to know, by some means, what process or processes are being measured by monitoring heat flow. In general this entails first having detailed physical, chemical and biological knowledge of the items placed in an IMC ampoule before it is placed in an IMC instrument for evaluation of heat flow over time. It is also then necessary to analyze the ampoule contents after IMC measurements of heat flow have been made for one or more periods of time. Also, logic-based variations in ampoule contents can be used to identify the specific source or sources of heat flow. When rate process and heat flow relationships have been established, it is then possible to rely directly on the IMC data.
What IMC can measure in practice depends in part on specimen dimensions, and they are necessarily constrained by instrument design. A given commercial instrument typically accepts specimens of up to a fixed diameter and height. Instruments accepting specimens with dimensions of up to ca. 1 or 2 cm in diameter x ca. 5 cm in height are typical. In a given instrument larger specimens of a given type usually produce greater heat flow signals, and this can augment detection and precision.
Frequently, specimens are simple 3 to 20 ml cylindrical ampoules (Fig. 1) containing materials whose rate processes are of interest—e.g. solids, liquids, cultured cells—or any combination of these or other items expected to result in production or consumption of heat. Many useful IMC measurements can be carried out using simple sealed ampoules, and glass ampoules are common since glass is not prone to undergoing heat-producing chemical or physical changes. However, metal or polymeric ampoules are sometimes employed. Also, instrument/ampoule systems are available which allow injection or controlled through-flow of gasses or liquids and/or provide specimen mechanical stirring.
Commercial IMC instruments allow heat flow measurements at temperatures ranging from ca. 15 °C – 150 °C. The range for a given instrument may be somewhat different.
IMC is extremely sensitive – e.g. heat from slow chemical reactions in specimens weighing a few grams, taking place at reactant consumption rates of a few percent per year, can be detected and quantified in a matter of days. Examples include gradual oxidation of polymeric implant materials and shelf life studies of solid pharmaceutical drug formulations (Applications: Solid materials).
Also the rate of metabolic heat production of e.g. a few thousand living cells, microorganisms or protozoa in culture in an IMC ampoule can be measured. The amount of such metabolic heat can be correlated (through experimentation) with the number of cells or organisms present. Thus, IMC data can be used to monitor in real time the number of cells or organisms present and the net rate of growth or decline in this number (Applications: Biology and medicine).
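As an illustration of such a correlation, actively metabolizing bacteria are often reported to dissipate heat on the order of a few picowatts per cell. The minimal Python sketch below assumes a calibrated value of 2 pW per cell — an assumed figure for illustration only, since the true value must be established experimentally for each organism and growth medium:

```python
HEAT_PER_CELL_PW = 2.0  # assumed experimental calibration, pW per active cell

def estimated_cell_count(heat_flow_uW):
    """Convert a net metabolic heat flow (in microwatts) into an
    approximate number of active cells; 1 uW = 1e6 pW."""
    return heat_flow_uW * 1e6 / HEAT_PER_CELL_PW

print(f"{estimated_cell_count(10):.1e}")  # 5.0e+06 cells for a 10 uW signal
```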
Although some non-biological applications of IMC are discussed (Applications: Solid materials) the present emphasis in this article is on the use of IMC in connection with biological processes (Applications: Biology and medicine).
Data obtained
A graphic display of a common type of IMC data is shown in Fig. 2. At the top is a plot of recorded heat flow (μJ/s = μW) vs. time from a specimen in a sealed ampoule, due to an exothermic rate process which begins, accelerates, reaches a peak heat flow and then subsides. Such data are directly useful (e.g. detection of a process and its duration under fixed conditions), but the data are also easily assessed mathematically to determine process parameters. For example, Fig. 2 also shows an integration of the heat flow data, giving accumulated heat (J) vs. time. As shown, parameters such as the maximum growth (heat generation) rate of the process, and the duration of the lag phase before the process reaches maximum heat flow, can be calculated from the integrated data. Calculations using heat flow rate data stored as computer files are easily automated. Analyzing IMC data in this manner to determine growth parameters has important applications in the life sciences (Applications: Biology and medicine). Also, heat flow rates obtained at a series of temperatures can be used to obtain the activation energy of the process being evaluated (Hardison et al. 2003).
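The analysis described above is straightforward to automate once heat flow vs. time data are stored as computer files. The following minimal Python sketch (illustrative, not taken from any instrument vendor's software) integrates a heat-flow trace by the trapezoidal rule to obtain cumulative heat, reads off the peak heat flow, estimates the lag time as the intercept of the steepest tangent to the cumulative-heat curve (one common convention), and fits an Arrhenius line to rates measured at several temperatures to estimate an activation energy:

```python
import math

def analyze_trace(t, p):
    """t: sample times (s); p: heat flow (uW).
    Returns cumulative heat (uJ), peak heat flow (uW), and a
    tangent-intercept estimate of the lag time (s)."""
    q = [0.0]  # cumulative heat by trapezoidal integration of P(t)
    for i in range(1, len(t)):
        q.append(q[-1] + 0.5 * (p[i] + p[i - 1]) * (t[i] - t[i - 1]))
    i_max = max(range(len(p)), key=lambda i: p[i])
    p_max = p[i_max]
    # The steepest tangent to Q(t) has slope p_max; its intercept with
    # Q = 0 gives the lag estimate: t_lag = t_max - Q(t_max) / p_max.
    lag = t[i_max] - q[i_max] / p_max if p_max > 0 else None
    return q, p_max, lag

def activation_energy(temps_K, rates):
    """Least-squares Arrhenius fit, ln(rate) = ln(A) - Ea/(R*T);
    returns Ea in J/mol."""
    R = 8.314
    x = [1.0 / T for T in temps_K]
    y = [math.log(r) for r in rates]
    n, sx, sy = len(x), sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope * R
```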
Development history
Lavoisier and Laplace are credited with creating and using the first isothermal calorimeter in ca. 1780 (Bibliography: Lavoisier A & Laplace PS 1780). Their instrument employed ice to produce a relatively constant temperature in a confined space. They realized that when they placed a heat-producing specimen on the ice (e.g. a live animal), the mass of liquid water produced by the melting ice was directly proportional to the heat produced by the specimen.
Many modern IMC instrument designs stem from work done in Sweden in the late 1960s and early 1970s (Wadsö 1968, Suurkuusk & Wadsö 1974). This work took advantage of the parallel development of solid-state electronic devices—particularly commercial availability of small thermoelectric effect (Peltier-Seebeck) devices for converting heat flow into voltage—and vice versa.
In the 1980s, multi-channel designs emerged (Suurkuusk 1982), which allow parallel evaluation of multiple specimens. This greatly increased the power and usefulness of IMC and led to efforts to fine-tune the method (Thorén et al. 1989). Much of the further design and development done in the 1990s was also accomplished in Sweden by Wadsö and Suurkuusk and their colleagues. This work took advantage of the parallel development of personal computer technology which greatly augmented the ability to easily store, process and interpret heat flow vs. time data.
Instrument development work since the 1990s has taken further advantage of the continued development of solid-state electronics and personal computer technology. This has created IMC instruments of increasing sensitivity and stability, numbers of parallel channels, and even greater ability to conveniently record, store and rapidly process IMC data. In connection with wider use, substantial attention has been paid to creating standards for describing the performance of IMC instruments (e.g. precision, accuracy, sensitivity) and for methods of calibration (Wadsö and Goldberg 2001).
Instruments and measurement principles
Instrument configurations
Modern IMC instruments are actually semi-adiabatic—i.e. heat transfer between the specimen and its surroundings is not zero (adiabatic), because IMC measurement of heat flow depends on the existence of a small temperature differential—ca. 0.001 °C. However, because the differential is so low, IMC measurements are essentially isothermal. Fig. 3. shows an overview of an IMC instrument which contains 48 separate heat flow measurement modules. One module is shown. The module's measuring unit is typically a Peltier-Seebeck device. The device produces a voltage proportional to the temperature difference between a specimen which is producing or consuming heat and a thermally inactive reference which is at the temperature of the heat sink. The temperature difference is in turn proportional to the rate at which the specimen is producing or consuming heat (see Calibration below). All the modules in an instrument use the same heat sink and thermostat and thus all produce data at the same set temperature. However, it is generally possible to start and stop measurements in each ampoule independently. In a highly parallel (e.g. 48-channel) instrument like the one shown in Fig. 3, this makes it possible to perform (start and stop) several different experiments whenever it is convenient to do so.
Alternatively, IMC instruments can be equipped with duplex modules which yield signals proportional to the heat flow difference between two ampoules. One of two such duplex ampoules is often a blank or control—i.e. a specimen which does not contain the material producing the rate process of interest, but whose content is otherwise identical to that which is in the specimen ampoule. This provides a means for eliminating minor heat-producing reactions which are not of interest—for example gradual chemical changes over a period of days in a cell culture medium at the measurement temperature. Many useful IMC measurements can be carried out using simple sealed ampoules. However, as mentioned above, instrument/ampoule systems are available which allow or even control flow of gasses or liquids to and/or from the specimens and/or provide specimen mechanical stirring.
Reference inserts
Heat flow is usually measured relative to a reference insert, as shown in Fig. 3. This is typically a metal coupon that is chemically and physically stable at any temperature in the instrument's operating range and thus will not produce or consume heat itself. For best performance, the reference should have a heat capacity close to that of the specimen (e.g. IMC ampoule plus contents).
Modes of operation
Heat conduction (hc) mode
Commercial IMC instruments are often operated as heat conduction (hc) calorimeters in which heat produced by the specimen (i.e. material in an ampoule) flows to the heat sink, typically an aluminum block contained in a thermostat (e.g. constant temperature bath). As mentioned above, an IMC instrument operating in hc mode is not precisely isothermal because small differences between the set temperature and the specimen temperature necessarily exist—so that there is measurable heat flow. However, small variations in specimen temperature do not significantly affect heat sink temperature because the heat capacity of the heat sink is much higher than the specimen—usually ca. 100×.
Heat transfer between the specimen and the heat sink takes place through a Peltier-Seebeck device, allowing dynamic measurement of heat produced or consumed. In research-quality instruments, thermostat/heat sink temperature is typically accurate to < ±0.1 K and maintained within ca. < ±100 μK/24h. The precision with which heat sink temperature is maintained over time is a major determinant of the precision of the heat flow measurements over time. An advantage of hc mode is a large dynamic range. Heat flows of ca. 50,000 μW can be measured with a precision of ca. ±0.2 μW. Thus measuring a heat flow of ca. >0.2 μW above baseline constitutes detection of heat flow, although a more conservative detection of 10× the precision limit is often used.
Power compensation (pc) mode
Some IMC instruments operate (or can also be operated) as power compensation (pc) calorimeters. In this case, in order to maintain the specimen at the set temperature, heat produced is compensated using a Peltier-Seebeck device. Heat consumed is compensated either by an electric heater or by reversing the polarity of the device (van Herwaarden, 2000). If a given instrument is operated in pc mode rather than hc, the precision of heat flow measurement remains the same (e.g. ca. ±0.2 μW). The advantage of compensation mode is a smaller time constant – i.e. the time needed to detect a given heat flow pulse is ca.10X shorter than in conduction mode. The disadvantage is a ca. 10X smaller dynamic range compared to hc mode.
Calibration
For operation in either hc or pc mode, routine calibration in commercial instruments is usually accomplished with built-in electric heaters. The performance of the electrical heaters can in turn be validated using specimens of known heat capacity or which produce chemical reactions whose heat production per unit mass is known from thermodynamics (Wadsö and Goldberg 2001). In either hc or pc mode, the resulting signal is a computer-recordable voltage, calibrated to represent specimen μW-range heat flow vs. time. Specifically, if no significant thermal gradients exist in the specimen, then $P = \varepsilon_C \left( U + \tau \frac{dU}{dt} \right)$, where $P$ is the heat flow (μW), $\varepsilon_C$ is the calibration constant, $U$ the measured potential difference across the thermopile, and $\tau$ the time constant. Under steady-state conditions—for example during the release of a constant electrical calibration current—this simplifies to $P = \varepsilon_C U$ (Wadsö and Goldberg 2001).
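Applying this dynamic (Tian) correction to a sampled thermopile voltage trace is simple. The sketch below is illustrative only; the calibration constant and time constant would come from electrical calibration of the particular instrument, and at least two samples are assumed:

```python
def heat_flow(u, dt, eps_c, tau):
    """Tian equation, P = eps_c * (U + tau * dU/dt).
    u: thermopile voltages (V) sampled every dt seconds;
    eps_c: calibration constant (W/V); tau: time constant (s)."""
    p = []
    for i in range(len(u)):
        if 0 < i < len(u) - 1:
            du_dt = (u[i + 1] - u[i - 1]) / (2 * dt)  # central difference
        elif i == 0:
            du_dt = (u[1] - u[0]) / dt                # forward difference
        else:
            du_dt = (u[-1] - u[-2]) / dt              # backward difference
        p.append(eps_c * (u[i] + tau * du_dt))
    return p
# At steady state (dU/dt = 0) this reduces to P = eps_c * U.
```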
Ampoules
Many highly useful IMC measurements can be conducted in sealed ampoules (Fig. 1) which offer advantages of simplicity, protection from contamination and (where needed) a substantial margin of bio-safety for persons handling or exposed to the ampoules. A closed ampoule can contain any desired combination of solids, liquids, gasses or items of biologic origin. Initial gas composition in the ampoule head space can be controlled by sealing the ampoule in the desired gas environment.
However, there are also IMC instrument/ampoule designs which permit controlled flow of gas or liquid through the ampoule during measurement and/or mechanical stirring. Also, with proper accessories, some IMC instruments can be operated as ITC (isothermal titration calorimetry) instruments. The topic of ITC is covered elsewhere (see Isothermal titration calorimetry). In addition, some IMC instruments can record heat flow while the temperature is slowly changed (scanned) over time. The scanning rate has to be slow in order to keep IMC-scale specimens (e.g. a few grams) sufficiently close to the heat sink temperature (within ca. 0.1 °C). Fast scanning of temperature is the province of differential scanning calorimetry (DSC) instruments, which generally use much smaller specimens. Some DSC instruments can be operated in IMC mode, but the small ampoule (and therefore specimen) size needed for scanning limits the utility and sensitivity of DSC instruments used in IMC mode.
Basic methodology
Setting a temperature
Heat flow rate (μJ/s = μW) measurements are accomplished by first setting an IMC instrument thermostat at a selected temperature and allowing the instrument's heat sink to stabilize at that temperature. If an IMC instrument operating at one temperature is set to a new temperature, re-stabilization at the new temperature setting may take several hours—even a day. As explained above, achievement and maintenance of a precisely stable temperature is fundamental to achieving precise heat flow measurements in the μW range over extended times (e.g. days).
Introducing a specimen
After temperature stabilization, if an externally prepared ampoule (or some solid specimen of ampoule dimensions) is used, it is slowly introduced (e.g. lowered) into an instrument's measurement module, usually in a staged operation. The purpose is to ensure that by the time the ampoule/specimen is in the measurement position, its temperature is close to (within ca. 0.001 °C of) the measurement temperature. This is so that any heat flow then measured is due to specimen rate processes, rather than to a continuing process of bringing the specimen to the set temperature. The time for introduction of a specimen in a 3–20 ml IMC ampoule into measurement position is ca. 40 minutes in many instruments. This means that heat flow from any processes which take place within a specimen during the introduction period will not be recorded.
If an in-place ampoule is used, and some agent or specimen is injected, this also produces a period of instability, but it is on the order of 1 minute. Fig. 5 provides examples of both the long period needed to stabilize an instrument if an ampoule is introduced directly, and the short period of instability due to injection.
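Assuming simple exponential thermal relaxation (an idealization; real instruments use staged lowering precisely to manage this), one can estimate how long a specimen needs to come within 0.001 °C of the set temperature. The time constant below is hypothetical, chosen only to show that the arithmetic lands near the ca. 40-minute figure quoted above:

```python
import math

def time_to_equilibrate(delta_t0_c: float, tolerance_c: float = 0.001,
                        tau_s: float = 300.0) -> float:
    """Estimate the time (s) for a specimen's temperature offset to decay
    below `tolerance_c`, assuming exponential relaxation
    dT(t) = dT0 * exp(-t/tau).  tau_s is a hypothetical thermal time
    constant; real values depend on instrument and ampoule design.
    """
    return tau_s * math.log(delta_t0_c / tolerance_c)

# A 2 degC initial offset with tau = 300 s takes ~38 min to decay to
# 0.001 degC, the same order as the ca. 40-minute period quoted above.
print(time_to_equilibrate(2.0) / 60.0)  # ~38.0 minutes
```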
Recording data
After the introduction process, specimen heat flow can be precisely recorded continuously, for as long as it is of interest. The extreme stability of research-grade instruments (< ±100 μK/24 h) means that accurate measurements can be (and often are) made for a period of days. Since the heat flow signal is essentially readable in real time, it serves as a means for deciding whether or not heat flow of interest is still occurring. Also, modern instruments store heat flow vs. time data as computer files, so both real-time and retrospective graphic display and mathematical analysis of data are possible.
Usability
As indicated below, IMC has many advantages as a method for analyzing rate processes, but there are also some caveats that must be heeded.
Advantages
Broadly applicable
Any rate process can be studied—provided suitable specimens fit the IMC instrument module geometry and proceed at rates amenable to IMC methodology (see above). As shown under Applications, IMC is in use to quantify an extremely wide range of rate processes in vitro—e.g. from solid-state stability of polymers (Hardison et al. 2003) to efficacy of drug compounds against parasitic worms (Manneck et al. 2011). IMC can also determine the aggregate rate of uncharacterized, complex, or multiple interactions (Lewis & Daniels 2003). This is especially useful for comparative screening—e.g. the effects of different combinations of material composition and/or fabrication processes on overall physico-chemical stability.
Real-time and continuous
IMC heat flow data are obtained as voltage fluctuations vs. time, stored as computer files and can be displayed essentially in real time—as the rate process is occurring. The heat flow-related voltage is continuous over time, but in modern instruments it is normally sampled digitally. The frequency of digital sampling can be controlled as needed—i.e. frequent sampling of rapid heat flow changes for better time resolution or slower sampling of slow changes in order to limit data file size.
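The rate-dependent sampling described above can be approximated after the fact by change-based thinning: keep densely sampled points where the heat flow is changing quickly and discard points where it is not. A minimal sketch, with an illustrative threshold and function name:

```python
def downsample(times, powers, max_delta_uw=0.5):
    """Keep a data point only when the heat flow has changed by more than
    max_delta_uw (microwatts) since the last kept point, always keeping
    the endpoints.  This thins slowly varying stretches while preserving
    fast changes, limiting file size as described above.
    """
    kept_t, kept_p = [times[0]], [powers[0]]
    for t, p in zip(times[1:], powers[1:]):
        if abs(p - kept_p[-1]) > max_delta_uw:
            kept_t.append(t)
            kept_p.append(p)
    if kept_t[-1] != times[-1]:        # always retain the final sample
        kept_t.append(times[-1])
        kept_p.append(powers[-1])
    return kept_t, kept_p
```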
Sensitive and fast
IMC is sensitive enough to detect and quantify in short times (hours, days) reactions which consume only a few percent of reactants over long times (months). IMC thus avoids long waits often needed until enough reaction product has accumulated for conventional (e.g. chemical) assays. This applies to both physical and biological specimens (see Applications).
Direct
At each combination of specimen variables and set temperature of interest, IMC provides direct determination of the heat flow kinetics and cumulative heat of rate processes. This avoids any need to assume that a rate process remains the same when temperature or other controlled variables are changed before an IMC measurement.
Simple
For comparisons of the effect of experimental variables (e.g. initial concentrations) on rate processes, IMC does not require development and use of chemical or other assay methods. If absolute data are required (e.g. quantity of product produced by a process), then assays can be conducted in parallel on specimens identical to those used for IMC (and/or on IMC specimens after IMC runs). The resultant assay data is used to calibrate the rate data obtained by IMC.
Non-interfering
IMC does not require adding markers (e.g. fluorescent or radioactive substances) to capture rate processes. Unadulterated specimens can be used, and after an IMC run, the specimen is unchanged (except by the processes which have taken place). The post-IMC specimen can be subjected to any kind of physical, chemical, morphological or other evaluation of interest.
Caveats
Missed data
As indicated in the methodology description, when the IMC method of inserting a sealed ampoule is used, it is not possible to capture heat flow during the first ca. 40 minutes while the specimen is slowly being brought to the set temperature. In this mode therefore, IMC is best suited to studying processes which start slowly or occur slowly at a given temperature. This caveat also applies to the time before insertion—i.e. time elapsed between preparing a specimen (in which a rate process may then start) and starting the IMC insertion process (Charlebois et al. 2003). This latter effect is usually minimized if the temperature chosen for IMC is substantially higher (e.g. 37 °C) than the temperature at which the specimen is prepared (e.g. 25 °C).
Extraneous data
IMC captures the aggregate heat production or consumption resulting from all processes taking place within a specimen, including for example
Possible changes in the physico-chemical state of the specimen ampoule itself; e.g. stress relaxation in metal components, oxidation of polymeric components.
Degradation of a culture medium in which metabolism and growth of living cells is being studied.
Thus great care must be taken in experimental planning and design to identify all possible processes which may be taking place. It is often necessary to design and conduct preliminary studies intended to systematically determine if multiple processes are taking place and if so, their contributions to aggregate heat flow. One strategy, in order to eliminate extraneous heat flow data, is to compare heat flow for a specimen in which the rate process of interest is taking place with that from a blank specimen which includes everything in the specimen of interest—except the item which will undergo the rate process of interest. This can be directly accomplished with instruments having duplex IMC modules which report the net heat flow difference between two ampoules.
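For instruments without duplex modules, the same blank subtraction can be approximated in software, assuming the specimen and blank traces were recorded under matched conditions. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def net_heat_flow(t_spec, p_spec, t_blank, p_blank):
    """Subtract a blank specimen's heat flow from the specimen of interest.

    The blank trace is interpolated onto the specimen's time base, so the
    two runs need not share sampling instants.  This mimics in software
    what a duplex IMC module does in hardware.
    """
    p_blank_interp = np.interp(t_spec, t_blank, p_blank)
    return np.asarray(p_spec) - p_blank_interp
```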
Applications
After a discussion of some special sources of IMC application information, several specific categories of IMC analysis of rate processes are covered, and recent examples (with literature references) are discussed in each category.
Special sources of IMC application information
Handbooks
The Bibliography lists the four extensive volumes of the Handbook of Thermal Analysis and Calorimetry: Vol. 1 Principles and Practice (1998), Vol. 2 Applications to Inorganic and Miscellaneous Materials (2003), Vol. 3 Applications to Polymers and Plastics (2002), and Vol. 4 From Macromolecules to Man (1999). These constitute a prime source of information on (and literature references to) IMC applications and examples published prior to ca. 2000.
Application notes
Some IMC instrument manufacturers have assembled application notes, and make them available to the public. The notes are often (but not always) adaptations of journal papers. An example is the Microcalorimetry Compendium Vol. I and II offered by TA Instruments, Inc. and listed in the Bibliography.
"Proteins" the first section of notes in Vol. I, is not of interest here, as it describes studies employing Isothermal titration calorimetry. The subsequent sections of Vol. I, Life & Biological Sciences and Pharmaceuticals contain application notes for both IMC and Differential scanning calorimetry. Vol. II of the compendium is devoted almost entirely to IMC applications. Its sections are entitled Cement, Energetics, Material and Other. A possible drawback to these two specific compendia is that none of the notes are dated. Although the compendia were published in 2009, some of the notes describe IMC instruments which were in use years ago and are no longer available. Thus, some of the notes, while still relevant and instructive, often describe studies done before 2000.
Examples of applications
In general, possible applications of IMC are only limited by the imagination of the person who chooses to employ IMC as an analytical tool—within the previously described constraints presented by existing IMC instruments and methodology. This is because it is a universal means for monitoring any chemical, physical or biological rate process. Below are some IMC application categories with examples in each. In most categories, there are many more published examples than those mentioned and referenced. The categories are somewhat arbitrary and often overlap. A different set of categories might be just as logical, and more categories could be added.
Solid materials
Formation
IMC is widely used for studying the rates of formation of a variety of materials by various processes. It is best suited to studying processes which occur slowly—i.e. over hours or days. A prime example is the study of hydration and setting reactions of calcium mineral cement formulations. One paper provides an overview (Gawlicki et al. 2010) and another describes a simple approach (Evju 2003). Other studies focus on insights into cement hydration provided by IMC combined with IR spectroscopy (Ylmen et al. 2010) and on using IMC to study the influence of compositional variables on cement hydration and setting times (Xu et al. 2011).
IMC can also be conveniently used to study the rate and amount of hydration (in air of known humidity) of calcium minerals or other minerals. To provide air of known humidity for such studies, small containers of saturated salt solutions can be placed in an IMC ampoule along with a non-hydrated mineral specimen. The ampoule is then sealed and introduced into an IMC instrument. The saturated salt solution keeps the air in the ampoule at a known rH, and various common salt solutions provide humidities ranging from ca. 32% to 100% rH. Such studies have been performed on μm-size-range calcium hydroxyapatite particles and calcium-containing bioactive glass "nano" particles (Doostmohammadi et al. 2011).
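For reference, a small lookup of approximate equilibrium relative humidities over common saturated salt solutions at 25 °C (values rounded from standard reference tables; the helper function is illustrative):

```python
# Approximate equilibrium relative humidities (%) over saturated salt
# solutions at 25 degC, rounded from standard reference tables.
SALT_RH_25C = {
    "LiCl": 11.3,
    "MgCl2": 32.8,
    "Mg(NO3)2": 52.9,
    "NaCl": 75.3,
    "KCl": 84.3,
    "K2SO4": 97.3,
}

def pick_salt(target_rh: float) -> str:
    """Return the salt whose saturated solution best approximates target_rh."""
    return min(SALT_RH_25C, key=lambda s: abs(SALT_RH_25C[s] - target_rh))

print(pick_salt(33))   # MgCl2 -- near the ~32% rH lower bound cited above
print(pick_salt(100))  # K2SO4 -- the high end of the range
```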
Stability
IMC is well suited for rapidly quantifying the rates of slow changes in materials (Willson et al. 1995). Such evaluations are variously described as studies of stability, degradation, or shelf life.
For example, IMC has been widely used for many years in shelf life studies of solid drug formulations in the pharmaceutical industry (Pikal et al. 1989, Hansen et al. 1990, Konigbauer et al. 1992). IMC has the ability to detect slow degradation during simulated shelf storage far sooner than conventional analytical methods, and without the need to employ chemical assay techniques. IMC is also a rapid, sensitive method for determining the often functionally crucial amorphous content of drugs such as nifedipine (Vivoda et al. 2011).
IMC can be used for rapidly determining the rate of slow changes in industrial polymers. For example, gamma radiation sterilization of a material frequently used for surgical implants—ultra-high-molecular-weight polyethylene (UHMWPE)—is known to produce free radicals in the polymer. The result is slow oxidation and gradual undesirable embrittlement of the polymer on the shelf or in vivo. IMC detected oxidation-related heat and quantified an oxidation rate of ca. 1% per year in irradiated UHMWPE at room temperature in air (Charlebois et al. 2003). In a related study, the activation energy was determined from measurements at a series of temperatures (Hardison et al. 2003).
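Determining activation energy from rates measured at several temperatures, as in the study just cited, is a standard Arrhenius analysis: fit ln(rate) against 1/T and read the slope. A minimal sketch with hypothetical rate data (the actual UHMWPE values are not reproduced here):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy(temps_k, rates):
    """Arrhenius analysis: fit ln(rate) vs 1/T; slope = -Ea/R.
    Returns Ea in kJ/mol.  Rates may be in any consistent unit
    (e.g. uW per gram of specimen at comparable conversion)."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_k), np.log(rates), 1)
    return -slope * R / 1000.0

# Hypothetical rates at four temperatures, rising with T:
temps = [298.15, 310.15, 323.15, 338.15]   # K
rates = [0.5, 1.4, 4.1, 13.0]              # arbitrary units
print(activation_energy(temps, rates))     # Ea in kJ/mol (~68 here)
```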
IMC is also of great utility in evaluating the "runaway potential" of materials which are significant fire or explosion hazards. For example, it has been used to determine the autocatalytic decomposition kinetics of cumene hydroperoxide (CHP), an intermediate used in the chemical industry whose sudden decomposition has caused a number of fires and explosions. Fig. 4 shows the IMC data documenting thermal decomposition of CHP at 5 different temperatures (Chen et al. 2008).
Biology and medicine
The term metabolismics can be used to describe studies of the quantitative measurement of the rate at which heat is produced or consumed vs. time by cells (including microbes) in culture, by tissue specimens, or by small whole organisms. As described subsequently, metabolismics can be useful as a diagnostic tool; especially in either (a) identifying the nature of a specimen from its heat flow vs. time signature under a given set of conditions, or (b) determining the effects of e.g. pharmaceutical compounds on metabolic processes, organic growth or viability. Metabolismics is related to metabolomics. The latter is the systematic study of the unique chemical fingerprints that specific cellular processes leave behind; i.e. the study of their small-molecule metabolite profiles. When IMC is used to determine metabolismics, the products of the metabolic processes studied are subsequently available for metabolomics studies. Since IMC does not employ biochemical or radioactive markers, the post-IMC specimens consist only of metabolic products and remaining culture medium (if any was used). If metabolismics and metabolomics are used together, they can provide a comprehensive record of a metabolic process taking place in vitro: its rate and energetics, and its metabolic products.
To determine metabolismics using IMC, there must of course be sufficient cells, tissue or organisms initially present (or present later if replication is taking place during IMC measurements) to generate a heat flow signal above a given instrument's detection limit. A landmark 2002 general paper on the topic of metabolism provides an excellent perspective from which to consider IMC metabolismic studies (see Bibliography, West, Woodruff and Brown 2002). It describes how metabolic rates are related and how they scale over the entire range from "molecules and mitochondria to cells and mammals". Importantly for IMC, the authors also note that while the metabolic rate of a given type of mammalian cell in vivo declines markedly with increasing animal size (mass), the size of the donor animal has no effect on the metabolic rate of the cell when cultured in vitro.
Cell and tissue biology
Mammalian cells in culture have a metabolic rate of ca. 30×10−12 W/cell (Figs. 2 and 3 in Bibliography: West, Woodruff and Brown 2002). By definition, IMC instruments have a sensitivity of at least 1×10−6 W (i.e. 1 μW). Therefore, the metabolic heat of ca. 33,000 cells is detectable. Based on this sensitivity, IMC was used to perform a large number of pioneering studies of cultured mammalian cell metabolismics in the 1970s and 1980s in Sweden. One paper (Monti 1990) serves as an extensive guide to work done up until 1990. It includes explanatory text and 42 references to IMC studies of heat flow from cultured human erythrocytes, platelets, lymphocytes, lymphoma cells, granulocytes, adipocytes, skeletal muscle, and myocardial tissue. The studies were done to determine how and where IMC might be used as a clinical diagnostic method and/or provide insights into metabolic differences between cells from healthy persons and persons with various diseases or health problems.
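The ca. 33,000-cell figure follows directly from dividing instrument sensitivity by per-cell heat output; a one-line check:

```python
def detectable_cells(sensitivity_w: float = 1e-6,
                     heat_per_cell_w: float = 30e-12) -> float:
    """Minimum number of cells whose combined metabolic heat reaches the
    instrument detection limit.  Defaults are the figures quoted above:
    1 uW sensitivity and ca. 30 pW per cultured mammalian cell."""
    return sensitivity_w / heat_per_cell_w

print(round(detectable_cells()))  # ~33333, i.e. ca. 33,000 cells
```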
Developments since ca. 2000 in IMC (e.g. massively parallel instruments, real-time, computer-based storage and analysis of heat flow data) have stimulated further use of IMC in cultured cell biology. For example, IMC has been evaluated for assessing antigen-induced lymphocyte proliferation (Murigande et al. 2009) and revealed aspects of proliferation not seen using a conventional non-continuous radioactive marker assay method. IMC has also been applied to the field of tissue engineering. One study (Santoro et al. 2011) demonstrated that IMC could be used to measure the growth (i.e. proliferation) rate in culture of human chondrocytes harvested for tissue engineering use. It showed that IMC can potentially serve to determine the effectiveness of different growth media formulations and also determine whether cells donated by a given individual can be grown efficiently enough to consider using them to produce engineered tissue.
IMC has also been used to measure the metabolic response of cultured macrophages to surgical implant wear debris. IMC showed that the response was stronger to μm size range particles of polyethylene than to similarly sized Co alloy particles (Charlebois et al. 2002). A related paper covers the general topic of applying IMC in the field of synthetic solid materials used in surgery and medicine (Lewis and Daniels 2003).
At least two studies have suggested IMC can be of substantial use in tumor pathology. In one study (Bäckman 1990), the heat production rate of T-lymphoma cells cultured in suspension was measured. Changes in temperature and pH induced significant variations, but stirring rate and cell concentration did not. A more direct study of possible diagnostic use (Kallerhoff et al. 1996) produced promising results. For the uro-genital tissue biopsy specimens studied, the results showed
"it is possible to differentiate between normal and tumorous tissue samples by microcalorimetric measurement based on the distinctly higher metabolic activity of malignant tissue. Furthermore, microcalorimetry allows a differentiation and classification of tissue samples into their histological grading."
Toxicology
As of 2012, IMC has not become widely used in cultured cell toxicology, even though it has been used periodically and successfully since the 1980s. IMC is advantageous in toxicology when it is desirable to observe cultured cell metabolism in real time and to quantify the rate of metabolic decline as a function of the concentration of a possibly toxic agent. One of the earliest reports (Ankerst et al. 1986) of IMC use in toxicology was a study of antibody-dependent cellular cytotoxicity (ADCC) against human melanoma cells by various combinations of antiserum, monoclonal antibodies and peripheral blood lymphocytes as effector cells. Kinetics of melanoma cell metabolic heat flow vs. time in closed ampoules were measured for 20 hours. The authors concluded that
"...microcalorimetry is a sensitive and particularly suitable method for the analysis of cytotoxicity kinetics."
IMC is also being used in environmental toxicology. In an early study (Thorén 1992), the toxicity of particles of MnO2, TiO2 and SiO2 (silica) against monolayers of alveolar macrophages was evaluated. IMC results were in accord with results obtained by fluorescein ester staining and microscopic image analysis—except that IMC showed toxic effects of quartz not discernable by image analysis. This latter observation—in accord with known alveolar effects—indicated to the authors that IMC was the more sensitive technique.
Much more recently (Liu et al. 2007), IMC was shown to provide dynamic metabolic data assessing the toxicity of Cr(VI) from potassium chromate against fibroblasts. Fig. 5 shows baseline results determining the metabolic heat flow from cultured fibroblasts prior to assessing the effects of Cr(VI). The authors concluded that
"Microcalorimetry appears to be a convenient and easy technique for measuring metabolic processes...in...living cells. As opposed to standard bioassay procedures, this technique allows continuous measurements of the metabolism of living cells. We have thus shown that Cr(VI) impairs metabolic pathways of human fibroblasts and particularly glucose utilization."
Simple closed ampoule IMC has also been used and advocated for assessing the cultured-cell toxicity of candidate surgical implant materials—and can thus serve as a biocompatibility screening method. In one study (Xie et al. 2000), porcine renal tubular cells in culture were exposed to both polymers and titanium metal in the form of "microplates" having known surface areas of a few cm2. The authors concluded that IMC
"...is a rapid method, convenient to operate and with good reproducibility. The present method can in most cases replace more time-consuming light and electron microscopic investigations for quantitating of adhered cells."
In another implant materials study (Doostmohammadi et al. 2011) both a rapidly growing yeast culture and a human chondrocyte culture were exposed to particles (diam.< 50 μm) of calcium hydroxyapatite (HA) and bioactive (calcium-containing) silica glass. The glass particles slowed or curtailed yeast growth as a function of increasing particle concentration. The HA particles had much less effect and never entirely curtailed yeast growth at the same concentrations. The effects of both particle types on chondrocyte growth were minimal at the concentration employed. The authors concluded that
"The cytotoxicity of particulate materials such as bioactive glass and hydroxyapatite particles can be evaluated using the microcalorimetry method. This is a modern method for in vitro study of biomaterials biocompatibility and cytotoxicity which can be used alongside the old conventional assays."
Microbiology
Publications describing use of IMC in microbiology began in the 1980s (Jesperson 1982). While some IMC microbiology studies have been directed at viruses (Heng et al. 2005) and fungi (Antoci et al. 1997), most have been concerned with bacteria. A recent paper (Braissant et al. 2010) provides a general introduction to IMC metabolismic methods in microbiology and an overview of applications in medical and environmental microbiology. The paper also explains how heat flow vs. time data for bacteria in culture are an exact expression—as they occur over time—of the fluctuations in microorganism metabolic activity and replication rates in a given medium (Fig. 6).
In general, bacteria are about 1/10 the size of mammalian cells and produce perhaps 1/10 as much metabolic heat—i.e. ca. 3×10−12 W/cell. Thus, compared to mammalian cells (see above), ca. 10× as many bacteria—ca. 330,000—must be present to produce detectable heat flow—i.e. 1 μW. However, many bacteria replicate orders of magnitude more rapidly in culture than mammalian cells, often doubling their number in a matter of minutes (see Bacterial growth). As a result, a small initial number of bacteria in culture, initially undetectable by IMC, rapidly produces a detectable number. For example, 100 bacteria doubling every 20 minutes will in less than 4 hours produce >330,000 bacteria and thus an IMC-detectable heat flow. Consequently, IMC can be used for easy, rapid detection of bacteria in the medical field. Examples include detection of bacteria in human blood platelet products (Trampuz et al. 2007) and urine (Bonkat et al. 2011) and rapid detection of tuberculosis (Braissant et al. 2010, Rodriguez et al. 2011). Fig. 7 shows an example of detection times of tuberculosis bacteria as a function of the initial amount of bacteria present in a closed IMC ampoule containing a culture medium.
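The worked example above generalizes: given an initial count, a doubling time, and a per-cell heat output, the time to reach an IMC-detectable population follows from simple exponential growth. A sketch using the figures quoted in this section:

```python
import math

def detection_time_h(n0: float, doubling_min: float,
                     heat_per_cell_w: float = 3e-12,
                     sensitivity_w: float = 1e-6) -> float:
    """Hours until an exponentially growing culture's total heat flow
    reaches the detection limit.  Defaults use the figures above:
    ca. 3 pW per bacterium and 1 uW instrument sensitivity."""
    n_detect = sensitivity_w / heat_per_cell_w      # ~333,333 bacteria
    doublings = math.log2(n_detect / n0)
    return doublings * doubling_min / 60.0

print(detection_time_h(100, 20))  # ~3.9 h, matching the worked example
```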
For microbes in growth media in closed ampoules, IMC heat flow data can also be used to closely estimate basic microbial growth parameters; i.e. maximum growth rate and duration time of the lag phase before maximum growth rate is achieved. This is an important special application of the basic analysis of these parameters explained previously (Overview: Data Obtained).
Unfortunately, the IMC literature contains some published papers in which the relation between heat flow data and microbial growth in closed ampoules has been misunderstood. However, in 2013 an extensive clarification was published, describing (a) details of the relation between IMC heat flow data and microbial growth, (b) selection of mathematical models which describe microbial growth and (c) determination of microbial growth parameters from IMC data using these models (Braissant et al. 2013).
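A common way to extract lag time and maximum growth rate is to fit a growth model to the cumulative heat curve. The sketch below uses the modified Gompertz form as one plausible choice (Braissant et al. 2013 discuss model selection; the parameterization, units and initial guesses here are illustrative, not prescriptive):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, q_max, mu_max, lag):
    """Modified Gompertz model for cumulative heat Q(t):
    q_max  - asymptotic total heat (J)
    mu_max - maximum rate, proportional to maximum growth rate
    lag    - lag time (h)
    """
    e = np.e
    return q_max * np.exp(-np.exp(mu_max * e / q_max * (lag - t) + 1))

def fit_growth(t_hours, q_joules):
    """Fit the model to cumulative heat data; returns (q_max, mu_max, lag)."""
    p0 = [max(q_joules), 1.0, 1.0]        # rough initial guesses
    popt, _ = curve_fit(gompertz, t_hours, q_joules, p0=p0, maxfev=10000)
    return popt

# Synthetic demo: a noiseless Gompertz curve is recovered by the fit.
t = np.linspace(0, 48, 97)                # 48 h, 30-min sampling
q = gompertz(t, 5.0, 0.8, 6.0)            # hypothetical parameters
print(fit_growth(t, q))                   # ~[5.0, 0.8, 6.0]
```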
Pharmacodynamics
In a logical extension of the ability of IMC to detect and quantify bacterial growth, known concentrations of antibiotics can be added to a bacterial culture, and IMC can then be used to quantify their effects on viability and growth. Closed ampoule IMC can easily capture basic pharmacologic information—e.g. the minimum inhibitory concentration (MIC) of an antibiotic needed to stop growth of a given organism. In addition, it can simultaneously provide dynamic growth parameters—lag time and maximum growth rate (see Fig. 2; Howell et al. 2011, Braissant et al. 2013)—which assess mechanisms of action. Bactericidal action (see Bactericide) is indicated by an increased lag time as a function of increasing antibiotic concentration, while bacteriostatic action (see Bacteriostatic agent) is indicated by a decrease in growth rate with concentration. The IMC approach to antibiotic assessment has been demonstrated for a number of types of bacteria and antibiotics (von Ah et al. 2009). Closed ampoule IMC can also rapidly differentiate between normal and resistant strains of bacteria such as Staphylococcus aureus (von Ah et al. 2008, Baldoni et al. 2009). IMC has also been used to assess the effects of disinfectants on the viability of mouth bacteria adhered to dental implant materials (Astasov-Frauenhoffer et al. 2011). In a related earlier study, IMC was used to measure the heat of adhesion of dental bacteria to glass (Hauser-Gerspach et al. 2008).
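Given lag times and maximum growth rates fitted at a series of antibiotic concentrations, the mechanism-of-action reading described above can be encoded as a simple trend test. This is a heuristic sketch with illustrative logic, not a validated pharmacological classifier:

```python
import numpy as np

def classify_action(concentrations, lag_times_h, growth_rates):
    """Crude mechanism heuristic from IMC growth parameters, following
    the interpretation described above: the sign of the trend (slope)
    of each parameter vs antibiotic concentration.  Thresholds are
    illustrative; real analyses should account for noise and replicates.
    """
    lag_slope = np.polyfit(concentrations, lag_times_h, 1)[0]
    rate_slope = np.polyfit(concentrations, growth_rates, 1)[0]
    effects = []
    if lag_slope > 0:
        effects.append("bactericidal-like (lag increases with concentration)")
    if rate_slope < 0:
        effects.append("bacteriostatic-like (rate falls with concentration)")
    return effects or ["no concentration-dependent effect resolved"]

conc = [0, 1, 2, 4]                        # ug/ml, hypothetical
lags = [2.0, 3.5, 6.0, 11.0]               # h, rising with concentration
rates = [0.90, 0.90, 0.91, 0.90]           # 1/h, roughly constant
print(classify_action(conc, lags, rates))  # bactericidal-like only
```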
Analogous successful use of IMC to determine the effects of antitumor drugs on tumor cells in culture within a few hours has been demonstrated (Schön and Wadsö 1988). Rather than the closed-ampoule approach, an IMC setup was used which allowed drug injection into stirred specimens.
As of 2013, IMC has been used less widely in mammalian cell in vitro pharmacodynamic studies than in microbial studies.
Multicellular organisms
It is possible to use IMC to perform metabolismic studies of living multicellular organisms—if they are small enough to be placed in IMC ampoules (Lamprecht & Becker 1988). IMC studies have been made of insect pupa metabolism during ventilating movements (Harak et al. 1996) and effects of chemical agents on pupal growth (Kuusik et al. 1995). IMC has also proved effective in assessing the effects of aging on nematode worm metabolism (Braekman et al. 2002).
IMC has also proved highly useful for in vitro assessments of the effects of pharmaceuticals on tropical parasitic worms (Manneck et al. 2011-1, Manneck et al. 2011-2, Kirchhofer et al. 2011). An interesting feature of these studies is the use of a simple manual injection system for introducing the pharmaceuticals into sealed ampoules containing the worms. Also, IMC not only documents the general metabolic decline over time due to the drugs, but also the overall frequency of worm motor activity and its decline in amplitude over time, as reflected in fluctuations in the heat flow data.
Environmental biology
Because of its versatility, IMC can be an effective tool in the fields of plant and environmental biology. In an early study (Hansen et al. 1989), the metabolic rate of larch tree clone tissue specimens was measured. The rate was predictive of long-term tree growth rates, was consistent for specimens from a given tree and was found to correlate with known variations in the long-term growth of clones from different trees.
Bacterial oxalotrophic metabolism is common in the environment, particularly in soils. Oxalotrophic bacteria are capable of using oxalate as a sole carbon and energy source. Closed-ampoule IMC was used to study the metabolism of oxalotrophic soil bacteria exposed to both an optimized medium containing potassium oxalate as the sole carbon source and a model soil (Bravo et al. 2011). In the optimized medium, growth of six different strains of soil bacteria was easily monitored and reproducibly quantified and differentiated over a period of days. IMC measurement of bacterial metabolic heat flow in the model soil was more difficult, but a proof of concept was demonstrated.
Moonmilk is a white, creamy material found in caves. It is a non-hardening, fine crystalline precipitate from limestone and is composed mainly of calcium and/or magnesium carbonates. Microbes may be involved in its formation. It is difficult to infer microbial activities in moonmilk from standard static chemical and microscopic assays of moonmilk composition and structure. Closed ampoule IMC has been used to solve this problem (Braissant, Bindscheidler et al. 2011). It was possible to determine the growth rates of chemoheterotrophic microbial communities on moonmilk after the addition of various carbon sources simulating mixes that would be brought into contact with moonmilk due to snow melt or rainfall. Metabolic activity was high and comparable to that found in some soils.
Harris et al. (2012), studying differing fertilizer input regimes, found that, when expressed as heat output per unit soil microbial biomass, microbial communities under organic fertilizer regimes produced less waste heat than those under inorganic regimes.
Food science
IMC has been shown to have diverse uses in food science and technology. An overview (Wadsö and Galindo 2009) discusses successful applications in assessing vegetable cutting wound respiration, cell death from blanching, milk fermentation, microbiological spoilage prevention, thermal treatment and shelf life. Another publication (Galindo et al. 2005) reviews the successful use of IMC for monitoring and predicting quality changes during storage of minimally processed fruits and vegetables.
IMC has also proven effective in accomplishing enzymatic assays for orotic acid in milk (Anastasi et al. 2000) and malic acid in fruits, wines, other beverages and cosmetic products (Antonelli et al. 2008). It has been used to assess the efficacy of anti-browning agents on fresh-cut potatoes (Rocculi et al. 2007), and to assess the extent to which low-energy pulsed electric fields (PEFs) affect the heat of germination of barley seeds—important in connection with their use in producing malted beverages (Dymek et al. 2012).
See also
Calorimetry
Chemical thermodynamics
Differential scanning calorimetry
Isothermal titration calorimetry
Rate equation
Sorption calorimetry
Thermal analysis
Thermoelectric effect
Bibliography
Glasstone S, Laidler KJ, Eyring H (1941) The theory of rate processes: the kinetics of chemical reactions, viscosity, diffusion and electrochemical phenomena. McGraw-Hill (New York). 611p.
Johnson FH, Eyring H, Stover BJ (1974) The theory of rate processes in biology and medicine. Wiley (New York), 703p.
Lavoisier A & Laplace PS (1780) Mémoire sur la chaleur. Académie des Sciences, Paris.
Brown ME, Editor (1998) Vol. 1 Principles and Practice (691p.), in Handbook of Thermal Analysis and Calorimetry. Gallagher PK (Series Editor). Elsevier (London).
Brown ME and Gallagher PK, Editors (2003) Vol. 2 Applications to Inorganic and Miscellaneous Materials (905p.), in Handbook of Thermal Analysis and Calorimetry. Gallagher PK (Series Editor). Elsevier (London).
Cheng SZD, Editor (2002) Vol. 3 Applications to Polymers and Plastics (828p.) in Handbook of Thermal Analysis and Calorimetry. Gallagher PK (Series Editor). Elsevier (London).
Kemp RB, Editor (1999) Vol. 4 From Macromolecules to Man (1032p.), in Handbook of Thermal Analysis and Calorimetry. Gallagher PK (Series Editor). Elsevier (London).
Microcalorimetry Compendium Vol. 1: Proteins, Life & Biological Sciences, Pharmaceuticals (2009). TA Instruments, Inc. (New Castle DE, USA).
Microcalorimetry Compendium Vol. 2: Cement, Energetics, Material, Other (2009). TA Instruments, Inc. (New Castle DE, USA).
References
External links
Some sources for IMC instruments, accessories, supplies, and software
Calmetrix
TA Instruments
Setaram
Symcel
Flow Adsorption Microcalorimeter instrument configurations Microscal Ltd (archived 2005)
Biological processes
Calorimetry
Chemical processes
Heat transfer
Materials science | Isothermal microcalorimetry | Physics,Chemistry,Materials_science,Engineering,Biology | 10,866 |
489,971 | https://en.wikipedia.org/wiki/Bionicle | Bionicle (stylized all caps) was a line of Lego construction toys marketed primarily towards 8-to-16-year-olds. The line was launched in 2001, originally as a subsidiary of Lego's Technic series. Over the following decade, it became one of the company's biggest-selling properties, turning into a franchise and subsequently becoming one of the factors in saving Lego from its financial crisis of the late 1990s. Despite a planned twenty-year tenure, the theme was discontinued in 2010, citing low sales, but was rebooted in 2015 for a further two years.
Unlike previous Lego themes, Bionicle was accompanied by an original story that was told across a multimedia spectrum, including books, comics, games, and animated films. It primarily depicts the exploits of the Toa, heroic biomechanical beings with innate elemental abilities whose duty is to protect and maintain peace throughout their universe. Bionicle's success prompted later Lego themes to use similar story-telling methods.
History
Concept
After suffering a decade-long downturn in the 1990s, the Lego Group went forward with the idea that a theme with a storyline behind it would appeal to consumers. Its first attempt, a licensed theme based on the space opera franchise Star Wars, became an instant success; however, the royalty payments to Lucasfilm marginalized Lego's profits, prompting the company to conceive its own story-driven themes.
The concept for Bionicle originated from an idea by co-creator Christian Faber named "Cybots", a line of humanoid action figures with attachable limbs and ball-and-socket joints. Faber recalled: "I was sitting with Lego Technic and thought I would love to build a character instead of a car. I thought of this biological thing: The human body is built from small parts into a functional body just like a model. What if you got a box full of spare parts and built a living thing?". He pitched the idea to Lego, but it was initially implemented as the themes Slizer/Throwbots (1999) and RoboRiders (2000).
A new project called "BoneHeads of Voodoo Island" was later conceived by Faber and Lego employees Bob Thompson and Martin Riber Andersen from a brief by Erik Kramer that was sent to outside writers, one of whom was Alastair Swinnerton, who rewrote the concept and was later invited to pitch it to the Lego Group at their headquarters in Billund, Denmark. The revised concept was well received and Swinnerton was commissioned to expand his initial pitch into a full 'bible'. On his second visit to Billund, the project was given approval and entitled "Bionicle" at an internal Lego meeting (a portmanteau of "biological chronicle", with reference to the word "bionics"). The names "BioKnights" and "Afterman" were also considered prior to the finalization of the brand.
To accompany the theme, Lego worked with Swinnerton and the creative agency Advance to create an elaborate story with extensive lore centering on half-organic, half-robotic characters, telling it across a vast multimedia spectrum including comic books, novels, games, films and online content. Māori culture became a key inspiration behind the story and the theme at large. The use of tropical environments and characters based on classical elements was also carried over from Slizer/Throwbots and RoboRiders. The toys themselves would be an expansion of the Lego Technic sub-series, featuring the same building system already used in the aforementioned themes. One particular element – the then-innovative ball-and-socket system which created free joint movement – would feature heavily in Bionicle's run and later across other Lego themes.
Launch and success
The first wave of Bionicle sets was initially launched in December 2000 in Europe and Australasia as a "test market" to predict how well the series would sell in North America. The official website, explaining the premise of Bionicle, also debuted around the same time. After a positive reception, Bionicle premiered in North America in mid-2001, where it generated massive success and earned the Lego Group £100 million in its first year. New sets were released every six months, ranging from buildable action figures to play sets and vehicles, and would gradually increase in size and flexibility with every new wave. Collectibles such as weapon ammo and the "Kanohi" masks worn by certain characters were also sold; some became rare and valuable and contained secret codes that, when entered on the official Bionicle website, provided the user with "Kanoka Points" giving access to exclusive membership material.
As Bionicle's popularity rose, it became one of Lego's most successful properties, accounting for nearly all of their financial turnover from the previous decade. It was named as the #1 Lego theme in 2003 and 2006 in terms of sales and popularity, with other Lego themes at the time failing to match the profits generated by Bionicle. Its popularity led to high web traffic on its official website, averaging more than one million page views per month, and further kinds of merchandise such as clothes, toiletries and fast-food restaurant toy collectibles.
Discontinuation
In November 2009, Lego made the decision to cease production on new Bionicle sets after a final wave was released in 2010. The decision was made due to recent low sales and a lack of new consumer interest in the brand, thought to be brought on by its decade-long backstory and extensive lore.
At his request, long-term Bionicle comic book writer and story contributor Greg Farshtey was given permission to continue the Bionicle storyline, with chapters for new serials arranged to be posted regularly on the website BionicleStory.com. However, Farshtey stopped posting new content in 2011 due to his other commitments and the website was shut down in 2013, leaving a number of serials incomplete. Farshtey regularly contributed new story details and "canonization" of fan made models via online forums and message boards until his departure from Lego in 2022. Nevertheless, he continues to play an active role in the Bionicle fan community.
Reboot
Work on a reboot of Bionicle began in 2012. Matt Betteker, a junior designer who had previously worked on Hero Factory, a successor theme to the original Bionicle line, was promoted to senior designer for the project. The theme's comeback was announced in September 2014, with the first wave of sets and story details revealed at New York Comic Con on October 9. Dubbed colloquially as "Generation 2" by fans and later Lego themselves, the new storyline was based on the premise of the original, albeit with simplified lore and a smaller trans-media platform.
The reboot launched in January 2015 to a mixed reception from toy critics and fans of the original Bionicle franchise, with the playability of the new sets and the inspiration taken from the theme's first wave being praised, but the simplified story and undeveloped characters receiving less positive feedback.
Lego discontinued the reboot in late 2016, citing low sales, despite plans to release new sets through to at least 2017. It is widely believed by fans that a lack of marketing and reliance on fans to promote the theme, coupled with the new simplified story, were factors in Generation 2's decreased interest.
Legacy
During and after its run, Bionicle became the inspiration for several other Lego themes including Knights Kingdom, Exo-Force, Ninjago, Legends of Chima, and Nexo Knights. They all followed a similar narrative about a group of heroes, each with varying abilities, battling the henchmen of an ally-turned-foe in a fantasy world. Bionicle writer Greg Farshtey would also go on to write material for some of these themes, most notably Ninjago.
A direct successor theme to Bionicle, Hero Factory, was launched in 2010. It continued to use the building system introduced in Bionicle before evolving into the Character and Creature Building System (CCBS) that would later be carried over into other Lego sets and eventually Bionicle's 2015 reintroduction. Hero Factory itself ceased after 2014.
Despite its ending as a toyline, Bionicle's popularity has persisted and was acknowledged by Lego in its 90th anniversary poll, winning the first round. A promotional Lego System set celebrating Bionicle was released in 2023, featuring brick-built versions of the characters of Tahu and Takua, to which the community responded well. Additionally, fans have engaged in several community-based projects, including creating a "Bionicle Day" for August 10 (stylized as "810NICLE Day"), story and media archives, and several fan games.
Story
Generation 1 (2001–2010)
Set in a science fantasy universe featuring a diversity of biomechanical beings, the main story depicts the exploits of the Toa, heroes with elemental powers whose duty is to protect the Matoran, the prime populace of their world, and reawaken their god-like guardian, the Great Spirit Mata Nui, who was forced into a coma by the actions of his antagonist "brother", the Makuta.
The first story arc (2001–2003) takes place on the tropical island of Mata Nui, named after the Great Spirit, and deals with the arrival of the six Toa Mata and their adventures in protecting the Matoran villagers from Makuta's minions. They are later transformed into the more-powerful Toa Nuva. A heavy emphasis is placed on the Kanohi masks worn by the Toa, which supplement their elemental powers with abilities such as super-strength, super-speed, levitation and water-breathing. The second arc (2004–2005) acts as a prequel to the first; set on an island city called Metru Nui, it follows another group of Toa who would go on to become Turaga, the Matoran's elders, and explains how they all came to settle on the island of Mata Nui. The culminating third arc (2006–2008) sees a new team of Toa (transformed from Matoran) set out on a quest to find the Mask of Life, an artifact that can save the now-dying Mata Nui. A fourth arc (2009), originally envisioned as a soft reboot of the franchise, introduces the desert world of Bara Magna, its inhabitants, and Mata Nui's origins. However, all planned storylines were scrapped after Lego announced that Bionicle would be discontinued and was replaced with a new one that concluded the main narrative in 2010.
Characters such as the Toa and Matoran are typically divided into tribes based on six "primary" elements: fire, water, air, earth, ice, and stone. Less common "secondary" elements, such as light, gravity and lightning, began being introduced in 2003. The 2009 storyline, which features a different society, uses a similar grouping method for its Glatorian and Agori characters.
The entire saga was developed by a team of Lego employees led by Bob Thompson for a multimedia platform spanning animations, comic books, novels, console and online games, short stories, and a series of direct-to-video films. The majority of comics and novels were written by Greg Farshtey, who also published a number of in-character blogs, serials, and podcasts that expanded the franchise lore. After the toyline was discontinued, publication of these serials continued through to 2011 before halting abruptly due to Farshtey's other commitments.
Generation 2 (2015–2016)
A reboot of the original story, the revival chronicles the adventures of six elemental Toa heroes who protect the bio-mechanical inhabitants of the mystical island of Okoto from Makuta and his minions. Characters are again divided into six elemental tribes: fire, water, jungle (changed from air for creative reasons), earth, ice, and stone. The reboot's multimedia spectrum was scaled back in comparison to the first generation's – online animations, a series of books and graphic novels authored by Ryder Windham, and the animated Netflix series Lego Bionicle: The Journey to One (2016) detail the narrative. Christian Faber and Greg Farshtey served as creative consultants.
Reception
Initially, the idea of Bionicle faced resistance from company traditionalists, as the Lego Group had no experience of marketing a story-based brand of its own. The "war-like" appearance of the characters also went against the company's values of creating sets without themes of modern warfare or violence. Lego addressed this by stating that the theme was about good versus evil: "good hero warriors" designed to combat "evil enemy fighters" in a mythical universe, so children are not encouraged to fight each other.
The Bionicle franchise was well received throughout its run and became one of the Lego Group's biggest-selling properties. At the time of its launch, one reviewer described the sets as "a good combination of assembly and action figure", and the line achieved first-year sales of £100 million. Bionicle later received a Toy of the Year Award for Most Innovative Toy in 2001 from the Toy Industry Association.
Bionicle's rapid success had a major impact on the Lego Company. Stephanie Lawrence, the global director of licensing for Lego, stated: "We've created an evergreen franchise to complement the many event-based properties on the children's market. An increasing number of category manufacturers want to tap into the power of the Bionicle universe, and the key for us now is to manage the excitement to stay true to the brand and the lifestyle of our core consumer".
Since its launch, toy critics have said that Bionicle changed the way children think about and play with Lego products by combining "the best of Lego building with the story telling and adventure of an action figure". Toy statistics revealed that, as of 2009, 85% of American boys aged 6–12 had heard of Bionicle, while 45% owned the sets.
Māori language controversy
In 2002, several Māori iwi (tribes) from New Zealand were angered by Lego's lack of respect for some of their words, which were used to name certain characters, locations and objects in the Bionicle storyline. A letter of complaint was written, and the company agreed to change the names of certain story elements (e.g., the villagers originally known as "Tohunga" were renamed "Matoran") and reached an agreement with the Māori people to continue using a small minority of their words.
In the story, the reason for certain name changes was presented as a Naming Ceremony for certain Matoran after they performed heroic deeds (though the pronunciations remain the same), an example being the name change of 'Huki' to 'Hewkii'. Other names such as "Toa" meaning "Warrior", "Kanohi" meaning "Face" and "Kōpaka" meaning "Ice" were not changed.
See also
List of Bionicle media
Lego Technic
References
External links
BioMedia Project, an online archive of BIONICLE media
BIONICLEsector01.com, an external wiki
BZPower.info, a LEGO and BIONICLE news site
Wall of History, an online compendium of BIONICLE story content
2000s toys
2010s toys
2020s toys
Mass media franchises
Action figures
Fiction about cyborgs
DC Comics titles
Fictional cyborgs
Fiction about parasites
Fiction about robots
Bionicle
Products and services discontinued in 2010
Products and services discontinued in 2016
Products and services discontinued in 2023
Products introduced in 2001
Products introduced in 2015
Products introduced in 2023
Toy controversies | Bionicle | Biology | 3,217 |
56,209,584 | https://en.wikipedia.org/wiki/Trametes%20africana | Trametes africana is a poroid bracket fungus in the family Polyporaceae. It was described as new to science in 2004 by Norwegian mycologist Leif Ryvarden. It is found in Africa, where it has been recorded from Cameroon, Ethiopia, Kenya, Rwanda and Uganda. The fungus is characterized by its perennial habit and hard woody fruit bodies that become reddish to bay in colour with a waxy surface texture around the base. The pore surface and context are brownish to yellowish. Spores made by the fungus are cylindrical, hyaline, and thin-walled, measuring 5–8 by 2.5–3.3 μm.
See also
List of Trametes species
References
Polyporaceae
Fungi described in 2004
Fungi of Africa
Taxa named by Leif Ryvarden
Fungus species | Trametes africana | Biology | 164 |
4,009,928 | https://en.wikipedia.org/wiki/Edinburgh%20Parallel%20Computing%20Centre | EPCC, formerly the Edinburgh Parallel Computing Centre, is a supercomputing centre based at the University of Edinburgh. Since its foundation in 1990, its stated mission has been to accelerate the effective exploitation of novel computing throughout industry, academia and commerce.
The University has supported high performance computing (HPC) services since 1982. Through EPCC, it has supported the UK's national high-end computing system, ARCHER (Advanced Research Computing High End Resource), and the UK Research Data Facility (UK-RDF).
Overview
EPCC's activities include: consultation and software development for industry and academia; research into high-performance computing; hosting advanced computing facilities and supporting their users; training and education.
The Centre offers two Masters programmes: MSc in High-Performance Computing and MSc in High-Performance Computing with Data Science.
It is a member of the Globus Alliance and, through its involvement with the OGSA-DAI project, it works with the Open Grid Forum DAIS-WG.
Around half of EPCC's annual turnover comes from collaborative projects with industry and commerce. In addition to privately funded projects with businesses, EPCC receives funding from Scottish Enterprise, the Engineering and Physical Sciences Research Council and the European Commission.
History
EPCC was established in 1990, following on from the earlier Edinburgh Concurrent Supercomputer Project and chaired by Jeffery Collins from 1991. From 2002 to 2016 EPCC was part of the University's School of Physics & Astronomy, becoming an independent Centre of Excellence within the University's College of Science and Engineering in August 2016.
It was extensively involved in all aspects of Grid computing, including developing Grid middleware and architecture tools to facilitate the uptake of e-Science, developing business applications, and collaborating in scientific applications and demonstration projects.
The Centre was a founder member of the UK's National e-Science Centre (NeSC), the hub of Grid and e-Science activity in the UK. EPCC and NeSC were both partners in OMII-UK, which offers consultancy and products to the UK e-Science community. EPCC was also a founder partner of the Numerical Algorithms and Intelligent Software Centre (NAIS).
EPCC has hosted a variety of supercomputers over the years, including several Meiko Computing Surfaces, a Thinking Machines CM-200 Connection Machine, and a number of Cray systems including a Cray T3D and T3E. In October 2023 it was selected as the preferred site of the first UK exascale computer.
High-performance computing facilities
EPCC manages a collection of HPC systems including ARCHER (the UK's national high-end computing system) and a variety of smaller HPC systems. These systems are all available for industry use on a pay-per-use basis.
Current systems hosted by EPCC include:
ARCHER2: As of 2021, the ARCHER2 facility is based around an HPE Cray EX supercomputer that provides the central computational resource, with an estimated peak performance of 28 petaflops. ARCHER2 runs the HPE Cray Linux Environment, which is based on SUSE Linux Enterprise Server 15.
Blue Gene/Q: As of 2013, this system consists of 6144 compute nodes housed in 6 frames. Each node comprises a 16-core PowerPC A2 processor with 16 GB of memory, giving a total of 98,304 cores and a peak performance of 1.26 petaflops (a quick arithmetic check of this figure appears below). It is part of the Distributed Research utilising Advanced Computing (DiRAC) consortium.
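As a sanity check on the quoted figure, theoretical peak performance is just cores × clock × flops per cycle. The sketch below assumes the Blue Gene/Q A2 core's published 1.6 GHz clock and 8 flops per cycle (4-wide fused multiply-add); these two numbers are assumptions not stated in the text above:

```python
def peak_tflops(nodes: int, cores_per_node: int,
                ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak performance in teraflops."""
    return nodes * cores_per_node * ghz * flops_per_cycle / 1000.0

# Blue Gene/Q figures quoted above, plus the assumed 1.6 GHz clock and
# 8 flops/cycle per core:
print(peak_tflops(6144, 16, 1.6, 8))   # ~1258 TFLOPS, i.e. ca. 1.26 PFLOPS
```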
Recent systems hosted by EPCC include:
ARCHER: From 2014 to 2020, EPCC hosted the ARCHER facility, a Cray XC30 supercomputer. It was supported by a number of additional components, including high-performance parallel filesystems, pre- and post-processing facilities, external login nodes, and UK-RDF, a large, resilient, long-term data facility. ARCHER ran the Cray Linux Environment (CLE), a Linux distribution based on SUSE Linux Enterprise Server (SLES). ARCHER was to be replaced in early 2020, but the replacement was delayed because the system was being used for research on the COVID-19 pandemic. During May 2020 it was taken offline as a result of a security incident. The ARCHER service ended on 27 January 2021.
HECToR: The 2010 system (Phase 2b, XT6) was the first production Cray XT6 24-core system in the world. It was contained in 20 cabinets and comprised a total of 464 compute blades. Each blade contained four compute nodes, each with two 12-core AMD Opteron 2.1 GHz Magny Cours processors, amounting to a total of 44,544 cores. Each 12-core socket was coupled with a Cray SeaStar2 routing and communications chip; this was upgraded in late 2010 to the Cray Gemini interconnect. Each 12-core processor shared 16 GB of memory, giving a system total of 59.4 TB. The theoretical peak performance of the phase 2b system was over 360 teraflops. HECToR was decommissioned in 2014.
HPCx: Launched in 2002, when it was ranked the ninth-fastest system in the world. HPCx was an IBM eServer p5 575 cluster, located at Daresbury Laboratory. It latterly operated under the complementarity capability computing scheme, preferably hosting workloads which could not easily be accommodated on the HECToR system. EPCC supported the HPCx and HECToR systems on behalf of the UK research councils, making them available to UK academics and industry.
Blue Gene: Launched in 2005, EPCC's Blue Gene/L was the first Blue Gene system available outside the United States. EPCC operated this 2048-compute-core service for the University of Edinburgh.
QCDOC: One of the world's most powerful systems dedicated to the numerical investigation of quantum chromodynamics, which describes the interactions between quarks and gluons. It was developed in collaboration with a consortium of UK lattice physicists (UKQCD), Columbia University (NY), Riken Brookhaven National Laboratory and IBM.
Maxwell: Maxwell was an innovative, award-winning FPGA-based supercomputer built by the FPGA High Performance Computing Alliance (FHPCA). Maxwell comprised 32 blades housed in an IBM BladeCenter. Each blade comprised one Xeon processor and two FPGAs. The FPGAs were connected by a fast communication subsystem which enabled the total of 64 FPGAs to be connected together in an 8×8 toroidal mesh. The processors were connected together via a PCI bus.
See also
DEISA: Distributed European Infrastructure for Supercomputing Applications.
References
External links
EPCC
Projects at EPCC
Computational science
Computer science institutes in the United Kingdom
Information technology organisations based in the United Kingdom
Research institutes in Edinburgh
Supercomputer sites
University of Edinburgh | Edinburgh Parallel Computing Centre | Mathematics | 1,426 |
Introduction:
S-phase-promoting factor (SPF) refers to the varying Cdk/cyclin complexes in eukaryotes that initiate S phase of the cell cycle. SPF activity peaks as the cell cycle transits from G1 phase to S phase, and is at its lowest once the cyclin subunits have been used up and broken down. Because the destruction of the cyclin subunits cannot be undone, the events of mitosis are irreversible; the cell cycle therefore proceeds through many steps, each of which must occur before the next can begin.
Control of S-phase-promoting factor:
The S-phase-promoting factor is controlled by regulating cyclin levels and by inhibitors active in other phases, such as G1. One class of inhibitors seen in G1, known as stoichiometric inhibitors, causes the inhibition of Cdk/cyclin complexes. Cyclin levels are regulated through the production and destruction of cyclin, which is governed by the phosphorylation and dephosphorylation of the anaphase-promoting complex (APC). This controls the rate of cyclin production and thereby regulates cyclin levels and SPF activity.
S-phase:
S phase is the stage of cell replication during which DNA is replicated, and it is initiated by the S-phase-promoting factor (SPF) cyclin complexes. DNA replication takes place because of the increase in SPF activity during the switch from G1 to S phase in the cell cycle. SPF also inhibits re-replication of chromosomes within a single cell cycle, which is important in preventing duplication of the genome.
Cyclins:
A variety of cyclins can be found, varying with the type of eukaryotic cell; however, two cyclins are found in all eukaryotes. The presence of cyclin-CDK is crucial for the replication of DNA to occur in S phase.
Studies of the contributions of individual cyclins to DNA replication make clear that certain cyclins hold significant influence over SPF activity. For instance, one study of Xenopus eggs indicated the importance of cyclins A, E and B with regard to SPF activity: different combinations of cyclins A and E influenced SPF activity, whereas cyclin B did not. Specifically, the concentrations of these cyclins contributed to SPF activity, which in turn affects DNA replication. A high concentration of cyclin A within the cell cycle causes mitosis to occur, during which DNA replication is inhibited. The type of cyclins present and their concentrations therefore have a direct effect on SPF activity in S phase, and hence on DNA replication.
The table lists different eukaryotes and the cyclin-CDK complexes needed by each species to initiate DNA replication, which occurs in S phase.
References
See also
Restriction point
Cell cycle
DNA replication
Enzymes
Agrocybe pediades, commonly known as the common fieldcap or common agrocybe, is a mushroom typically found in lawns and other types of grassland, but it can also grow on mulch containing horse manure. It was first described as Agaricus pediades by Swedish mycologist Elias Magnus Fries in 1821, and moved to its current genus Agrocybe by Victor Fayod in 1889. A synonym for this mushroom is Agrocybe semiorbicularis, though some guides list these separately. Technically it is edible, but it could be confused with poisonous species, including one of the genus Hebeloma.
Description
The mushroom cap is 1–3 cm wide, round to convex (flattening with age), pale yellow to orange-brown, smooth, sometimes cracked, and tacky with moisture but otherwise dry. The stalks are 2–5 cm tall and 1–3 mm wide. A partial veil quickly disappears, leaving traces on the cap's edge, but no ring on the stem. The cap's odor and taste are mild or mealy.
The spores are brown, elliptical, and smooth. Some experts divide A. pediades into several species, mainly by habitat and microscopic features, such as spore size. It is recognized by the large, slightly compressed basidiospores which have a large central germ pore, 4-spored basidia, subcapitate cheilocystidia and, rarely, the development of pleurocystidia.
Though technically edible, this species could be confused with poisonous species, including one of the genus Hebeloma; some field guides simply list it as inedible or say that it is not worthwhile.
Similar species
Similar species include Agrocybe praecox and A. putaminum.
References
Strophariaceae
Fungus species
Taxa named by Elias Magnus Fries
A pull-off test, also called a stud pull test, is a type of test in which an adhesive connection is made between a stud and a carrier (the object to be tested) by using a glue, such as an epoxy or polyester resin, that is stronger than the bond to be tested. The force required to pull the stud from the surface, together with the carrier, is measured. Simple mechanical hand-operated loading equipment has been developed for this purpose. When higher accuracy is required, tests can be performed with more advanced equipment called a bond tester, which provides more control and possibly automation. Applying the glue automatically and curing with UV light is the next step in automation. This methodology can also be used to measure direct tensile strength and/or the bond strength between two different layers.
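When the result is reported quantitatively, the bond (or tensile) strength is typically the failure force divided by the bonded area of the stud. A minimal sketch of that calculation; the stud diameter and load below are illustrative values, not taken from any cited standard:

```python
import math

def pull_off_strength(force_n: float, stud_diameter_mm: float) -> float:
    """Bond strength in MPa: failure force divided by the stud's circular bonded area."""
    area_mm2 = math.pi * (stud_diameter_mm / 2) ** 2
    return force_n / area_mm2  # N/mm^2 is numerically equal to MPa

# Illustrative numbers: a 20 mm stud pulled off at 700 N.
print(round(pull_off_strength(700.0, 20.0), 2))  # ~2.23 MPa
```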
MIL-STD-883 methods 2011.9 destructive bond pull test and 2031.1 flip chip pull off test apply, as well as JEDEC JESD22-B109.
Partial coring may be used, if necessary, to eliminate surface skin effects.
References
Concrete
Tests
SR-8278 is an experimental drug that was developed as an antagonist of Rev-ErbAα. It has been used to demonstrate potential applications of Rev-ErbAα antagonists in the treatment of conditions such as Duchenne muscular dystrophy and Alzheimer's disease.
See also
GSK4112
SR9009
SR9011
References
Isoquinolines
ROSAT (short for Röntgensatellit; in German X-rays are called Röntgenstrahlen, in honour of Wilhelm Röntgen) was a German Aerospace Center-led satellite X-ray telescope, with instruments built by West Germany, the United Kingdom and the United States. It was launched on 1 June 1990, on a Delta II rocket from Cape Canaveral, on what was initially designed as an 18-month mission, with provision for up to five years of operation. ROSAT operated for over eight years, finally shutting down on 12 February 1999.
In February 2011, it was reported that the satellite was unlikely to burn up entirely while re-entering the Earth's atmosphere due to the large amount of ceramics and glass used in construction. Parts as heavy as could impact the surface. ROSAT eventually re-entered the Earth's atmosphere on 23 October 2011 over the Bay of Bengal.
Overview
The Roentgensatellit (ROSAT) was a joint German, U.S. and British X-ray astrophysics project. ROSAT carried a German-built imaging X-ray Telescope (XRT) with three focal plane instruments: two German Position Sensitive Proportional Counters (PSPC) and the US-supplied High Resolution Imager (HRI). The X-ray mirror assembly was a grazing-incidence four-fold nested Wolter I telescope with an 84-cm diameter aperture and 240-cm focal length. The angular resolution was less than 5 arcseconds at half energy width (the "angle within which half of the electromagnetic radiation" is focused). The XRT assembly was sensitive to X-rays between 0.1 and 2 keV (kilo-electronvolts).
In addition, a British-supplied extreme ultraviolet (XUV) telescope, the Wide Field Camera (WFC), was coaligned with the XRT and covered the energy band from 0.042 to 0.21 keV (30 to 6 nm).
ROSAT's unique strengths were high spatial resolution, low-background, soft X-ray imaging for the study of the structure of low surface brightness features, and for low-resolution spectroscopy.
The ROSAT spacecraft was a three-axis stabilized satellite which could be used for pointed observations, for slewing between targets, and for performing scanning observations on great circles perpendicular to the plane of the ecliptic. ROSAT was capable of fast slews (180° in ~15 min), which made it possible to observe two targets on opposite hemispheres during each orbit. The pointing accuracy was 1 arcminute, with stability of less than 5 arcsec per second and a jitter radius of ~10 arcsec. Two CCD star sensors were used for optical position sensing of guide stars and attitude determination of the spacecraft. The post facto attitude determination accuracy was 6 arcsec.
The ROSAT mission was divided into two phases:
After a two-month on-orbit calibration and verification period, an all-sky survey was performed for six months using the PSPC in the focus of XRT, and in two XUV bands using the WFC. The survey was carried out in the scan mode.
The second phase consisted of the remainder of the mission and was devoted to pointed observations of selected astrophysical sources. In ROSAT's pointed phase, observing time was allocated to Guest Investigators from all three participating countries through peer review of submitted proposals. ROSAT had a design life of 18 months, but was expected to operate beyond its nominal lifetime.
Instruments
X-ray Telescope (XRT)
The main assembly was a German-built imaging X-ray Telescope (XRT) with three focal plane instruments: two German Position Sensitive Proportional Counters (PSPC) and the US-supplied High Resolution Imager (HRI). The X-ray mirror assembly was a grazing-incidence four-fold nested Wolter I telescope with an 84-cm diameter aperture and 240-cm focal length. The angular resolution was less than 5 arcsec at half energy width. The XRT assembly was sensitive to X-rays between 0.1 and 2 keV.
Position Sensitive Proportional Counters (two) (PSPC)
There were two Position Sensitive Proportional Counters (PSPC), PSPC-B and PSPC-C, mounted on a carousel within the focal plane turret of ROSAT. PSPC-C was intended to be the primary detector for the mission and was used for the bulk of the All-Sky Survey until it was destroyed during the AMCS glitch on 25 January 1991. After the glitch, PSPC-B was used for all further observations. Two further PSPCs (PSPC-A and PSPC-D) were used for ground calibration.
Each PSPC was a thin-window gas counter. Each incoming X-ray photon produced an electron cloud whose position and charge were detected using two wire grids. The photon position was determined with an accuracy of about 120 micrometers. The electron cloud's charge corresponded to the photon energy, with a nominal spectral bandpass of 0.1–2.4 keV.
High Resolution Imager (HRI)
The US-supplied High Resolution Imager used a crossed grid detector with a position accuracy of 25 micrometers. The instrument was damaged by solar exposure on 20 September 1998.
Wide Field Camera (WFC)
The Wide Field Camera (WFC) was a UK-supplied extreme ultraviolet (XUV) telescope co-aligned with the XRT and covered the wave band between 300 and 60 angstroms (0.042 to 0.21 keV).
Highlights
X-ray all-sky survey catalog, more than 150,000 objects
XUV all-sky survey catalog (479 objects)
Source catalogs from the pointed phase (PSPC and HRI) containing ~ 100,000 serendipitous sources
Detailed morphology of supernova remnants and clusters of galaxies.
Detection of shadowing of diffuse X-ray emission by molecular clouds.
Detection of pulsations from Geminga.
Detection of isolated neutron stars.
Discovery of X-ray emission from comets.
Observation of X-ray emission from the collision of Comet Shoemaker-Levy with Jupiter.
Catalogues
1RXS – the prefix used for the First ROSAT X-ray Survey (1st ROSAT X-ray Survey), a catalogue of astronomical objects visible to ROSAT in the X-ray spectrum.
See also
:Category:ROSAT objects
Launch
ROSAT was originally planned to be launched on the Space Shuttle but the Challenger disaster caused it to be moved to the Delta platform. This move made it impossible to recapture ROSAT with a Shuttle and bring it back to Earth.
End of operations
Originally designed for a five-year mission, ROSAT continued in its extended mission for a further four years before equipment failure forced an end to the mission. For some months after this, ROSAT completed its very last observations before being finally switched off on 12 February 1999.
On 25 April 1998, failure of the primary star tracker on the X-ray Telescope led to pointing errors that in turn caused solar overheating. A contingency plan and the necessary software had already been developed to utilise an alternative star tracker attached to the Wide Field Camera.
ROSAT was soon operational again, but with some restrictions to the effectiveness of its tracking and thus its control. It was severely damaged on 20 September 1998 when a reaction wheel in the spacecraft's Attitude Measuring and Control System reached its maximum rotational speed, causing loss of control during a slew and damaging the High Resolution Imager by exposure to the Sun. This failure was initially attributed to the difficulty of controlling the satellite outside its initial design parameters.
Allegations of cyber-attacks causing the failure
In 2008, NASA investigators were reported to have found that the ROSAT failure was linked to a cyber-intrusion at Goddard Space Flight Center. The root of this allegation is a 1999 advisory report by Thomas Talleur, senior investigator for cyber-security at NASA. This advisory is reported to describe a series of attacks from Russia that reached computers in the X-ray Astrophysics Section (i.e. ROSAT's) at Goddard, and took control of computers used for the control of satellites, not just a passive "snooping" attack. The advisory stated: "Hostile activities compromised [NASA] computer systems that directly and indirectly deal with the design, testing, and transferring of satellite package command-and-control codes."
The advisory is further reported as claiming that the ROSAT incident was "coincident with the intrusion" and that "Operational characteristics and commanding of the ROSAT were sufficiently similar to other space assets to provide intruders with valuable information about how such platforms are commanded". Without public access to the advisory, it is impossible to comment in detail. Even if it did describe a real intrusion, there is a plausible "no attack" explanation for ROSAT's failure, and the report is claimed to link the two incidents as no more than "coincident". However, NASA officials in charge of the day-to-day operations of the ROSAT mission at Goddard, including GSFC ROSAT Project Scientist Rob Petre, say definitively that no such incident occurred. Talleur's information appears to have come from one of his interns, who exaggerated a hacking incident on an office computer not related to flight operations.
IT security remains a significant issue for NASA. Other systems including the Earth Observing System have also been attacked.
Re-entry
In 1990, the satellite was put in an orbit at an altitude of and inclination of 53°. Due to atmospheric drag, the satellite slowly lost height until, in September 2011, the satellite was orbiting approximately above the Earth. On 23 October 2011 ROSAT re-entered the Earth's atmosphere sometime between 1:45 UTC and 2:15 UTC over the Bay of Bengal, east of India. There was no confirmation if pieces of debris had reached the Earth's surface.
Successor
eROSITA was launched on board the Russian-German Spektr-RG space observatory in 2019. It provides an updated all-sky survey of the X-ray sky, extending the energy range to 10 keV, increasing the sensitivity by a factor of 25, and improving the spatial and spectral resolution.
Notes
References
See also
List of X-ray space telescopes
External links
1RXS Catalog site
MPE – ROSAT Development
Spacecraft which reentered in 2011
Space telescopes
X-ray telescopes
Satellites of Germany
Spacecraft launched in 1990
The Loch Maree Hotel botulism poisoning of 1922 was the first recorded outbreak of botulism in the United Kingdom. Eight people died, with the resulting public inquiry linking all the deaths to the hotel's potted duck paste.
Loch Maree was a popular location for holiday makers, sports fishermen and romantic breaks, and interest in the event was heightened by the hotel's scenic location.
Background
Multiple deaths caused by botulism had occurred in 1920 in the United States when the origin was found to be glassed olives. Previously, there had been an outbreak associated with sausages from Württemberg in Germany.
Chronology
14 August 1922
On 14 August 1922, a group of 13 sports fishermen, two of their wives, 17 gillies and three mountain climbers were present at the Loch Maree Hotel on the edge of Loch Maree in the Scottish Highlands. Totalling 35, they took packed lunches prepared by the hotel staff.
Forty-eight people dined at the hotel that evening, and the gillies returned home to their cottages. There were no complaints of illness that evening.
15 August 1922
At 3 am, John Stewart, aged 70, who had been visiting the hotel regularly for the previous 40 years, was the first guest to fall ill with vomiting and drooped eyelids. Alex Robertson, the owner, called Dr Knox, the local physician. But by the time he came later in the morning, several others had become ill, and he in turn called upon T. K. Monro, Regius Professor of Medicine and Therapeutics at Glasgow University, who happened to be nearby. However, by the time Monro arrived, 9 pm that day, the first death had occurred. The doctors then went to visit a gillie who had become ill; and after returning to the hotel, witnessed the second fatality.
16 August 1922
By midday on 16 August 1922, there had been a third death, and a second gillie had also fallen sick. A number of physicians, many of whom were holidaying in and around the hotel, were now also involved. Baffled by the unusual presentations and the sudden deaths, they suspected food poisoning. The public prosecutor was informed, and the incident became a legal case and crime scene; the primary suspect was the hotel's owner until Monro suggested food-borne disease.
17–21 August 1922
By the end of 17 August, there had been six deaths. The final two deaths were on 21 August, just under a week after the first person fell ill. There were no other cases.
The symptoms were similar but varied in time of onset and in severity.
Investigation
The attending physicians had suspected food poisoning early in the week and began inquiries. Medical officer of health William MacLean reported to the public prosecutor that "the symptoms and course of the disease are identical in essentials with those described by Van Ermengem as his ‘second type’ in his investigations of sausage poisoning in the eighteen-nineties".
A second inquiry also began in the first week, on behalf of the Scottish health board by Dr Dittmer. This was to prevent further occurrences and to allay public fear.
All eight victims were the only members of the group to have eaten duck paste sandwiches. The duck paste was home-canned by hotel staff.
A detailed assessment was made of the making up of sandwiches, including how many sandwiches were placed in each packed lunch. Two glass pots of meat were used, each providing for nine or 12 sandwiches, or a total of 18 or 24. Twenty lots of sandwiches were made that day using the contents of two containers of potted wild duck paste and ham and tongue. Other sandwiches were also made, from ingredients including beef, ham, jam and eggs.
Investigators obtained the little that remained of the leftover food from the hotel rubbish. Botulinum toxin was found, and samples were sent to the distinguished microbiologist Bruce White. The most significant sample came from an ill gillie's leftover potted duck sandwich, which a colleague had buried in a garden to stop the hens eating it. Wrapped in paper, it was recovered fully intact during the night of 17 August.
The origins of the potted meat were traced. Four jars of the meat were purchased from Lazenby's of London in June 1922 and included "chicken, ham, and turkey, all mixed with tongue; and wild duck". The Lazenby production plant was scrutinised too, but all was found to be in good working order.
The public inquiry was held in March 1923, with intense publicity. At one point, a tabloid dubbed the hotel "The Hotel of Death".
Witness accounts
The future air marshal Sir Thomas Elmhirst described how, as a young guest at the hotel, he narrowly avoided being poisoned when he joined his brother and parents there for a week of fishing. Arriving late, they were given ham sandwiches rather than potted meat, which had run out, and thus were saved. Elmhirst recorded how a judge in the room next to him suffered a painful death while a major and his wife, who were guests, gave their sandwich to their gillie as they never ate potted meat. The gillie later died.
Reaction and legacy
As the first recorded outbreak of botulism in the United Kingdom, the incident resulted in extraordinary publicity. The area was a popular location for holiday makers, sports fishermen and romantic breaks and the poisoning occurred in the height of the holiday season. The hotel's scenic location made the unusual interest in the tragedy even more extensive.
The incident ultimately led to anti-toxins being made more easily available and packaging of preserved food was changed to allow easier identification of its origin. However, botulism did not become a notifiable disease in the UK until 1949. The events at Loch Maree are now used as a case study in the detection of food poisoning. Similar outbreaks are considered rare, with 17 incidents reported in the UK between 1922 and 2011, including a large outbreak in 1989 connected to hazelnut yoghurt and an episode in 2011 with a suspected link to a korma sauce.
References
1922 in Scotland
Botulism
Poisoning by drugs, medicaments and biological substances
In mathematics, a group is a set with an operation that associates an element of the set to every pair of elements of the set (as does every binary operation) and satisfies the following constraints: the operation is associative, it has an identity element, and every element of the set has an inverse element.
Many mathematical structures are groups endowed with other properties. For example, the integers with the addition operation form an infinite group, which is generated by a single element called $1$ (these properties characterize the integers in a unique way).
The concept of a group was elaborated for handling, in a unified way, many mathematical structures such as numbers, geometric shapes and polynomial roots. Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics.
In geometry, groups arise naturally in the study of symmetries and geometric transformations: The symmetries of an object form a group, called the symmetry group of the object, and the transformations of a given type form a general group. Lie groups appear in symmetry groups in geometry, and also in the Standard Model of particle physics. The Poincaré group is a Lie group consisting of the symmetries of spacetime in special relativity. Point groups describe symmetry in molecular chemistry.
The concept of a group arose in the study of polynomial equations, starting with Évariste Galois in the 1830s, who introduced the term group (French: groupe) for the symmetry group of the roots of an equation, now called a Galois group. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory (that is, through the representations of the group) and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become an active area in group theory.
Definition and illustration
First example: the integers
One of the more familiar groups is the set of integers
$$\mathbb{Z} = \{\ldots, -4, -3, -2, -1, 0, 1, 2, 3, 4, \ldots\},$$
together with addition. For any two integers $a$ and $b$, the sum $a + b$ is also an integer; this closure property says that $+$ is a binary operation on $\mathbb{Z}$. The following properties of integer addition serve as a model for the group axioms in the definition below.
For all integers $a$, $b$ and $c$, one has $(a + b) + c = a + (b + c)$. Expressed in words, adding $a$ to $b$ first, and then adding the result to $c$ gives the same final result as adding $a$ to the sum of $b$ and $c$. This property is known as associativity.
If $a$ is any integer, then $0 + a = a$ and $a + 0 = a$. Zero is called the identity element of addition because adding it to any integer returns the same integer.
For every integer $a$, there is an integer $b$ such that $a + b = 0$ and $b + a = 0$. The integer $b$ is called the inverse element of the integer $a$ and is denoted $-a$.
The integers, together with the operation $+$, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following definition is developed.
Definition
A group is a non-empty set $G$ together with a binary operation on $G$, here denoted "$\cdot$", that combines any two elements $a$ and $b$ of $G$ to form an element of $G$, denoted $a \cdot b$, such that the following three requirements, known as group axioms, are satisfied:
Associativity For all $a$, $b$, $c$ in $G$, one has $(a \cdot b) \cdot c = a \cdot (b \cdot c)$.
Identity element There exists an element $e$ in $G$ such that, for every $a$ in $G$, one has $e \cdot a = a$ and $a \cdot e = a$.
Such an element is unique (see below). It is called the identity element (or sometimes neutral element) of the group.
Inverse element For each $a$ in $G$, there exists an element $b$ in $G$ such that $a \cdot b = e$ and $b \cdot a = e$, where $e$ is the identity element.
For each $a$, the element $b$ is unique (see below); it is called the inverse of $a$ and is commonly denoted $a^{-1}$.
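For a finite set the three axioms can be checked exhaustively. A minimal sketch (not from the article) that tests a candidate binary operation, here addition modulo 3:

```python
from itertools import product

def is_group(elements, op):
    """Check the group axioms for a finite set with binary operation `op`."""
    # Closure: the operation must land back in the set.
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity: (a op b) op c == a op (b op c) for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity: some e with e op a == a op e == a for all a.
    identity = next((e for e in elements
                     if all(op(e, a) == a and op(a, e) == a for a in elements)), None)
    if identity is None:
        return False
    # Inverses: every a has some b with a op b == b op a == identity.
    return all(any(op(a, b) == identity and op(b, a) == identity for b in elements)
               for a in elements)

print(is_group({0, 1, 2}, lambda a, b: (a + b) % 3))  # True
print(is_group({1, 2}, lambda a, b: a * b))           # False: 2*2 = 4 is not in the set
```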
Notation and terminology
Formally, a group is an ordered pair of a set and a binary operation on this set that satisfies the group axioms. The set is called the underlying set of the group, and the operation is called the group operation or the group law.
A group and its underlying set are thus two different mathematical objects. To avoid cumbersome notation, it is common to abuse notation by using the same symbol to denote both. This reflects also an informal way of thinking: that the group is the same as the set except that it has been enriched by additional structure provided by the operation.
For example, consider the set of real numbers $\mathbb{R}$, which has the operations of addition $a + b$ and multiplication $ab$. Formally, $\mathbb{R}$ is a set, $(\mathbb{R}, +)$ is a group, and $(\mathbb{R}, +, \cdot)$ is a field. But it is common to write $\mathbb{R}$ to denote any of these three objects.
The additive group of the field $\mathbb{R}$ is the group whose underlying set is $\mathbb{R}$ and whose operation is addition. The multiplicative group of the field $\mathbb{R}$ is the group $\mathbb{R}^{\times}$ whose underlying set is the set of nonzero real numbers $\mathbb{R} \smallsetminus \{0\}$ and whose operation is multiplication.
More generally, one speaks of an additive group whenever the group operation is notated as addition; in this case, the identity is typically denoted $0$, and the inverse of an element $x$ is denoted $-x$. Similarly, one speaks of a multiplicative group whenever the group operation is notated as multiplication; in this case, the identity is typically denoted $1$, and the inverse of an element $x$ is denoted $x^{-1}$. In a multiplicative group, the operation symbol is usually omitted entirely, so that the operation is denoted by juxtaposition, $ab$ instead of $a \cdot b$.
The definition of a group does not require that $a \cdot b = b \cdot a$ for all elements $a$ and $b$ in $G$. If this additional condition holds, then the operation is said to be commutative, and the group is called an abelian group. It is a common convention that for an abelian group either additive or multiplicative notation may be used, but for a nonabelian group only multiplicative notation is used.
Several other notations are commonly used for groups whose elements are not numbers. For a group whose elements are functions, the operation is often function composition $f \circ g$; then the identity may be denoted id. In the more specific cases of geometric transformation groups, symmetry groups, permutation groups, and automorphism groups, the symbol $\circ$ is often omitted, as for multiplicative groups. Many other variants of notation may be encountered.
Second example: a symmetry group
Two figures in the plane are congruent if one can be changed into the other using a combination of rotations, reflections, and translations. Any figure is congruent to itself. However, some figures are congruent to themselves in more than one way, and these extra congruences are called symmetries. A square has eight symmetries. These are:
the identity operation leaving everything unchanged, denoted id;
rotations of the square around its center by 90°, 180°, and 270° clockwise, denoted by $r_1$, $r_2$ and $r_3$, respectively;
reflections about the horizontal and vertical middle line ($f_{\mathrm{h}}$ and $f_{\mathrm{v}}$), or through the two diagonals ($f_{\mathrm{d}}$ and $f_{\mathrm{c}}$).
These symmetries are functions. Each sends a point in the square to the corresponding point under the symmetry. For example, $r_1$ sends a point to its rotation 90° clockwise around the square's center, and $f_{\mathrm{v}}$ sends a point to its reflection across the square's vertical middle line. Composing two of these symmetries gives another symmetry. These symmetries determine a group called the dihedral group of degree four, denoted $\mathrm{D}_4$. The underlying set of the group is the above set of symmetries, and the group operation is function composition. Two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. The result of performing first $a$ and then $b$ is written symbolically from right to left as $b \circ a$ ("apply the symmetry $b$ after performing the symmetry $a$"). This is the usual notation for composition of functions.
A Cayley table lists the results of all such compositions possible. For example, rotating by 270° clockwise ($r_3$) and then reflecting horizontally ($f_{\mathrm{h}}$) is the same as performing a reflection along the diagonal ($f_{\mathrm{d}}$). Using the above symbols, highlighted in blue in the Cayley table:
$$f_{\mathrm{h}} \circ r_3 = f_{\mathrm{d}}.$$
Given this set of symmetries and the described operation, the group axioms can be understood as follows.
Binary operation: Composition is a binary operation. That is, $a \circ b$ is a symmetry for any two symmetries $a$ and $b$. For example,
$$r_3 \circ f_{\mathrm{h}} = f_{\mathrm{c}},$$
that is, rotating 270° clockwise after reflecting horizontally equals reflecting along the counter-diagonal ($f_{\mathrm{c}}$). Indeed, every other combination of two symmetries still gives a symmetry, as can be checked using the Cayley table.
Associativity: The associativity axiom deals with composing more than two symmetries: Starting with three elements $a$, $b$ and $c$ of $\mathrm{D}_4$, there are two possible ways of using these three symmetries in this order to determine a symmetry of the square. One of these ways is to first compose $a$ and $b$ into a single symmetry, then to compose that symmetry with $c$. The other way is to first compose $b$ and $c$, then to compose the resulting symmetry with $a$. These two ways must always give the same result, that is,
$$(a \circ b) \circ c = a \circ (b \circ c).$$
For example, $(f_{\mathrm{d}} \circ f_{\mathrm{v}}) \circ r_2 = f_{\mathrm{d}} \circ (f_{\mathrm{v}} \circ r_2)$ can be checked using the Cayley table.
Identity element: The identity element is $\mathrm{id}$, as it does not change any symmetry $a$ when composed with it either on the left or on the right.
Inverse element: Each symmetry has an inverse: $\mathrm{id}$, the reflections $f_{\mathrm{h}}$, $f_{\mathrm{v}}$, $f_{\mathrm{d}}$, $f_{\mathrm{c}}$ and the 180° rotation $r_2$ are their own inverse, because performing them twice brings the square back to its original orientation. The rotations $r_3$ and $r_1$ are each other's inverses, because rotating 90° and then rotating 270° (or vice versa) yields a rotation over 360° which leaves the square unchanged. This is easily verified on the table.
In contrast to the group of integers above, where the order of the operation is immaterial, it does matter in $\mathrm{D}_4$, as, for example, $f_{\mathrm{h}} \circ r_1 = f_{\mathrm{c}}$ but $r_1 \circ f_{\mathrm{h}} = f_{\mathrm{d}}$. In other words, $\mathrm{D}_4$ is not abelian.
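The non-commutativity can be seen concretely by modelling each symmetry as a permutation of the square's corner positions and composing the permutations. The corner numbering below (0 to 3, clockwise from top-left) is an assumption of this sketch, not the article's convention:

```python
# A hedged sketch: model two D4 symmetries as permutations of the square's
# corner positions 0..3, numbered clockwise. A permutation p is a tuple
# where p[i] is the position that position i is sent to.

def compose(b, a):
    """Apply symmetry a first, then b  --  the article's  b ∘ a."""
    return tuple(b[a[i]] for i in range(4))

r1  = (1, 2, 3, 0)  # rotation by 90 degrees clockwise
f_h = (3, 2, 1, 0)  # reflection about the horizontal middle line

print(compose(f_h, r1))  # (2, 1, 0, 3) -- one diagonal reflection
print(compose(r1, f_h))  # (0, 3, 2, 1) -- the other diagonal reflection
print(compose(f_h, r1) == compose(r1, f_h))  # False: D4 is not abelian
```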
History
The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois's ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation $\theta^n = 1$ (1854) gives the first abstract definition of a finite group.
Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884.
The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss's number-theoretical work Disquisitiones Arithmeticae (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers.
The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques (1870). Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time. As of the 20th century, groups gained wide recognition by the pioneering work of Ferdinand Georg Frobenius and William Burnside (who worked on representation theory of finite groups), Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups was studied by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by the work of Armand Borel and Jacques Tits.
The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research concerning this classification proof is ongoing. Group theory remains a highly active mathematical branch, impacting many other fields, as the examples below illustrate.
Elementary consequences of the group axioms
Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. For example, repeated applications of the associativity axiom show that the unambiguity of
$$a \cdot b \cdot c = (a \cdot b) \cdot c = a \cdot (b \cdot c)$$
generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted.
Uniqueness of identity element
The group axioms imply that the identity element is unique; that is, there exists only one identity element: any two identity elements $e$ and $f$ of a group are equal, because the group axioms imply $e = e \cdot f = f$. It is thus customary to speak of the identity element of the group.
Uniqueness of inverses
The group axioms also imply that the inverse of each element is unique. Let a group element $a$ have both $b$ and $c$ as inverses. Then
$$b = b \cdot e = b \cdot (a \cdot c) = (b \cdot a) \cdot c = e \cdot c = c.$$
Therefore, it is customary to speak of the inverse of an element.
Division
Given elements $a$ and $b$ of a group $G$, there is a unique solution $x$ in $G$ to the equation $a \cdot x = b$, namely $a^{-1} \cdot b$. It follows that for each $a$ in $G$, the function $G \to G$ that maps each $x$ to $a \cdot x$ is a bijection; it is called left multiplication by $a$ or left translation by $a$.
Similarly, given $a$ and $b$, the unique solution to $x \cdot a = b$ is $b \cdot a^{-1}$. For each $a$, the function $G \to G$ that maps each $x$ to $x \cdot a$ is a bijection called right multiplication by $a$ or right translation by $a$.
Equivalent definition with relaxed axioms
The group axioms for identity and inverses may be "weakened" to assert only the existence of a left identity and left inverses. From these one-sided axioms, one can prove that the left identity is also a right identity and a left inverse is also a right inverse for the same element. Since they define exactly the same structures as groups, collectively the axioms are not weaker.
In particular, assuming associativity and the existence of a left identity $e$ (that is, $e \cdot a = a$) and a left inverse $a^{-1}$ for each element $a$ (that is, $a^{-1} \cdot a = e$), one can show that every left inverse is also a right inverse of the same element as follows.
Indeed, one has
$$a \cdot a^{-1} = e \cdot (a \cdot a^{-1}) = \left((a^{-1})^{-1} \cdot a^{-1}\right) \cdot (a \cdot a^{-1}) = (a^{-1})^{-1} \cdot \left((a^{-1} \cdot a) \cdot a^{-1}\right) = (a^{-1})^{-1} \cdot (e \cdot a^{-1}) = (a^{-1})^{-1} \cdot a^{-1} = e.$$
Similarly, the left identity is also a right identity:
$$a \cdot e = a \cdot (a^{-1} \cdot a) = (a \cdot a^{-1}) \cdot a = e \cdot a = a.$$
These proofs require all three axioms (associativity, existence of left identity and existence of left inverse). For a structure with a looser definition (like a semigroup) one may have, for example, that a left identity is not necessarily a right identity.
The same result can be obtained by only assuming the existence of a right identity and a right inverse.
However, only assuming the existence of a left identity and a right inverse (or vice versa) is not sufficient to define a group. For example, consider the set $G = \{e, f\}$ with the operator $\star$ satisfying $e \star e = f \star e = e$ and $e \star f = f \star f = f$. This structure does have a left identity (namely, $e$), and each element has a right inverse (which is $e$ for both elements). Furthermore, this operation is associative (since the product of any number of elements is always equal to the rightmost element in that product, regardless of the order in which these operations are done). However, $(G, \star)$ is not a group, since it lacks a right identity.
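The structure is small enough to verify all of these claims exhaustively; a brief sketch:

```python
# Enumerating the two-element structure above: under the operation given,
# x ⋆ y always equals the right-hand argument y.

E, F = "e", "f"
elements = [E, F]
op = lambda x, y: y

# Associative: (x ⋆ y) ⋆ z == x ⋆ (y ⋆ z) == z for all triples.
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x in elements for y in elements for z in elements)
# e is a left identity: e ⋆ x == x for all x.
assert all(op(E, x) == x for x in elements)
# Every element has e as a right inverse: x ⋆ e == e (the left identity).
assert all(op(x, E) == E for x in elements)
# But no right identity exists: no r with x ⋆ r == x for all x.
assert not any(all(op(x, r) == x for x in elements) for r in elements)
print("left identity and right inverses, yet not a group")
```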
Basic concepts
When studying sets, one uses concepts such as subset, function, and quotient by an equivalence relation. When studying groups, one uses instead subgroups, homomorphisms, and quotient groups. These are the analogues that take the group structure into account.
Group homomorphisms
Group homomorphisms are functions that respect group structure; they may be used to relate two groups. A homomorphism from a group $(G, \cdot)$ to a group $(H, *)$ is a function $\varphi : G \to H$ such that
$$\varphi(a \cdot b) = \varphi(a) * \varphi(b) \quad \text{for all elements } a, b \text{ of } G.$$
It would be natural to require also that $\varphi$ respect identities, $\varphi(1_G) = 1_H$, and inverses, $\varphi(a^{-1}) = \varphi(a)^{-1}$ for all $a$ in $G$. However, these additional requirements need not be included in the definition of homomorphisms, because they are already implied by the requirement of respecting the group operation.
The identity homomorphism of a group $G$ is the homomorphism $\iota_G : G \to G$ that maps each element of $G$ to itself. An inverse homomorphism of a homomorphism $\varphi : G \to H$ is a homomorphism $\psi : H \to G$ such that $\psi \circ \varphi = \iota_G$ and $\varphi \circ \psi = \iota_H$, that is, such that $\psi(\varphi(g)) = g$ for all $g$ in $G$ and such that $\varphi(\psi(h)) = h$ for all $h$ in $H$. An isomorphism is a homomorphism that has an inverse homomorphism; equivalently, it is a bijective homomorphism. Groups $G$ and $H$ are called isomorphic if there exists an isomorphism $\varphi : G \to H$. In this case, $H$ can be obtained from $G$ simply by renaming its elements according to the function $\varphi$; then any statement true for $G$ is true for $H$, provided that any specific elements mentioned in the statement are also renamed.
The collection of all groups, together with the homomorphisms between them, form a category, the category of groups.
An injective homomorphism $\varphi : G' \to G$ factors canonically as an isomorphism followed by an inclusion, $G' \xrightarrow{\;\sim\;} H \hookrightarrow G$ for some subgroup $H$ of $G$.
Injective homomorphisms are the monomorphisms in the category of groups.
Subgroups
Informally, a subgroup is a group $H$ contained within a bigger one, $G$: it has a subset of the elements of $G$, with the same operation. Concretely, this means that the identity element of $G$ must be contained in $H$, and whenever $h_1$ and $h_2$ are both in $H$, then so are $h_1 \cdot h_2$ and $h_1^{-1}$, so the elements of $H$, equipped with the group operation on $G$ restricted to $H$, indeed form a group. In this case, the inclusion map $H \to G$ is a homomorphism.
In the example of symmetries of a square, the identity and the rotations constitute a subgroup $R = \{\mathrm{id}, r_1, r_2, r_3\}$, highlighted in red in the Cayley table of the example: any two rotations composed are still a rotation, and a rotation can be undone by (i.e., is inverse to) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270°. The subgroup test provides a necessary and sufficient condition for a nonempty subset $H$ of a group $G$ to be a subgroup: it is sufficient to check that $g^{-1} \cdot h \in H$ for all elements $g$ and $h$ in $H$. Knowing a group's subgroups is important in understanding the group as a whole.
Given any subset $S$ of a group $G$, the subgroup generated by $S$ consists of all products of elements of $S$ and their inverses. It is the smallest subgroup of $G$ containing $S$. In the example of symmetries of a square, the subgroup generated by $r_2$ and $f_{\mathrm{v}}$ consists of these two elements, the identity element $\mathrm{id}$, and the element $f_{\mathrm{h}} = f_{\mathrm{v}} \circ r_2$. Again, this is a subgroup, because combining any two of these four elements or their inverses (which are, in this particular case, these same elements) yields an element of this subgroup.
Cosets
In many situations it is desirable to consider two group elements the same if they differ by an element of a given subgroup. For example, in the symmetry group of a square, once any reflection is performed, rotations alone cannot return the square to its original position, so one can think of the reflected positions of the square as all being equivalent to each other, and as inequivalent to the unreflected positions; the rotation operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this insight: a subgroup $H$ determines left and right cosets, which can be thought of as translations of $H$ by an arbitrary group element $g$. In symbolic terms, the left and right cosets of $H$, containing an element $g$, are
$$gH = \{g \cdot h : h \in H\} \quad \text{and} \quad Hg = \{h \cdot g : h \in H\},$$
respectively.
The left cosets of any subgroup $H$ form a partition of $G$; that is, the union of all left cosets is equal to $G$ and two left cosets are either equal or have an empty intersection. The first case $g_1 H = g_2 H$ happens precisely when $g_1^{-1} \cdot g_2 \in H$, i.e., when the two elements differ by an element of $H$. Similar considerations apply to the right cosets of $H$. The left cosets of $H$ may or may not be the same as its right cosets. If they are (that is, if all $g$ in $G$ satisfy $gH = Hg$), then $H$ is said to be a normal subgroup.
In $\mathrm{D}_4$, the group of symmetries of a square, with its subgroup $R$ of rotations, the left cosets $gR$ are either equal to $R$, if $g$ is an element of $R$ itself, or otherwise equal to $U = f_{\mathrm{c}} R = \{f_{\mathrm{c}}, f_{\mathrm{d}}, f_{\mathrm{v}}, f_{\mathrm{h}}\}$ (highlighted in green in the Cayley table of $\mathrm{D}_4$). The subgroup $R$ is normal, because $f_{\mathrm{c}} R = U = R f_{\mathrm{c}}$ and similarly for the other elements of the group. (In fact, in the case of $\mathrm{D}_4$, the cosets generated by reflections are all equal: $f_{\mathrm{h}} R = f_{\mathrm{v}} R = f_{\mathrm{d}} R = f_{\mathrm{c}} R$.)
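The partition property is easy to illustrate computationally. A small sketch (not from the article) using the subgroup H = {0, 3, 6, 9} of the additive group Z_12; since Z_12 is abelian, H is automatically normal and left and right cosets coincide:

```python
# Left cosets of H = {0, 3, 6, 9} in Z_12: they either coincide or are
# disjoint, and together they partition the whole group.

n = 12
H = {0, 3, 6, 9}
cosets = {frozenset((g + h) % n for h in H) for g in range(n)}

for c in sorted(cosets, key=min):
    print(sorted(c))
# [0, 3, 6, 9], [1, 4, 7, 10], [2, 5, 8, 11]  -- three disjoint cosets
assert set().union(*cosets) == set(range(n))  # the cosets cover Z_12
```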
Quotient groups
Suppose that $N$ is a normal subgroup of a group $G$, and
$$G/N = \{gN : g \in G\}$$
denotes its set of cosets.
Then there is a unique group law on $G/N$ for which the map $G \to G/N$ sending each element $g$ to $gN$ is a homomorphism.
Explicitly, the product of two cosets $gN$ and $hN$ is $(gh)N$, the coset $eN = N$ serves as the identity of $G/N$, and the inverse of $gN$ in the quotient group is $(gN)^{-1} = (g^{-1})N$.
The group $G/N$, read as "$G$ modulo $N$", is called a quotient group or factor group.
The quotient group can alternatively be characterized by a universal property.
The elements of the quotient group $\mathrm{D}_4 / R$ are $R$ and $f_{\mathrm{v}} R$. The group operation on the quotient is shown in the table. For example, $R \cdot f_{\mathrm{v}} R = f_{\mathrm{v}} R$. Both the subgroup $R$ and the quotient $\mathrm{D}_4 / R$ are abelian, but $\mathrm{D}_4$ is not. Sometimes a group can be reconstructed from a subgroup and quotient (plus some additional data), by the semidirect product construction; $\mathrm{D}_4$ is an example.
The first isomorphism theorem implies that any surjective homomorphism $\varphi : G \to H$ factors canonically as a quotient homomorphism followed by an isomorphism: $G \to G/\ker\varphi \xrightarrow{\;\sim\;} H$.
Surjective homomorphisms are the epimorphisms in the category of groups.
Presentations
Every group is isomorphic to a quotient of a free group, in many ways.
For example, the dihedral group $\mathrm{D}_4$ is generated by the right rotation $r_1$ and the reflection $f_{\mathrm{v}}$ in a vertical line (every element of $\mathrm{D}_4$ is a finite product of copies of these and their inverses).
Hence there is a surjective homomorphism $\varphi$ from the free group $\langle r, f \rangle$ on two generators to $\mathrm{D}_4$ sending $r$ to $r_1$ and $f$ to $f_{\mathrm{v}}$.
Elements in $\ker \varphi$ are called relations; examples include $r^4$, $f^2$, $(r \cdot f)^2$.
In fact, it turns out that $\ker \varphi$ is the smallest normal subgroup of $\langle r, f \rangle$ containing these three elements; in other words, all relations are consequences of these three.
The quotient of the free group by this normal subgroup is denoted $\langle r, f \mid r^4 = f^2 = (r \cdot f)^2 = 1 \rangle$.
This is called a presentation of $\mathrm{D}_4$ by generators and relations, because the first isomorphism theorem for $\varphi$ yields an isomorphism $\langle r, f \mid r^4 = f^2 = (r \cdot f)^2 = 1 \rangle \to \mathrm{D}_4$.
A presentation of a group can be used to construct the Cayley graph, a graphical depiction of a discrete group.
Examples and applications
Examples and applications of groups abound. A starting point is the group of integers with addition as group operation, introduced above. If instead of addition multiplication is considered, one obtains multiplicative groups. These groups are predecessors of important constructions in abstract algebra.
Groups are also applied in many other mathematical areas. Mathematical objects are often examined by associating groups to them and studying the properties of the corresponding groups. For example, Henri Poincaré founded what is now called algebraic topology by introducing the fundamental group. By means of this connection, topological properties such as proximity and continuity translate into properties of groups.
Elements of the fundamental group of a topological space are equivalence classes of loops, where loops are considered equivalent if one can be smoothly deformed into another, and the group operation is "concatenation" (tracing one loop then the other). For example, as shown in the figure, if the topological space is the plane with one point removed, then loops which do not wrap around the missing point (blue) can be smoothly contracted to a single point and are the identity element of the fundamental group. A loop which wraps around the missing point $k$ times cannot be deformed into a loop which wraps $m$ times (with $m \neq k$), because the loop cannot be smoothly deformed across the hole, so each class of loops is characterized by its winding number around the missing point. The resulting group is isomorphic to the integers under addition.
In more recent applications, the influence has also been reversed to motivate geometric constructions by a group-theoretical background. In a similar vein, geometric group theory employs geometric concepts, for example in the study of hyperbolic groups. Further branches crucially applying groups include algebraic geometry and number theory.
In addition to the above theoretical applications, many practical applications of groups exist. Cryptography relies on the combination of the abstract group theory approach together with algorithmical knowledge obtained in computational group theory, in particular when implemented for finite groups. Applications of group theory are not restricted to mathematics; sciences such as physics, chemistry and computer science benefit from the concept.
Numbers
Many number systems, such as the integers and the rationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known as rings and fields. Further abstract algebraic concepts such as modules, vector spaces and algebras also form groups.
Integers
The group of integers under addition, denoted $(\mathbb{Z}, +)$, has been described above. The integers, with the operation of multiplication instead of addition, do not form a group. The associativity and identity axioms are satisfied, but inverses do not exist: for example, $a = 2$ is an integer, but the only solution to the equation $a \cdot b = 1$ in this case is $b = \tfrac{1}{2}$, which is a rational number, but not an integer. Hence not every element of $\mathbb{Z}$ has a (multiplicative) inverse.
Rationals
The desire for the existence of multiplicative inverses suggests considering fractions
$$\frac{a}{b}.$$
Fractions of integers (with $b$ nonzero) are known as rational numbers. The set of all such irreducible fractions is commonly denoted $\mathbb{Q}$. There is still a minor obstacle for $(\mathbb{Q}, \cdot)$, the rationals with multiplication, being a group: because zero does not have a multiplicative inverse (i.e., there is no $x$ such that $x \cdot 0 = 1$), $(\mathbb{Q}, \cdot)$ is still not a group.
However, the set of all nonzero rational numbers $\mathbb{Q} \smallsetminus \{0\}$ does form an abelian group under multiplication, also denoted $\mathbb{Q}^{\times}$. Associativity and identity element axioms follow from the properties of integers. The closure requirement still holds true after removing zero, because the product of two nonzero rationals is never zero. Finally, the inverse of $a/b$ is $b/a$, therefore the axiom of the inverse element is satisfied.
The rational numbers (including zero) also form a group under addition. Intertwining addition and multiplication operations yields more complicated structures called rings and – if division by other than zero is possible, such as in $\mathbb{Q}$ – fields, which occupy a central position in abstract algebra. Group theoretic arguments therefore underlie parts of the theory of those entities.
Modular arithmetic
Modular arithmetic for a modulus $n$ defines any two elements $a$ and $b$ that differ by a multiple of $n$ to be equivalent, denoted by $a \equiv b \pmod{n}$. Every integer is equivalent to one of the integers from $0$ to $n - 1$, and the operations of modular arithmetic modify normal arithmetic by replacing the result of any operation by its equivalent representative. Modular addition, defined in this way for the integers from $0$ to $n - 1$, forms a group, denoted as $\mathrm{Z}_n$ or $(\mathbb{Z}/n\mathbb{Z}, +)$, with $0$ as the identity element and $n - a$ as the inverse element of $a$.
A familiar example is addition of hours on the face of a clock, where 12 rather than 0 is chosen as the representative of the identity. If the hour hand is on $9$ and is advanced $4$ hours, it ends up on $1$, as shown in the illustration. This is expressed by saying that $9 + 4$ is congruent to $1$ "modulo $12$" or, in symbols,
$$9 + 4 \equiv 1 \pmod{12}.$$
For any prime number $p$, there is also the multiplicative group of integers modulo $p$. Its elements can be represented by $1$ to $p - 1$. The group operation, multiplication modulo $p$, replaces the usual product by its representative, the remainder of division by $p$. For example, for $p = 5$, the four group elements can be represented by $1, 2, 3, 4$. In this group, $4 \cdot 4 \equiv 1$, because the usual product $16$ is equivalent to $1$: when divided by $5$ it yields a remainder of $1$. The primality of $p$ ensures that the usual product of two representatives is not divisible by $p$, and therefore that the modular product is nonzero. The identity element is represented by $1$, and associativity follows from the corresponding property of the integers. Finally, the inverse element axiom requires that given an integer $a$ not divisible by $p$, there exists an integer $b$ such that
$$a \cdot b \equiv 1 \pmod{p},$$
that is, such that $p$ evenly divides $a \cdot b - 1$. The inverse $b$ can be found by using Bézout's identity and the fact that the greatest common divisor $\gcd(a, p)$ equals $1$. In the case $p = 5$ above, the inverse of the element represented by $4$ is that represented by $4$, and the inverse of the element represented by $3$ is represented by $2$, as $3 \cdot 2 = 6 \equiv 1 \pmod{5}$. Hence all group axioms are fulfilled. This example is similar to $(\mathbb{Q} \smallsetminus \{0\}, \cdot)$ above: it consists of exactly those elements in the ring $\mathbb{Z}/p\mathbb{Z}$ that have a multiplicative inverse. These groups, denoted $\mathbb{F}_p^{\times}$, are crucial to public-key cryptography.
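The Bézout-based inverse is straightforward to compute. A short sketch using the extended Euclidean algorithm, cross-checked against Python's built-in pow(a, -1, p) (available since Python 3.8):

```python
def inverse_mod(a: int, p: int) -> int:
    """Return b with a*b ≡ 1 (mod p), via the extended Euclidean algorithm."""
    old_r, r = a % p, p   # remainders; invariant: old_r ≡ old_s * a (mod p)
    old_s, s = 1, 0       # Bézout coefficients of a
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % p      # old_r == gcd(a, p) == 1 when p is prime and p does not divide a

p = 5
print(inverse_mod(4, p), pow(4, -1, p))  # 4 4   (4*4 = 16 ≡ 1 mod 5)
print(inverse_mod(3, p), pow(3, -1, p))  # 2 2   (3*2 = 6  ≡ 1 mod 5)
```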
Cyclic groups
A cyclic group is a group all of whose elements are powers of a particular element $a$. In multiplicative notation, the elements of the group are
$$\ldots, a^{-3}, a^{-2}, a^{-1}, a^0, a, a^2, a^3, \ldots,$$
where $a^2$ means $a \cdot a$, and $a^{-3}$ stands for $a^{-1} \cdot a^{-1} \cdot a^{-1} = (a \cdot a \cdot a)^{-1}$, etc. Such an element $a$ is called a generator or a primitive element of the group. In additive notation, the requirement for an element to be primitive is that each element of the group can be written as
$$\ldots, -a - a, \; -a, \; 0, \; a, \; a + a, \ldots$$
In the groups $(\mathbb{Z}/n\mathbb{Z}, +)$ introduced above, the element $1$ is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose terms are $1$. Any cyclic group with $n$ elements is isomorphic to this group. A second example for cyclic groups is the group of $n$th complex roots of unity, given by complex numbers $z$ satisfying $z^n = 1$. These numbers can be visualized as the vertices on a regular $n$-gon, as shown in blue in the image for $n = 6$. The group operation is multiplication of complex numbers. In the picture, multiplying with $z$ corresponds to a counter-clockwise rotation by 60°. From field theory, the group $\mathbb{F}_p^{\times}$ is cyclic for prime $p$: for example, if $p = 5$, $3$ is a generator since $3^1 = 3$, $3^2 = 9 \equiv 4$, $3^3 \equiv 2$, and $3^4 \equiv 1$.
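The generator claim for p = 5 can be checked directly by listing powers; a brief sketch:

```python
# The powers of 3 modulo 5 run through every nonzero residue, so 3 generates
# the cyclic group F_5^x; the powers of 4 only reach a proper subgroup.

def powers_mod(g: int, p: int):
    return {pow(g, k, p) for k in range(1, p)}

print(sorted(powers_mod(3, 5)))  # [1, 2, 3, 4] -- 3 is a generator
print(sorted(powers_mod(4, 5)))  # [1, 4]       -- 4 generates only a subgroup
```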
Some cyclic groups have an infinite number of elements. In these groups, for every non-zero element $a$, all the powers of $a$ are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to $(\mathbb{Z}, +)$, the group of integers under addition introduced above. As these two prototypes are both abelian, so are all cyclic groups.
The study of finitely generated abelian groups is quite mature, including the fundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such as center and commutator, describe the extent to which a given group is not abelian.
Symmetry groups
Symmetry groups are groups consisting of symmetries of given mathematical objects, principally geometric entities, such as the symmetry group of the square given as an introductory example above, although they also arise in algebra such as the symmetries among the roots of polynomial equations dealt with in Galois theory (see below). Conceptually, group theory can be thought of as the study of symmetry. Symmetries in mathematics greatly simplify the study of geometrical or analytical objects. A group $G$ is said to act on another mathematical object $X$ if every group element can be associated to some operation on $X$ and the composition of these operations follows the group law. For example, an element of the (2,3,7) triangle group acts on a triangular tiling of the hyperbolic plane by permuting the triangles. By a group action, the group pattern is connected to the structure of the object being acted on.
In chemistry, point groups describe molecular symmetries, while space groups describe crystal symmetries in crystallography. These symmetries underlie the chemical and physical behavior of these systems, and group theory enables simplification of quantum mechanical analysis of these properties. For example, group theory is used to show that optical transitions between certain quantum levels cannot occur simply because of the symmetry of the states involved.
Group theory helps predict the changes in physical properties that occur when a material undergoes a phase transition, for example, from a cubic to a tetrahedral crystalline form. An example is ferroelectric materials, where the change from a paraelectric to a ferroelectric state occurs at the Curie temperature and is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric state, accompanied by a so-called soft phonon mode, a vibrational lattice mode that goes to zero frequency at the transition.
Such spontaneous symmetry breaking has found further application in elementary particle physics, where its occurrence is related to the appearance of Goldstone bosons.
Finite symmetry groups such as the Mathieu groups are used in coding theory, which is in turn applied in error correction of transmitted data, and in CD players. Another application is differential Galois theory, which characterizes functions having antiderivatives of a prescribed form, giving group-theoretic criteria for when solutions of certain differential equations are well-behaved. Geometric properties that remain stable under group actions are investigated in (geometric) invariant theory.
General linear group and representation theory
Matrix groups consist of matrices together with matrix multiplication. The general linear group $\mathrm{GL}(n, \mathbb{R})$ consists of all invertible $n$-by-$n$ matrices with real entries. Its subgroups are referred to as matrix groups or linear groups. The dihedral group example mentioned above can be viewed as a (very small) matrix group. Another important matrix group is the special orthogonal group $\mathrm{SO}(n)$. It describes all possible rotations in $n$ dimensions. Rotation matrices in this group are used in computer graphics.
Representation theory is both an application of the group concept and important for a deeper understanding of groups. It studies the group by its group actions on other spaces. A broad class of group representations are linear representations in which the group acts on a vector space, such as the three-dimensional Euclidean space R^3. A representation of a group G on an n-dimensional real vector space is simply a group homomorphism
$$\rho \colon G \to \mathrm{GL}(n, \mathbf{R})$$
from the group to the general linear group. This way, the group operation, which may be abstractly given, translates to the multiplication of matrices, making it accessible to explicit computations.
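The following is an illustrative Python sketch (not from the original article), assuming NumPy; the choice of the cyclic group Z_6 and of 2×2 rotation matrices is purely for illustration. It checks the homomorphism property that defines a representation:

```python
# A minimal sketch: the cyclic group Z_6 represented by 2x2 rotation
# matrices, i.e. a homomorphism rho: Z_6 -> GL(2, R).
import numpy as np

def rho(k: int, n: int) -> np.ndarray:
    """Map the residue k (mod n) to a rotation by 2*pi*k/n."""
    t = 2 * np.pi * k / n
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

n = 6
for a in range(n):
    for b in range(n):
        # The homomorphism property: rho(a + b) = rho(a) @ rho(b).
        assert np.allclose(rho((a + b) % n, n), rho(a, n) @ rho(b, n))
print("rho is a homomorphism from Z_6 into GL(2, R)")
```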
A group action gives further means to study the object being acted on. On the other hand, it also yields information about the group. Group representations are an organizing principle in the theory of finite groups, Lie groups, algebraic groups and topological groups, especially (locally) compact groups.
Galois groups
Galois groups were developed to help solve polynomial equations by capturing their symmetry features. For example, the solutions of the quadratic equation $ax^2 + bx + c = 0$ are given by
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$
Each solution can be obtained by replacing the ± sign by + or −; analogous formulae are known for cubic and quartic equations, but do not exist in general for degree 5 and higher. In the quadratic formula, changing the sign (permuting the resulting two solutions) can be viewed as a (very simple) group operation. Analogous Galois groups act on the solutions of higher-degree polynomial equations and are closely related to the existence of formulas for their solution. Abstract properties of these groups (in particular their solvability) give a criterion for the ability to express the solutions of these polynomials using solely addition, multiplication, and roots similar to the formula above.
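A small worked sketch of this idea in Python (our illustration; the coefficient values are arbitrary): swapping the sign in the formula permutes the two roots, while the symmetric combinations determined by the coefficients — the sum −b/a and the product c/a — are left unchanged, which is the kind of invariant a Galois group preserves:

```python
# Roots of a*x^2 + b*x + c: the "+/-" choice permutes them, but their sum
# and product are fixed by the coefficients.
import cmath

a, b, c = 1, -3, 2                 # x^2 - 3x + 2 = (x - 1)(x - 2)
d = cmath.sqrt(b * b - 4 * a * c)
x_plus  = (-b + d) / (2 * a)
x_minus = (-b - d) / (2 * a)

print(x_plus, x_minus)                     # (2+0j) (1+0j)
print(x_plus + x_minus, x_plus * x_minus)  # sum = -b/a = 3, product = c/a = 2
```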
Modern Galois theory generalizes the above type of Galois groups by shifting to field theory and considering field extensions formed as the splitting field of a polynomial. This theory establishes—via the fundamental theorem of Galois theory—a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics.
Finite groups
A group is called finite if it has a finite number of elements. The number of elements is called the order of the group. An important class is the symmetric groups S_N, the groups of permutations of N objects. For example, the symmetric group on 3 letters S_3 is the group of all possible reorderings of the objects. The three letters ABC can be reordered into ABC, ACB, BAC, BCA, CAB, CBA, forming in total 3! = 6 elements. The group operation is composition of these reorderings, and the identity element is the reordering operation that leaves the order unchanged. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group S_N for a suitable integer N, according to Cayley's theorem. Parallel to the group of symmetries of the square above, S_3 can also be interpreted as the group of symmetries of an equilateral triangle.
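An illustrative Python sketch (not part of the original article) that enumerates S_3, encoding each permutation as a tuple and using composition of reorderings as the group operation:

```python
# S_3 as all permutations of three positions; composition is the group law.
from itertools import permutations

elements = list(permutations(range(3)))    # 3! = 6 permutations
print(len(elements))                       # 6

def compose(p, q):
    """Apply q first, then p (composition of reorderings)."""
    return tuple(p[q[i]] for i in range(3))

identity = (0, 1, 2)
for p in elements:
    # Closure: composing any two permutations stays inside S_3.
    assert all(compose(p, q) in elements for q in elements)
    # Invertibility: some q composes with p to give the identity.
    assert any(compose(p, q) == identity for q in elements)
```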
The order of an element a in a group G is the least positive integer n such that a^n = e, where a^n represents
$$\underbrace{a \cdots a}_{n\ \text{factors}},$$
that is, application of the operation "⋅" to n copies of a. (If "⋅" represents multiplication, then a^n corresponds to the nth power of a.) In infinite groups, such an n may not exist, in which case the order of a is said to be infinity. The order of an element equals the order of the cyclic subgroup generated by this element.
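For instance (an illustrative sketch, not from the article), the order of an element of the multiplicative group of integers modulo n can be found by repeated application of the operation:

```python
# Order of an element in the group of units modulo n: the least k with
# a**k = 1 (mod n). Only invertible residues (gcd(a, n) = 1) qualify.
from math import gcd

def element_order(a: int, n: int) -> int:
    assert gcd(a, n) == 1, "a must be invertible modulo n"
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print(element_order(2, 7))   # 3, since 2^3 = 8 = 1 (mod 7)
print(element_order(3, 7))   # 6, so 3 generates the whole group
```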
More sophisticated counting techniques, for example, counting cosets, yield more precise statements about finite groups: Lagrange's Theorem states that for a finite group G the order of any finite subgroup H divides the order of G. The Sylow theorems give a partial converse.
The dihedral group of symmetries of a square is a finite group of order 8. In this group, the order of the rotation r is 4, as is the order of the subgroup that this element generates. The order of each reflection element is 2. Both orders divide 8, as predicted by Lagrange's theorem. The groups of multiplication modulo a prime p have order p − 1.
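Reusing element_order from the sketch above, Lagrange's prediction can be checked numerically for the multiplicative group modulo the prime 13 (our illustration):

```python
# In the multiplicative group modulo p = 13 (group order p - 1 = 12), every
# element's order divides 12, as Lagrange's theorem requires for the cyclic
# subgroup each element generates.
p = 13
orders = {a: element_order(a, p) for a in range(1, p)}
print(orders)                                  # e.g. the order of 2 is 12
assert all((p - 1) % k == 0 for k in orders.values())
```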
Finite abelian groups
Any finite abelian group is isomorphic to a product of finite cyclic groups; this statement is part of the fundamental theorem of finitely generated abelian groups.
Any group of prime order p is isomorphic to the cyclic group Z_p (a consequence of Lagrange's theorem).
Any group of order p^2 is abelian, isomorphic to Z_{p^2} or Z_p × Z_p.
But there exist nonabelian groups of order p^3; the dihedral group of order 8 = 2^3 above is an example.
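A quick sketch (our illustration) confirming nonabelianness of this dihedral group, realized as permutations of the square's corner labels 0–3:

```python
# A rotation r and a reflection f of the square, written as permutations of
# the corner labels 0..3 (p[i] is the image of corner i).
r = (1, 2, 3, 0)      # rotate by 90 degrees: corner i goes to i + 1 (mod 4)
f = (1, 0, 3, 2)      # reflection swapping corners 0<->1 and 2<->3

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

print(compose(r, f))                     # (2, 1, 0, 3)
print(compose(f, r))                     # (0, 3, 2, 1)
assert compose(r, f) != compose(f, r)    # the group is nonabelian
```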
Simple groups
When a group G has a normal subgroup N other than the trivial subgroup and G itself, questions about G can sometimes be reduced to questions about N and G/N. A nontrivial group is called simple if it has no such normal subgroup. Finite simple groups are to finite groups as prime numbers are to positive integers: they serve as building blocks, in a sense made precise by the Jordan–Hölder theorem.
Classification of finite simple groups
Computer algebra systems have been used to list all groups of order up to 2000.
But classifying all finite groups is a problem considered too hard to be solved.
The classification of all finite simple groups was a major achievement in contemporary group theory. There are several infinite families of such groups, as well as 26 "sporadic groups" that do not belong to any of the families. The largest sporadic group is called the monster group. The monstrous moonshine conjectures, proved by Richard Borcherds, relate the monster group to certain modular functions.
The gap between the classification of simple groups and the classification of all groups lies in the extension problem.
Groups with additional structure
An equivalent definition of group consists of replacing the "there exist" part of the group axioms by operations whose result is the element that must exist. So, a group is a set equipped with a binary operation (the group operation), a unary operation (which provides the inverse) and a nullary operation, which has no operand and results in the identity element. Otherwise, the group axioms are exactly the same. This variant of the definition avoids existential quantifiers and is used in computing with groups and for computer-aided proofs.
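A minimal Python sketch of this equational definition (our illustration), packaging a carrier set with the binary, unary, and nullary operations and checking the axioms by brute force:

```python
# A group presented as data: carrier set, binary operation, unary inverse,
# and a nullary operation (the identity constant). No existential
# quantifiers remain in the axioms below.
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass(frozen=True)
class Group:
    carrier: FrozenSet[int]
    op: Callable[[int, int], int]    # binary operation
    inv: Callable[[int], int]        # unary inverse
    e: int                           # nullary operation: identity element

def satisfies_axioms(g: Group) -> bool:
    S, op, inv, e = g.carrier, g.op, g.inv, g.e
    assoc = all(op(a, op(b, c)) == op(op(a, b), c)
                for a in S for b in S for c in S)
    ident = all(op(e, a) == a == op(a, e) for a in S)
    inver = all(op(a, inv(a)) == e == op(inv(a), a) for a in S)
    return assoc and ident and inver

# Integers modulo 5 under addition:
z5 = Group(frozenset(range(5)), lambda a, b: (a + b) % 5, lambda a: (-a) % 5, 0)
print(satisfies_axioms(z5))   # True
```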
This way of defining groups lends itself to generalizations such as the notion of group object in a category. Briefly, this is an object with morphisms that mimic the group axioms.
Topological groups
Some topological spaces may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions; informally, g ⋅ h and g^−1 must not vary wildly if g and h vary only a little. Such groups are called topological groups, and they are the group objects in the category of topological spaces. The most basic examples are the group of real numbers under addition and the group of nonzero real numbers under multiplication. Similar examples can be formed from any other topological field, such as the field of complex numbers or the field of p-adic numbers. These examples are locally compact, so they have Haar measures and can be studied via harmonic analysis. Other locally compact topological groups include the group of points of an algebraic group over a local field or adele ring; these are basic to number theory. Galois groups of infinite algebraic field extensions are equipped with the Krull topology, which plays a role in infinite Galois theory. A generalization used in algebraic geometry is the étale fundamental group.
Lie groups
A Lie group is a group that also has the structure of a differentiable manifold; informally, this means that it looks locally like a Euclidean space of some fixed dimension. Again, the definition requires the additional structure, here the manifold structure, to be compatible: the multiplication and inverse maps are required to be smooth.
A standard example is the general linear group introduced above: it is an open subset of the space of all n-by-n matrices, because it is given by the inequality
$$\det(A) \neq 0,$$
where A denotes an n-by-n matrix.
Lie groups are of fundamental importance in modern physics: Noether's theorem links continuous symmetries to conserved quantities. Rotation, as well as translations in space and time, are basic symmetries of the laws of mechanics. They can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically lead to significant simplification in the equations one needs to solve to provide a physical description. Another example is the group of Lorentz transformations, which relate measurements of time and velocity of two observers in motion relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations as a rotational symmetry of Minkowski space. The latter serves—in the absence of significant gravitation—as a model of spacetime in special relativity. The full symmetry group of Minkowski space, i.e., including translations, is known as the Poincaré group. By the above, it plays a pivotal role in special relativity and, by implication, for quantum field theories. Symmetries that vary with location are central to the modern description of physical interactions with the help of gauge theory. An important example of a gauge theory is the Standard Model, which describes three of the four known fundamental forces and classifies all known elementary particles.
Generalizations
More general structures may be defined by relaxing some of the axioms defining a group. For example, dropping certain axioms yields structures such as magmas, semigroups, monoids, and groupoids, several of which are described below.
For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called a monoid. The natural numbers N (including zero) under addition form a monoid, as do the nonzero integers under multiplication (Z ∖ {0}, ·). Adjoining inverses of all elements of the monoid (Z ∖ {0}, ·) produces the group (Q ∖ {0}, ·), and likewise adjoining inverses to any (abelian) monoid M produces a group known as the Grothendieck group of M.
A group can be thought of as a small category with one object x in which every morphism is an isomorphism: given such a category, the set Hom(x, x) is a group; conversely, given a group G, one can build a small category with one object x in which Hom(x, x) ≅ G.
More generally, a groupoid is any small category in which every morphism is an isomorphism.
In a groupoid, the set of all morphisms in the category is usually not a group, because the composition is only partially defined: f ∘ g is defined only when the source of f matches the target of g.
Groupoids arise in topology (for instance, the fundamental groupoid) and in the theory of stacks.
Finally, it is possible to generalize any of these concepts by replacing the binary operation with an n-ary operation (i.e., an operation taking n arguments, for some nonnegative integer n). With the proper generalization of the group axioms, this gives a notion of n-ary group.
See also
List of group theory topics
Notes
Citations
References
External links
Algebraic structures
Symmetry
Wildlife of Guinea
The wildlife of Guinea is very diverse due to its wide variety of habitats. The southern part of the country lies within the Guinean Forests of West Africa biodiversity hotspot, while the north-east is characterized by dry savanna woodlands. Ecoregions of Guinea are the Western Guinean lowland forests, Guinean montane forests, Guinean forest–savanna mosaic, West Sudanian savanna, and Guinean mangroves.
Populations of large mammals are restricted to uninhabited distant parts of parks and reserves, and those populations are declining. Strongholds of Guinean wildlife are Pinselly Classified Forest, National Park of Upper Niger, Badiar National Park, Mount Nimba Strict Nature Reserve, Ziama Massif, Bossou Hills Reserve, and Diécké Classified Forest.
Fauna
Mammals
Birds
Blue-headed wood-dove
Iris glossy-starling
White-necked rockfowl
White-breasted guineafowl
Reptiles
Amphibians
Insects
Butterflies and moths
Flora
References
Biota of Guinea
Guinea
Nature conservation in Guinea
Steelcase
Steelcase Inc. is an international manufacturer of furniture, casegoods, seating, and storage and partitioning systems for offices, hospitals, classrooms, and residential interiors. It is headquartered in Grand Rapids, Michigan, United States.
History
Originally known as The Metal Office Furniture Company, Steelcase was founded by Peter Martin Wege in 1912. Prior to starting the company, Wege had filed approximately 25 patents related to the sheet metal and fireproofing industries. The Metal Office Furniture Company's first products included fireproof metal safes and four-drawer metal filing cabinets.
In 1914, the company received its first product patent for "The Victor", a fireproof steel wastebasket. The Victor gained popularity due to its light weight—achieved through a patented process of bending flat steel at right angles to create boxes—and its ability to prevent fires at a time when smoking was common indoors, particularly in the workplace. In 1915, the company began manufacturing and distributing steel desks after designing and producing 200 for Boston's first skyscraper, the Custom House Tower. In 1937, the company collaborated with Frank Lloyd Wright on office furniture for the Johnson Wax Headquarters. The partnership lasted two years and resulted in some of the first modern workstations.
The name Steelcase was a result of an advertising campaign to promote metal office furniture over wood and was trademarked in 1921. The company officially changed its name to Steelcase, Inc. in 1954.
The company became an industry leader in the late 1960s due to the volume of its sales. Steelcase expanded into new markets during the 1970s, including Asia, Europe, and North Africa. In 1973, the company debuted the Series 9000 furniture line, a panel-based office system that became a best seller and the company's flagship brand. That same year, the company delivered the largest single furniture shipment to the then-new Sears Tower. The delivery included 43,565 pieces of furniture and furnished 44 floors.
During the 1980s and 1990s, Steelcase worked closely with architects and interior designers to develop products as well as the company's own workspace in Grand Rapids. The company's current headquarters were built in 1983 at 901 44th St. SE in Grand Rapids, Michigan. In 1989, Steelcase opened the pyramid-shaped Steelcase Inc. Corporate Development Center. The center contained ten research laboratories and workspaces meant to encourage interdisciplinary collaboration on product development. Steelcase vacated the Pyramid in 2010, and the Pyramid was sold to Switch in 2016. In 1996, Steelcase became the majority stakeholder in design firm IDEO, and the firm's CEO, David M. Kelley, became Steelcase's vice president of technical discovery and innovation. Steelcase sold its shares back to IDEO's managers starting in 2007.
In 1996, Steelcase was found at fault in a patent infringement suit brought against them by Haworth, Inc., another furniture company. Steelcase was ordered to pay $211.5 million in damages and interest, thus ending a 17-year dispute with Haworth.
Steelcase became a publicly traded company in 1998 under the symbol SCS. During the 2000s, Steelcase reorganized its workforce and began integrating modern technologies in its products. In 2000, the company opened Steelcase University, a center for ongoing employee development and learning. Steelcase's wood furniture plant in Caledonia, MI earned LEED certification in 2001, becoming the first plant to receive the certification. In 2002, Steelcase partnered with IBM to create BlueSpace, a "smart office" prototype designed using new office technologies. In 2010, Steelcase and IDEO launched new models for higher education classrooms called LearnLabs.
In January 2016, the company recalled 12 models of Steelcase "Rocky" style swivel chairs manufactured between 2005 and 2015, due to a fall hazard.
Noteworthy products
Steelcase released Multiple 15 desks in 1946, which introduced standardized desk sizing and became a universal industry standard. Series 9000 was released in 1973 and became Steelcase's most popular line of office systems. The Leap chair, introduced in 1999, sold 5,000 units a week during its first year and became the company's most popular release. The ergonomic office chair was designed with eight adjustable areas for users to control, including chair height, armrest positioning, lumbar support, seat depth, and back positioning. The chair was developed over four years, cost $35 million to design, and resulted in 11 academic studies and 23 patents. The company released the Gesture chair in 2013, which is designed to support the way workers naturally sit.
Steelcase's push toward more sustainable design traces back to a 1945 Metal Office Furniture Company product line. The idea arose when Steelcase saw the need for furniture that could be tailored to custom-sized spaces and repaired when a part broke. The series grew to over 200 compatible arrangements for tables and desks, whose simple assembly of parts allowed components to be repaired, replaced, or recycled as many times as the user needed.
Brands
Subsidiaries include AMQ, Coalesse, Halcon, Orangebox, Smith System, and Viccarbe, as well as several other brands such as Steelcase Health and Education. The company established an office accessories brand called Details in 1990. In 1993, Steelcase launched Turnstone, a line of furniture designed for small businesses and home offices. Designtex, which produces interior textiles and upholstery, was acquired in 1998. Nurture was founded in 2006 to create products for the health care industry, including furniture and interiors for waiting rooms, offices, and clinics. The brand became Steelcase Health in 2014.
Steelcase merged three of its subsidiaries (Brayton International, Metro Furniture and Vecta) to form Coalesse in 2008. Coalesse products are meant for what the company calls "live/work" spaces, a result of the frequent overlap of home and office in modern working habits.
Company culture
In 1985, Steelcase purchased the Meyer May House designed by Frank Lloyd Wright and restored it, opening it to the public in 1987. A corporate art program has resulted in a collection including pieces by Pablo Picasso, Andy Warhol and Dale Chihuly.
The company employs a research group called WorkSpace Futures to study workplace trends. In 2010, Steelcase underwent a three-year project to update its Grand Rapids headquarters to promote employee productivity and employee well-being, including redesigning a cafeteria into an all-purpose work environment that provides food service and space for meetings, socializing, and independent work.
Steelcase's sustainability efforts have included reducing packaging, using regional facilities to reduce shipping distance, cutting greenhouse gas emissions and water consumption, and a goal to reduce its environmental footprint by 25 percent by 2020. As of 2012, Steelcase had reduced its waste by 80 percent, greenhouse gas emissions by 37 percent and water consumption by 54 percent since 2006. According to the company's WorkFutures group, the company also analyzes its supply chain and materials chemistry to determine product sustainability. As of 2014, the company led its industry in Cradle to Cradle-certified products. In 2016, Steelcase employees volunteered 38,913 hours and the Steelcase Foundation donated more than US$5.7 million.
Steelcase became Carbon Neutral on August 25, 2020, with the plan of becoming Carbon negative (eliminating more carbon than they produce) by 2030. As a company they have a focus on green chemistry and have stopped manufacturing with many chemicals like Polyvinyl chloride (PVC).
Awards
Company Awards
The company won the Editors' Choice award at the 2014 NeoCon product competition for "Quiet Spaces", a series of workspaces designed for introverts and a collaboration with Susan Cain, author of Quiet: The Power of Introverts in a World That Can't Stop Talking.
Steelcase was named The World's Most Admired Company by Forbes in 2018, 2019 and 2020. They earned the 2020 Civic Award.
Design Awards
2014 Steelcase's SOTO II Worktools won a Silver Award in the Office Accessories category from Editor's Choice.
2018 Best Large Showroom and Best of Competition at NeoCon
2019 Steelcase won the Red Dot Award in 2019 for their SILQ chair design.
2021 Best of NeoCon Gold and Best of NeoCon Innovation Awards
References
External links
Furniture companies of the United States
Manufacturing companies based in Grand Rapids, Michigan
Industrial design
Manufacturing companies established in 1912
1912 establishments in Michigan
Companies listed on the New York Stock Exchange
HL-2M
HL-2M is a research tokamak at the Southwestern Institute of Physics in Chengdu, China. It was completed on November 26, 2019 and commissioned on December 4, 2020. HL-2M is now used for nuclear fusion research, in particular to study heat extraction from the plasma. With a major radius of , the tokamak is a medium-scale device. The magnetic field of up to is created by non-superconducting copper coils.
References
Tokamaks
Fusion reactors
Russula cremoricolor
Russula cremoricolor, also known as the winter russula, is a species of gilled mushroom. This mushroom has red, cream-yellow, and pink color variants, which complicates attempts at field identification, although finding "red and creamy capped fruitbodies in close proximity is a good clue indicating this species". The winter russula is "mildly toxic," and causes intestinal distress even when consumed in small amounts. The red morph was previously identified as Russula silvicola, but was found to be genetically identical to the cream-colored individuals called R. cremoricolor. The red morph is superficially similar to Russula californiensis, but R. cremoricolor has a much sharper, more peppery taste, tends to associate with mixed forest or tanoak rather than pine, and keeps its gills and stipe white even in age.
See also
List of Russula species
Russula emetica
References
Fungi of North America
Fungi described in 1902
Fungus species
cremoricolor
Sarcodon quercinofibulatus
Sarcodon quercinofibulatus is a species of tooth fungus in the family Bankeraceae. Found in Spain, where it grows under Quercus petraea, it was described as new to science in 2011. The thick, fleshy caps of its fruit bodies are up to in diameter. The cap cuticle breaks up in age into concentric brown scales, revealing the cream-coloured brown flesh underneath. Spines on the underside of the cap are 5–8 mm long. They are initially cream, but become gray to grayish-brown in maturity. Application of a solution of potassium hydroxide turns the flesh grayish-green. The spores of S. quercinofibulatus are spherical, or nearly so, and typically measure 6.5–7.4 by 5.4–6.4 μm.
References
External links
Fungi described in 2011
Fungi of Europe
quercinofibulatus
Fungus species
Role-taking theory
Role-taking theory (or social perspective taking) is the social-psychological concept that one of the most important factors in facilitating social cognition in children is the growing ability to understand others' feelings and perspectives, an ability that emerges as a result of general cognitive growth. Part of this process requires that children come to realize that others' views may differ from their own. Role-taking ability involves understanding the cognitive and affective (i.e. relating to moods, emotions, and attitudes) aspects of another person's point of view, and differs from perceptual perspective taking, which is the ability to recognize another person's visual point of view of the environment. Furthermore, despite some mixed evidence on the issue, role taking and perceptual perspective taking seem to be functionally and developmentally independent of each other.
Robert Selman is noted for emphasizing the importance of this theory within the field of cognitive development. He argues that a matured role-taking ability allows us to better appreciate how our actions will affect others, and if we fail to develop the ability to role take, we will be forced to erroneously judge that others are behaving solely as a result of external factors. One of Selman's principal additions to the theory has been an empirically supported developmental theory of role-taking ability.
Social cognitive research on children's thoughts about others’ perspectives, feelings, and behaviors has emerged as one of the largest areas of research in the field. Role-taking theory can provide a theoretical foundation upon which this research can rest and be guided by and has relations and applications to numerous other theories and topics.
Selman's developmental theory
Robert Selman developed his developmental theory of role-taking ability based on four sources. The first is the work of M. H. Feffer (1959, 1971), and Feffer and Gourevitch (1960), which related role-taking ability to Piaget's theory of social decentering, and developed a projective test to assess children's ability to decenter as they mature. The second is the research of John H. Flavell (1968), which studied children's growing abilities to judge other people's conceptual and perceptual perspectives. The third is the developmental ideas of differentiation, whereupon one learns to distinguish his/her perspective from the perspectives of others, and integration, the ability to relate one's perspective to the perspectives of others. The final source of influence comes from Selman's own previous research where he assessed children's ability to describe the different perspectives of characters in a story.
One example of Selman's stories is that of Holly and her father. Children are told about Holly, an avid 8-year-old tree climber. One day, Holly falls off a tree, but does not hurt herself. Holly's father sees this and makes Holly promise that she will stop climbing trees, and Holly promises. Later, however, Holly and her friends meet Shawn, a boy whose kitten is stuck in a tree. Holly is the only one amongst her friends who can climb trees well enough to save Shawn's kitten, who may fall at any moment, but she remembers the promise she made with her father. Selman then goes on to ask children about the perspectives of Holly and her father, and each stage is associated with typical responses.
Stages
Level 0: Egocentric Role Taking
Level 0 (ages 3–6, roughly) is characterized by two lacking abilities. The first is the failure to distinguish perspectives (differentiation). More specifically, the child is unable to distinguish between his perspective, including his perspective on why a social action occurred, and that of others. The second ability the child lacks is relating perspectives (social integration).
In the Holly dilemma, children tend to respond that Holly will save the kitten and that the father will not mind Holly's disobedience because he will be happy and he likes kittens. In actuality, the child is displaying his/her inability to separate his/her liking for kittens from the perspectives of Holly and her father.
Level 1: Subjective role taking
At level 1 (ages 6–8, roughly), children now recognize that they and others in a situation may have different information available to them, and thus may differ in their views. In other words, children have matured in differentiation. The child still significantly lacks integration ability, however: he/she cannot understand that his views are influenced by the views of others, and vice versa, ad infinitum. In addition, the child believes that the sole reason for differing social perspectives is because of different information, and nothing else.
In the Holly dilemma, when asked if the father would be angry if he found out that Holly climbed the tree again, children might respond, “If he didn’t know why she climbed the tree, he would be angry. But if he knew why she did it, he would realize that she had a good reason,” not recognizing that the father may still be angry, regardless of her wanting to save the kitten, because of his own values, such as his concern for his daughter's safety.
Level 2: Self-reflective role taking
The child's differentiation ability matures at this level (ages 8–10, roughly) enough so that he/she understands that people can also differ in their social perspectives because of their particularly held and differing values and set of purposes. In turn, the child is able to better put him/herself in the position of another person. In terms of integration, the child can now understand that others think about his/her point of view too. This allows the child to predict how the other person might react to the child's behaviour. What is still lacking, however, is for the child to be able to consider another person's point of view and another person's point of view of the child simultaneously.
In the Holly dilemma, when children are asked if Holly will climb the tree, they will typically respond, “Yes. She knows that her father will understand why she did it.” This indicates the child is considering the father's perspective in light of Holly's perspective; however, when asked if the father would want Holly to climb the tree, children typically respond that he would not. This shows that the child is solely considering the father's point of view and his worry for Holly's safety.
Level 3: Mutual role taking
In level 3 (ages 10–12, roughly), the child can now differentiate his/her own perspective from the viewpoint likely for the average member of the group. In addition, the child can take the view of a detached third-person and view a situation from that perspective. In terms of integration, the child can now simultaneously consider his/her view of others and others’ view of the child, and the consequences of this feedback loop of perspectives in terms of behaviour and cognition.
In describing the result of the Holly dilemma, the child may take the perspective of a detached third party, responding that "Holly wanted to get the kitten because she likes kittens, but she knew that she wasn't supposed to climb trees. Holly's father knew that Holly had been told not to climb trees, but he couldn't have known about [the kitten]."
Level 4: Societal role taking
At level 4 (ages 12–15+, roughly), the adolescent now considers others’ perspectives with reference to the social environment and culture the other person comes from, assuming that the other person will believe and act in accord to their society's norms and values.
When asked if Holly deserves to be punished for her transgression, adolescents typically respond that Holly should not as her father should understand that we need to humanely treat animals.
Evidence for Selman's Stages
Three studies have been conducted to assess Selman's theory, all of which having shown support for his developmental outline of role-taking ability progression. Selman conducted the first study of his own theory using 60 middle-class children from ages 4 to 6. In this experiment, the children were asked to predict and explain their predictions about another child's behaviour in a certain situation. The child participants were given situational information not available to the child they were making behavioural and cognitive predictions about. Results implied a stage progression of role taking ability as a function of age, as theorized by Selman.
In a second assessment of the theory, Selman and D. F. Byrne interviewed 40 children, ages 4, 6, 8, and 10, on two socio-moral dilemmas. Children were required to discuss the perspectives of different characters in each dilemma, and results again showed that role taking ability progressed through stages as a function of age.
The third study assessing Selman's theory was a 5-year longitudinal study of 41 male children on their role taking ability. Results showed that 40 of the 41 children interviewed followed the stages as outlined by Selman and none skipped over a stage.
Relation to other topics
Piaget's theory of cognitive development
Jean Piaget stressed the importance of play in children, especially play that involves role taking. He believed that role taking play in children promotes a more mature social understanding by teaching children to take on the roles of others, allowing them to understand that different people can have differing perspectives. In addition, Piaget argued that good solutions to interpersonal conflicts involve compromise which arises out of our ability to consider the points of view of others.
Two of Piaget's fundamental concepts have primarily influenced role taking theory:
egocentrism, the mode of thinking that characterizes preoperational thinking, which is the child's failure to consider the world from other points of view.
decentration, the mode of thinking that characterizes operational thinking, which is the child's growing ability to perceive the world with more than one perspective in mind.
In Piagetian theory, these concepts were used to describe solely cognitive development, but they have been applied in role taking theory to the social domain.
Evidence that Piaget's cognitive theories can be applied to the interpersonal aspects of role-taking theory comes from two sources. The first is empirical evidence that children's ability to role take is correlated to their IQ and performance on Piagetian tests. Secondly, the two theories have been conceptually linked in that Selman's role-taking stages correspond to Piaget's cognitive development stages, where preoperational children are at level 1 or 2, concrete operators are at level 3 or 4, and formal operators are at level 4 or 5 of Selman's stages. Given this relation, M. H. Feffer, as well as Feffer and V. Gourevitch, have argued that social role-taking is an extension of decentering in the social sphere. Selman has argued this same point, also noting that the growth of role-taking ability is brought on by the child's decreased egocentrism as he/she ages.
Kohlberg's stages of moral development
Lawrence Kohlberg argued that higher moral development requires role-taking ability. For instance, Kohlberg's conventional level of morality (between ages 9 and 13, roughly), involves moral stereotyping, empathy-based morality, alertness to and behaviour guided by predicted evaluations by others, and identifying with authority, all of which require role taking.
Selman tested 60 children, ages 8 to 10, on Kohlberg's moral-judgment measure and two role-taking tasks. He found that the development of role taking, within this age range, related to the progression into Kohlberg's conventional moral stage. A retest a year later confirmed Kohlberg's argument, and in general, it was shown that higher moral development at the conventional stage requires children's achieved ability at this age to reciprocally deal with their own and others’ perspectives. Mason and Gibbs (1993) found that moral judgment development, as measured by Kohlberg's theory, consistently related to role taking opportunities experienced after childhood in adolescence and adulthood. This finding supported Kohlberg's view that moral progress beyond his third stage necessitates contact with other perspectives, namely those of entire cultures or political groups, which individuals are likely to encounter as they become adolescents and adults and thus meet many different people in school and the workplace.
Relation between Kohlberg’s stages, Piaget’s theory, and Selman’s theory
Kohlberg and Piaget both emphasized that role taking ability facilitates moral development. Kohlberg argued that cognitive and role-taking development are required but not sufficient for moral development. In turn, he maintained that Piaget's cognitive developmental stages underlie Selman's role taking stages, which are subsequently fundamental to his own moral developmental stages. This predicts that cognition develops first, followed by the corresponding role taking stage, and finally the corresponding moral stage, and never the other way around.
Conceptually, the three processes have been tied together by Walker (1980). His reasoning is that cognitive development involves the progressive understanding of the environment as it is. Role-taking is a step upon this, which is the recognition that people each have their own subjective interpretation of the environment, including how they think about and behave towards other people. Moral development, the final step, is the grasping of how people should think and behave towards one another.
Evidence in support of this view comes first from three reviews which showed moderate correlations between Selman's role taking theory, Piaget's cognitive developmental stages, and Kohlberg's moral developmental stages. More evidence comes from Walker and Richards' (1979) finding that moral development to Kohlberg's stage 4 occurred only for those who already had early basic formal operations according to Piaget's developmental theory, and not for those in an earlier stage. Similarly, Paolitto's attempts to stimulate moral development worked only for subjects who had already attained the corresponding role taking stage. Previous research has also shown that short role taking treatments, such as exposing subjects to the role taking reasoning of subjects one stage higher, can lead to moral development. In more general demonstrations of this argument, Faust and Arbuthnot and other researchers have shown that moral development is most probable for subjects with higher cognitive development.
In a direct investigation of Kohlberg's necessary but not sufficient argument, Walker tested the hypothesis that only children who had attained both beginning formal operations and role taking stage 3 could progress to Kohlberg's moral stage 3. 146 grade 4-7 children participated in this study, and the results strongly supported the hypothesis, given that only children who had the beginning formal substage of cognitive development and role taking stage 3 progressed to moral stage 3. Further support came from the study's demonstration that a short role playing treatment stimulated progress in moral reasoning in a 6-week follow-up retest. Krebs and Gilmore also directly tested Kohlberg's necessary but not sufficient argument of moral development in 51 children, ages 5–14, for the first three stages of cognitive, role taking, and moral development. Results generally supported Kohlberg's view, but not as strongly, given that it was only demonstrated that cognitive development is a prerequisite for role taking development, and not for moral development. Based on these results, researchers have suggested that moral education programs underlain by Kohlberg's theory must first ensure that the prerequisite cognitive and role taking abilities have developed.
Prosocial behavior
Role-taking ability has been argued to be related to prosocial behaviours and feelings. Evidence for this claim has been found from many sources. Underwood and Moore (1982), for instance, have found that perceptual, affective, and cognitive perspective-taking are positively correlated with prosocial behaviour. Children trained to improve their role-taking ability subsequently become more generous, more cooperative, and more apprehensive to the needs of others in comparison to children who received no role taking training. Research has also shown that people who are good at role-taking have greater ability to sympathize with others. Overall, the picture is clear: prosocial behaviour is related to role taking ability development and social deviance is linked to egocentrism.
To study one reason for the link between role-taking ability and prosociality, second-grade children found to be either high or low in role-taking were instructed to teach two kindergartners on an arts and crafts task. Sixteen measures of prosocial behaviour were scored, and high and low role takers diverged on 8 of the measures, including several helping measures, providing options, and social problem solving. Analysis of the results showed that low role takers helped less than high role takers not because of a lack of wanting to help, but because of their poorer ability in interpreting social cues indicating the need for help. In other words, low role takers tended to only be able to recognize problems when they were made plainly obvious.
Role taking has also been related to empathy. Batson had participants listen to an interview of a woman going through hardship. He then instructed participants to imagine how she feels, or, to imagine how they would feel in her situation, and found that both conditions produced feelings of empathy. Schoenrade has found the same result, where imagining how a person in distress feels or how one would feel in that person's situation produces feelings of empathy.
Finally, many theorists, including Mead, Piaget, Asch, Heider, Deutsch, Madsen, and Kohlberg have theorized a relationship between cooperation and role taking ability. In one study, children's predisposition to cooperate was shown to strongly correlate with their affective role taking ability. Other researchers have also shown an indirect relationship between cooperation and role-taking capacity.
Social functioning
A child's ability to function in social relationships has been found to depend partially on his/her role-taking ability. For instance, researchers found that children poor in role-taking ability had greater difficulty in forming and sustaining social relationships, as well as receiving lower peer nominations. Davis (1983) found that role-taking ability was positively correlated with social understanding. In general, progress in role-taking ability has shown to be beneficial for one's personal and interpersonal life.
Better functioning in the interpersonal domain is particularly shown in the relation between role-taking ability and social problem solving ability. Role playing has been shown to improve male teenagers’ handling of social problem tasks. Gehlbach (2004) found a similar supporting result, demonstrating that adolescents with better role taking abilities had superior ability in conflict resolution. Many other researchers have also found that role taking ability development positively affects interpersonal problem solving skills. Additionally, role taking can promote better social functioning in the interpersonal domain through smoothening social interactions by improving behavioural mimicking ability.
Training children on role-taking ability can improve interpersonal functioning as well. In one study, preschoolers were made to role play interpersonal conflicts using puppets. Their task at the end was to discuss alternative solutions to the problems and how each solution would affect each character. Over the 10 weeks that the preschoolers participated in this role playing, their solutions became less aggressive and their classroom adjustment became better. Moreover, the use of role reversal in interpersonal problem situations has been found to stimulate cooperation, help participants better understand each other and each other's arguments and position, elicit new interpretations of the situation, change attitudes about the problem, and improve perceptions about the other person's efforts at solving the issue, willingness to compromise and cooperate, and trustworthiness. As a result of this research, it has been suggested that one way to improve cooperative skills is to improve affective role taking abilities.
Role-taking can also work to decrease prejudice and stereotyping. Importantly, the decrease in prejudice and stereotyping occurs for both the target individual and the target's group. In addition, role taking ability has been demonstrated to decrease social aggression.
Applications
Attention Deficit Hyperactivity Disorder (ADHD)
Children with ADHD struggle in their social environments, but the social-cognitive reasons for this are unknown. Several studies have indicated a difference between children with and without ADHD on their role taking ability, wherein children with ADHD have lower role taking ability, lower role taking use, and slower role taking development than children without ADHD. Given these results, it has been suggested that children with ADHD be trained on role taking to improve their social skills, including their often comorbid oppositional and conduct problems.
Delinquency and social-skills training
The relationship between childhood and adolescent delinquency and role taking is considerable. Burack found that maltreated children and adolescents with behavioural problems exhibited egocentrism at higher levels than non-maltreated children and adolescents who had progressed faster and more expectedly in their role taking development. Chandler (1973) found that chronically delinquent boys exhibited lower role taking abilities so much so that their role taking ability was comparable to the role taking ability scores of non-delinquent children nearly half their age. In turn, one-third of the delinquent boys in this study were assigned to a treatment program designed to improve role taking skills. Post-treatment measures demonstrated that the program successfully induced role taking ability progress in this group, and an 18-month follow-up assessment found a nearly 50% decrease in delinquent behaviours following these progresses in role taking skills. The same has been found for delinquent girls. Chalmers and Townsend trained delinquent girls, ages 10–16, on role taking skills over 15 sessions, following which the girls demonstrated improved understanding of interpersonal situations and problems, greater empathy, more acceptance of individual differences, and exhibited more prosociality in the classroom. The overall picture, then, is that role-taking training can help delinquent youth and youth with conduct disorders as they lag behind in role-taking ability.
Autism
Several researchers have argued that the deficits in the social lives, communication ability, and imagination of autistic children are a result of their deficiencies in role taking. It is believed that autistic children's inability to role take prevents them from developing a theory of mind. Indeed, role taking has been described as the theory of mind in action. Failing to role take and failing to develop a theory of mind may lead autistic children to use only their own understanding of a situation to predict others’ behaviour, resulting in deficits in social understanding.
In support, two studies found shortcomings in role-taking ability in autistic children in comparison to controls. Another study found that lower ability in role taking related significantly with the lower social competency in autistic children. In particular, the autistic children in the study could not focus concurrently on different cognitions required for successful role taking and proficient social interaction. More specifically, Dawson and Fernald found that conceptual role-taking related most to the social deficits and severity of autism experienced by autistic children, while affective role taking was related only to the severity of autism.
Criticism
The main criticism of Selman's role-taking theory is that it focuses too much on the effect of cognitive development on role-taking ability and social cognition, thereby overlooking the non-cognitive factors that affect children's abilities in these domains. For instance, social experiences, such as disagreements between close friends, have been found to foster role taking skills and social cognitive growth. In addition, parental influence amongst sibling conflicts matters, as mothers who act as mediators to help solve sibling disagreements have been found to promote role taking skills and social cognitive maturation.
See also
Role theory
References
Behavioral concepts
Cognitive science
Cognition
Enactive cognition
Social learning theory
Role theory
Social philosophy
Cyanopindolol
Cyanopindolol is a drug related to pindolol which acts as both a β1 adrenoceptor antagonist and a 5-HT1A receptor antagonist. Its radiolabelled derivative iodocyanopindolol has been widely used in mapping the distribution of beta adrenoreceptors in the body.
References
5-HT1A antagonists
Beta blockers
Indoles
Nitriles
N-tert-butyl-phenoxypropanolamines
Asbestos cement
Asbestos cement, genericized as fibro, fibrolite (short for "fibrous (or fibre) cement sheet"; but different from the natural mineral fibrolite), or AC sheet, is a composite building material consisting of cement and asbestos fibres pressed into thin rigid sheets and other shapes.
Invented at the end of the 19th century, the material was adopted extensively during World War II to make easily-built, sturdy and inexpensive structures for military purposes. It continued to be used widely following the war as an affordable external cladding for buildings. Advertised as a fireproof alternative to other roofing materials such as asphalt, asbestos-cement roofs were popular, not only for safety but also for affordability. Due to asbestos cement's imitation of more expensive materials such as wood siding and shingles, brick, slate, and stone, the product was marketed as an affordable renovation material. Asbestos cement competed with aluminum alloy, available in large quantities after WWII, and the reemergence of wood clapboard and vinyl siding in the mid to late 20th century.
Asbestos cement is usually formed into flat or corrugated sheets or into pipes, but can be molded into any shape that can be formed using wet cement. In Europe, cement sheets came in a wide variety of shapes, while there was less variation in the US, due to labor and production costs. Although fibro was used in a number of countries, in Australia and New Zealand its use was most widespread. Predominantly manufactured and sold by James Hardie until the mid-1980s, fibro in all its forms was a popular building material, largely due to its durability. The reinforcing fibres used in the product were almost always asbestos.
The use of fibro that contains asbestos has been banned in several countries, including Australia, but the material was discovered in new components sold for construction projects.
Health effects
When exposed to weathering and erosion, particularly when used on roofs, the surface deterioration of asbestos cement can release toxic airborne fibres. Exposure to asbestos causes or increases the risk of several life-threatening diseases, including asbestosis, pleural mesothelioma (lung), and peritoneal mesothelioma (abdomen).
Safer asbestos-free fibre cement sheet is still readily available, but the reinforcing fibres are cellulose. The name "fibro" is still traditionally applied to fibre cement.
Products used in the building industry
Roofs - most usually on industrial or farmyard buildings and domestic garages.
Flat sheets for house walls and ceilings were usually thick, wide, and from long.
Battens wide × thick, used to cover the joints in fibro sheets.
"Super Six" corrugated roof sheeting and fencing.
Internal wet area sheeting, "Tilux".
Pipes of various sizes for water reticulation and drainage. Drainage pipes tend to be made of pitch fibre, with asbestos cement added to strengthen.
Moulded products ranging from plant pots to outdoor telephone cabinet roofs and cable pits.
Cleaning of asbestos cement
Some Australian states, such as Queensland, prohibit the cleaning of fibro with pressure washers, because it can spread the embedded asbestos fibres over a wide area. Safer cleaning methods involve using a fungicide and a sealant.
In popular culture
The 1973 song, "Way Out West", by The Dingoes, later covered by James Blundell & James Reyne, mentions living in a "house made of fibro cement". Fibro is also referred to several times on the Australian TV show Housos.
See also
Cemesto
Eternit
Fibre cement
Transite, a brand of fibre cement originally produced as asbestos cement
References
External links
Fibro and Asbestos - A Renovator and Homeowner's Guide, NSW (archived 2013)
Advice if you have FAC in your home
Building materials
Asbestos
Multipole density formalism
The Multipole Density Formalism (also referred to as Hansen-Coppens Formalism) is an X-ray crystallography method of electron density modelling proposed by Niels K. Hansen and Philip Coppens in 1978. Unlike the commonly used Independent Atom Model, the Hansen-Coppens Formalism presents an aspherical approach, allowing one to model the electron distribution around a nucleus separately in different directions and therefore describe numerous chemical features of a molecule inside the unit cell of an examined crystal in detail.
Theory
Independent Atom Model
The Independent Atom Model (abbreviated to IAM), upon which the Multipole Model is based, is a method of charge density modelling. It relies on an assumption that electron distribution around the atom is isotropic, and that therefore charge density is dependent only on the distance from a nucleus. The choice of the radial function used to describe this electron density is arbitrary, granted that its value at the origin is finite. In practice either Gaussian- or Slater-type 1s-orbital functions are used.
Due to its simplistic approach, this method provides a straightforward model that requires no additional parameters (other than positional and Debye–Waller factors) to be refined. This allows the IAM to perform satisfactorily while a relatively low amount of data from the diffraction experiment is available. However, the fixed shape of the singular basis function prevents any detailed description of aspherical atomic features.
Kappa Formalism
In order to adjust some valence shell parameters, the Kappa formalism was proposed. It introduces two additional refineable parameters: an outer shell population (denoted as P_v) and its expansion/contraction (κ). The electron density is therefore formulated as:
$$\rho_{\text{atom}}(r) = \rho_{\text{core}}(r) + P_v\,\kappa^3\,\rho_{\text{valence}}(\kappa r)$$
While P_v, being responsible for the charge flow part, is linearly coupled with partial charge, the normalised parameter κ scales the radial coordinate r. Therefore, lowering the κ parameter results in expansion of the outer shell and, conversely, raising it results in contraction. Although the Kappa formalism is still, strictly speaking, a spherical method, it is an important step towards understanding modern approaches, as it allows one to distinguish chemically different atoms of the same element.
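A toy numerical sketch (the Slater-type functional form, the parameter values, and all names below are illustrative assumptions, not the published functions): the κ³ prefactor keeps the integrated valence charge equal to P_v, so that κ only expands (κ < 1) or contracts (κ > 1) the shell without changing the charge it carries:

```python
# Kappa scaling of a normalized Slater-type valence density (illustrative).
import numpy as np

def rho_valence(r, alpha=2.0):
    """Normalized Slater-type 1s density: alpha^3/(8*pi) * exp(-alpha*r)."""
    return alpha**3 / (8 * np.pi) * np.exp(-alpha * r)

def rho_kappa(r, P_v=1.0, kappa=1.0):
    """Valence term of the Kappa formalism: P_v * kappa^3 * rho(kappa*r)."""
    return P_v * kappa**3 * rho_valence(kappa * r)

r = np.linspace(0.0, 20.0, 200001)
for kappa in (0.8, 1.0, 1.2):
    # Radial integration: total charge = integral of rho * 4*pi*r^2 dr,
    # which stays equal to P_v for every kappa.
    q = np.trapz(rho_kappa(r, 1.0, kappa) * 4 * np.pi * r**2, r)
    print(f"kappa = {kappa}: integrated charge = {q:.4f}")
```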
Multipole description
In the multipole model description, the charge density around a nucleus is given by the following equation:
$$\rho_{\text{atom}}(\mathbf{r}) = P_c\,\rho_{\text{core}}(r) + P_v\,\kappa^3\,\rho_{\text{valence}}(\kappa r) + \sum_{l=0}^{l_{\max}} \kappa'^3 R_l(\kappa' r) \sum_{m=0}^{l} P_{lm\pm}\, d_{lm\pm}(\theta, \varphi)$$
The spherical part remains almost indistinguishable from the Kappa formalism, the only difference being the parameter P_c corresponding to the population of the inner shell. The real strength of the Hansen-Coppens formalism lies in the right, deformational part of the equation. Here κ′ fulfils a role similar to κ in the Kappa formalism (expansion/contraction of the aspherical part), whereas the individual R_l are fixed spherical functions, analogous to ρ_valence. Spherical harmonics d_lm± (each with its populational parameter P_lm±) are, however, introduced to simulate the electrically anisotropic charge distribution.
In this approach, a fixed coordinate system for each atom needs to be applied. Although at first glance it seems practical to arbitrarily and indiscriminately make it contingent on the unit cell for all atoms present, it is far more beneficial to assign each atom its own local coordinates, which allows for focusing on hybridisation-specific interactions. While the singular sigma bond of the hydrogen can be described well using certain z-parallel pseudoorbitals, xy-plane oriented multipoles with a 3-fold rotational symmetry will prove more beneficial for flat aromatic structures.
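To make the deformation term concrete, here is a toy sketch of a single dipole (l = 1, m = 0) contribution in a local atomic frame; the radial function and normalization are illustrative assumptions, not the density-normalized functions used in practice:

```python
# One deformation term: a z-oriented dipole population P_10 with an
# assumed Slater-type radial part R_1(x) = x * exp(-a*x).
import numpy as np

def deformation(x, y, z, P_10=0.2, kappa_p=1.0, a=2.0):
    r = np.sqrt(x * x + y * y + z * z) + 1e-12   # avoid division by zero
    R1 = (kappa_p * r) * np.exp(-a * kappa_p * r)  # radial part R_1(kappa'*r)
    d10 = z / r                                    # cos(theta): z-dipole shape
    return P_10 * kappa_p**3 * R1 * d10

# The dipole adds density on the +z side of the nucleus and removes it on
# the -z side -- exactly the asphericity a bonding direction needs.
print(deformation(0.0, 0.0, 0.5))    # positive lobe
print(deformation(0.0, 0.0, -0.5))   # negative lobe
```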
Applications
The primary advantage of the Hansen-Coppens formalism is its ability to free the model from spherical restraints and describe the surroundings of a nucleus far more accurately. In this way it becomes possible to examine some molecular features which would normally be only roughly approximated or completely ignored.
Hydrogen positioning
X-ray crystallography allows the researcher to precisely determine the position of peak electron density and to reason about the placement of nuclei based on this information. This approach works without any problems for heavy (non-hydrogen) atoms, whose inner shell electrons contribute to the density function to a far greater degree than outer shell electrons.
However, hydrogen atoms possess a feature unique among all the elements: they have exactly one electron, which is located in their valence shell and is therefore involved in creating strong covalent bonds with atoms of various other elements. While a bond is forming, the maximum of the electron density function moves significantly away from the nucleus and towards the other atom. This prevents any spherical approach from determining the hydrogen position correctly by itself. Therefore, the hydrogen position is usually estimated based on neutron crystallography data for similar molecules, or it is not modelled at all in the case of low-quality diffraction data.
It is possible (albeit disputable) to freely refine hydrogen atoms' positions using the Hansen-Coppens formalism, after releasing the bond lengths from any restraints derived from neutron measurements. The bonding orbital simulated with adequate multipoles describes the density distribution neatly while preserving believable bond lengths. It may be worth approximating hydrogen atoms' anisotropic displacement parameters, e.g. using SHADE, before introducing the formalism and, possibly, discarding bond distance constraints.
Bonding modelling
In order to analyse the length and strength of various interactions within the molecule, Richard Bader's "atoms in molecules" theory may be applied. Due to the complex description of the electron field provided by this aspherical model, it becomes possible to establish realistic bond paths between interacting atoms as well as to find and characterise their critical points. Deeper insight into these data yields useful information about bond strength, type, polarity or ellipticity, and when compared with other molecules brings greater understanding about the actual electron structure of the examined compound.
Charge flow
Because the population of each multipole of every atom is refined independently, individual charges will rarely be integers. In real cases, electron density flows freely through the molecule and is not bound by any restrictions resulting from the outdated Bohr atom model and found in the IAM. Therefore, through e.g. an accurate Bader analysis, net atomic charges may be estimated, which again is beneficial for deepening the understanding of the systems under investigation.
Drawbacks and limitations
Although the Multipole Formalism is a simple and straightforward alternative means of structure refinement, it is definitely not flawless. While usually for each atom either three or nine parameters are to be refined, depending on whether an anisotropic displacement is being taken into account or not, a full multipole description of heavy atoms belonging to the fourth and subsequent periods (such as chlorine, iron or bromine) requires refinement of up to 37 parameters. This proves problematic for any crystals possessing large asymmetric units (especially macromolecular compounds) and renders a refinement using the Hansen-Coppens Formalism unachievable for low-quality data with an unsatisfactory ratio of independent reflections to refined parameters.
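One plausible bookkeeping that reproduces the counts quoted above (Python; exact conventions differ between refinement programs, so this particular split is an assumption): three positional coordinates, six anisotropic displacement parameters, the populations of the monopole through hexadecapole (l = 0 to 4) levels, and the valence population plus the two radial scaling parameters.

# per-atom parameter bookkeeping (illustrative convention)
positions = 3                                              # x, y, z
adps = 6                                                   # U11, U22, U33, U12, U13, U23
multipole_populations = sum(2 * l + 1 for l in range(5))   # l = 0..4 -> 25
valence_and_radial = 3                                     # P_val, kappa, kappa'

print("IAM, positions only:", positions)                                 # 3
print("IAM, with anisotropic displacement:", positions + adps)           # 9
print("full multipole model:",
      positions + adps + valence_and_radial + multipole_populations)     # 37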
Caution should be taken while refining some of the parameters simultaneously (i.e. κ or κ′, multipole populations and thermal parameters), as they may correlate strongly, resulting in an unstable refinement or unphysical parameter values. Applying additional constraints resulting from local symmetry for each atom in a molecule (which decreases the number of refined multipoles) or importing populational parameters from existing databases may also be necessary to achieve a passable model. On the other hand, the aforementioned approaches significantly reduce the amount of information required from experiments, while preserving some level of detail concerning aspherical charge distribution. Therefore, even macromolecular structures with satisfactory X-ray diffraction data can be modelled aspherically in a similar fashion.
Despite their similarity, individual multipoles do not correspond to atomic projections of molecular orbitals of a wavefunction as resulting from quantum calculations. Nevertheless, as brilliantly summarized by Stewart, "The structure of the model crystal density, as a superposition of pseudoatoms [...] does have quantitative features which are close to many results based on quantum chemical calculations". If the overlap between the atomic wavefunctions is small enough, as it occurs for example in transition metal complexes, the atomic multipoles may be correlated with the atomic valence orbitals and multipolar coefficients may be correlated with populations of metal d-orbitals.
A stronger correlation between the X-ray measured diffracted intensities and quantum mechanical wavefunctions is possible using the wavefunction based methods of Quantum Crystallography, as for example the X-ray atomic orbital model, the so-called experimental wavefunction or the Hirshfeld Atom Refinement.
References
Theoretical chemistry
X-ray crystallography
Crystallography
Diffraction | Multipole density formalism | Physics,Chemistry,Materials_science,Engineering | 1,774 |
27,662,186 | https://en.wikipedia.org/wiki/Opioid%20food%20peptides | Opioid food peptides include:
Casomorphin (from milk)
Gluten exorphin (from gluten)
Gliadorphin/gluteomorphin (from gluten)
Rubiscolin (from spinach)
Soymorphin-5 (from soy)
Oryzatensin (from rice)
Peptides
Opioids | Opioid food peptides | Chemistry,Biology | 82 |
3,525,088 | https://en.wikipedia.org/wiki/Crisis%20of%20the%20late%20Middle%20Ages | The crisis of the late Middle Ages was a series of events in the 14th and 15th centuries that ended centuries of European stability during the late Middle Ages. Three major crises led to radical changes in all areas of society: demographic collapse, political instability, and religious upheavals.
The Great Famine of 1315–1317 and the Black Death of 1347–1351 potentially reduced the European population by half or more as the Medieval Warm Period came to a close and the first century of the Little Ice Age began. It took until 1500 for the European population to regain the levels of 1300. Popular revolts in late medieval Europe and civil wars between nobles such as the English Wars of the Roses were common, with France fighting internally nine times. There were also international conflicts between kingdoms such as France and England in the Hundred Years' War.
The unity of the Catholic Church was shattered by the Western Schism. The Holy Roman Empire was also in decline. In the aftermath of the Great Interregnum (1247–1273), the empire lost cohesion and the separate dynasties of the various German states became more politically important than their union under the emperor.
Historiography
The expression "crisis of the late Middle Ages" is commonly used in western historiography, especially in English and German, and somewhat less in other western European scholarship, to refer to the array of crises besetting Europe in the 14th and 15th centuries. The expression often carries a modifier to specify it, such as the urban crisis of the late Middle Ages, or the cultural, monastic, religious, social, economic, intellectual, or agrarian crisis, or a regional modifier, such as the Catalan or French crisis.
By 1929, the French historian Marc Bloch was already writing about the effects of the crisis, and by mid-century there were academic debates being held about it. In his 1981 article "Late Middle Age Agrarian Crisis or Crisis of Feudalism?", Peter Kriedte reprises some of the early works in the field from historians writing in the 1930s, including Marc Bloch, Henri Pirenne, Wilhelm Abel, and Michael Postan. Referring to the crisis in Italy as the "Crisis of the 14th Century", Giovanni Cherubini alluded to the debate that already by 1974 had been going on "for several decades" in French, British, American, and German historiography.
Arno Borst (1992) states that it "is a given that fourteenth century Latin Christianity was in a crisis", goes on to say that the intellectual aspects and how universities were affected by the crisis is underrepresented in the scholarship hitherto ("When we discuss the crisis of the late Middle Ages, we consider intellectual movements beside religious, social, and economic ones"), and gives some examples.
Some question whether "crisis" is the right expression for the period at the end of the Middle Ages and the transition to Modernity. In his 1981 article "The End of the Middle Ages: Decline, Crisis or Transformation?" Donald Sullivan addresses this question, claiming that scholarship has neglected the period and viewed it largely as a precursor to subsequent climactic events such as the Renaissance and Reformation.
In his "Introduction to the History of the Middle Ages in Europe", Mitre Fernández wrote in 2004: "To talk about a general crisis of the late Middle Ages is already a commonplace in the study of medieval history."
Heribert Müller, in his 2012 book on the religious crisis of the late Middle Ages, discussed whether the term itself was in crisis:
In his 2014 historiographical article about the crisis in the Middle Ages, Peter Schuster quotes the historian Léopold Genicot's 1971 article "Crisis: From the Middle Ages to Modern Times": "Crisis is the word which comes immediately to the historian's mind when he thinks of the fourteenth and the fifteenth centuries."
Demography
The Medieval Warm Period ended sometime towards the end of the 13th century. This marked the start of the Little Ice Age, which resulted in harsher winters with reduced harvests. In Northern Europe, new technological innovations such as the heavy plough and the three-field system were not as effective in clearing new fields for harvest as they were in the Mediterranean because the north had poor, clay-like soil. Food shortages and rapidly inflating prices were a fact of life for as much as a century before the plague. Wheat, oats, hay and consequently livestock were all in short supply.
Their scarcity resulted in malnutrition, which increases susceptibility to infections due to weakened immune systems. In the autumn of 1314, heavy rains began to fall, which were the start of several years of cold and wet winters. The already weak harvests of the north suffered, and a seven-year famine ensued. In the years 1315 to 1317, a catastrophic famine, known as the Great Famine, struck much of North West Europe. It was arguably the worst in European history, perhaps reducing the population by more than 10%.
Most governments instituted measures that prohibited exports of foodstuffs, condemned black market speculators, set price controls on grain and outlawed large-scale fishing. At best, they proved mostly unenforceable and at worst they contributed to a continent-wide downward spiral. The hardest hit lands, like England, were unable to buy grain from France because of the prohibition, and from most of the rest of the grain producers because of crop failures from shortage of labor. Any grain that could be shipped was eventually taken by pirates or looters to be sold on the black market.
Meanwhile, many of the largest countries, most notably England and Scotland, had been at war. This resulted in them using up much of their treasuries and creating inflation. In 1337, on the eve of the first wave of the Black Death, England and France went to war in what became known as the Hundred Years' War. This situation was worsened when landowners and monarchs such as Edward III of England (r. 1327–1377) and Philip VI of France (r. 1328–1350), raised the fines and rents of their tenants out of a fear that their comparatively high standard of living would decline.
When a typhoid epidemic emerged, many thousands died in populated urban centres, most significantly Ypres (now in Belgium). In 1318, a pestilence of unknown origin, which some contemporary scholars now identify as anthrax, targeted the animals of Europe. Sheep and cattle were particularly affected, further reducing the food supply and income of the peasantry.
Little Ice Age and the Great Famine
As Europe moved out of the Medieval Warm Period and into the Little Ice Age, a decrease in temperature and a great number of devastating floods disrupted harvests and caused mass famine. The cold and the rain proved to be particularly disastrous from 1315 to 1317 in which poor weather interrupted the maturation of many grains and beans, and flooding turned fields rocky and barren. Scarcity of grain caused price inflation, as described in one account of grain prices in Europe in which the price of wheat doubled from twenty shillings per quarter in 1315 to forty shillings per quarter by June of the following year. Grape harvests also suffered, which reduced wine production throughout Europe. The wine production from the vineyards surrounding the Abbey of Saint-Arnould in France decreased as much as eighty percent by 1317. During this climatic change and subsequent famine, Europe's cattle were struck with The Great Bovine Pestilence, a pathogen of unknown identity.
The pathogen spread throughout Europe from Eastern Asia in 1315 and reached the British Isles by 1319. Manorial accounts of cattle populations in the year 1319–20 indicate a 62 percent loss in England and Wales alone. In these countries, some correlation can be found between the places where poor weather reduced crop harvests and places where the bovine population was particularly negatively affected. It is hypothesized that both low temperatures and lack of nutrition lowered the cattle populations' immune systems and made them vulnerable to disease. The mass death and illness of cattle drastically affected dairy production, and the output did not return to its pre-pestilence amount until 1331. Much of the medieval peasants' protein was obtained from dairy, and milk shortages likely caused nutritional deficiency in the European population. Famine and pestilence, exacerbated by the prevalence of war during this time, led to the death of an estimated ten to fifteen percent of Europe's population.
Climate change and plague pandemic correlation
The Black Death was a particularly devastating epidemic in Europe during this time, and is notable due to the number of people who succumbed to the disease within the few years the disease was active. It was fatal to an estimated thirty to sixty percent of the population where the disease was present. While there is some question of whether it was a particularly deadly strain of Yersinia pestis that caused the Black Death, research indicates no significant difference in bacterial phenotype. Thus, environmental stressors are considered when hypothesizing the deadliness of the Black Plague, such as crop failures due to changes in weather, the subsequent famine, and an influx of host rats into Europe from China.
Popular revolt
There were some popular uprisings in Europe before the 14th century, but these were local in scope, for example uprisings at a manor house against an unpleasant overlord. This changed in the 14th and 15th centuries when new downward pressures on the poor resulted in mass movements and popular uprisings across Europe. To indicate how common and widespread these movements became, in Germany between 1336 and 1525 there were no fewer than sixty phases of militant peasant unrest.
Malthusian hypothesis
Scholars such as David Herlihy and Michael Postan use the term Malthusian limit to explain some calamities as results of overpopulation. In his 1798 Essay on the Principle of Population, Thomas Malthus asserted that exponential population growth will invariably exceed available resources, making mass death inevitable. In his book The Black Death and the Transformation of the West, David Herlihy explores whether the plague was an inevitable crisis of population and resources. In The Black Death; A Turning Point in History? (ed. William M. Bowsky), he "implies that the Black Death's pivotal role in late medieval society... was now being challenged. Arguing on the basis of a neo-Malthusian economics, revisionist historians recast the Black Death as a necessary and long overdue corrective to an overpopulated Europe."
Herlihy also examined the arguments against the Malthusian crisis, stating "if the Black Death was a response to excessive human numbers it should have arrived several decades earlier" in consequence of the population growth before the Black Death. Herlihy also brings up other, biological factors that argue against the plague as a "reckoning" by arguing "the role of famines in affecting population movements is also problematic. The many famines preceding the Black Death, even the 'great hunger' of 1315 to 1317, did not result in any appreciable reduction in population levels". Herlihy concludes the matter stating, "the medieval experience shows us not a Malthusian crisis but a stalemate, in the sense that the community was maintaining at stable levels very large numbers over a lengthy period" and states that the phenomenon should be referred to as more of a deadlock, rather than a crisis, to describe Europe before the epidemics.
See also
A Distant Mirror: The Calamitous 14th Century
Crisis of the Third Century
History of science in the Middle Ages
Renaissance of the 12th century
The Autumn of the Middle Ages
The General Crisis
Citations
General and cited sources
Further reading
External links
"The Waning of the Middle Ages: Crisis and Recovery, 1300–1450"—Lecture 11, Western Civilization to 1650 (42.125), M. Hickey, Bloomsburg University of Pennsylvania
Demography
Late Middle Ages
Medieval politics
Medieval society | Crisis of the late Middle Ages | Environmental_science | 2,434 |
2,876,898 | https://en.wikipedia.org/wiki/Chromium%28III%29%20picolinate | Chromium(III) picolinate (also trivalent chromium) is a chemical compound with the formula Cr(C6H4NO2)3, commonly abbreviated as CrPic3. It is a bright-red coordination compound derived from chromium(III) and picolinic acid.
Trivalent chromium occurs naturally in many foods and is one of several forms of chromium sold as a dietary supplement intended to correct chromium deficiency. However, there is no evidence of chromium deficiency in healthy people and no medical symptoms of chromium deficiency exist. Supplementation with trivalent chromium does not prevent or treat obesity, prediabetes, type 2 diabetes or metabolic syndrome, and is not considered effective for maintaining or losing body weight.
Although daily doses of trivalent chromium up to 1,000 μg are considered to be safe, some adverse effects have been reported, and there is no clinical evidence that chromium supplementation provides any health benefit.
History
Although some research suggested that chromium(III) picolinate may assist in weight loss and increase muscle mass, a 2013 Cochrane review was unable to find "reliable evidence to inform firm decisions" to support such claims.
Among the transition metals, Cr3+ is the most controversial in terms of nutritional value and toxicity. Furthermore, this controversy is amplified by the fact that no chromium-containing biomolecules have had their structure characterized, nor has the mode of action been determined. The first experiment that led to the discovery of Cr3+ playing a role in glucose metabolism proposed that the biologically active form of the metal existed in a protein called glucose tolerance factor, but further research indicated it is simply an artifact obtained from isolation procedures. The only accepted indicator of chromium deficiency is the reversal of symptoms that occurs when chromium(III) supplementation is administered in pharmacological doses to people on total parenteral nutrition.
Physicochemical properties
Chromium(III) picolinate is a pinkish-red compound and was first reported in 1917. It is poorly soluble in water, having a solubility of 600 μM in water at near neutral pH. Similar to other chromium(III) compounds, it is relatively inert and unreactive, meaning that this complex is stable at ambient conditions and high temperatures are required to decompose the compound. At lower pH levels, the complex hydrolyzes to release picolinic acid and free Cr3+.
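For a feel of what the 600 μM figure means in mass terms, a quick conversion (Python; the molar mass is computed from standard atomic weights for the formula Cr(C6H4NO2)3 assumed above):

# mass solubility implied by the 600 uM figure
M = 51.996 + 3 * (6 * 12.011 + 4 * 1.008 + 14.007 + 2 * 15.999)  # g/mol
solubility_molar = 600e-6  # mol/L
print(f"M = {M:.1f} g/mol; solubility ~ {solubility_molar * M * 1000:.0f} mg/L")
# -> M = 418.3 g/mol; solubility ~ 251 mg/L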
Structure
Chromium(III) picolinate has a distorted octahedral geometry and is isostructural to its cobalt(III) and manganese(III) counterparts. Chromium(III) is a hard Lewis acid and as such has high affinity for the carboxylate oxygen and medium affinity for the pyridine nitrogen of picolinate. Each picolinate ligand acts as a bidentate chelating agent and neutralizes the +3 charge of Cr3+. Evidence that the Cr3+ center coordinates to the pyridine nitrogen comes from a shift in the IR spectra of a C=N vibration at 1602.4 cm−1 for free picolinic acid to 1565.9 cm−1 for chromium(III) picolinate. The bond length between Cr3+ and the nitrogen atom of the pyridine ring on picolinate ranges from 2.047 to 2.048 Å. The picolinate ligand coordinates to Cr3+ only when deprotonated, and this is evident from the disappearance of IR bands ranging from 2400 to 2800 cm−1 (centered at 2500 cm−1) and at 1443 cm−1, corresponding to the O-H stretching and bending, respectively, of the carboxyl functional group. Furthermore, this IR shift also indicates that only one oxygen atom from the carboxylate of picolinate coordinates to the Cr3+ center. The Cr-O bond length ranges from 1.949 to 1.957 Å. The crystal structure was only described in 2013. Water does not coordinate to the Cr3+ center and is instead thought to hydrogen bond between other Cr(Pic)3 complexes to form a network of Cr(Pic)3 complexes.
Biochemistry of chromium(III) picolinate
Chromium was once proposed as an essential nutrient in maintaining normal blood glucose levels, but this function has not been sufficiently demonstrated. The European Food Safety Authority concluded that there is no convincing evidence to show chromium as an essential nutrient, thereby not justifying the setting of recommendations for chromium dietary intake.
Absorption and excretion of chromium(III) picolinate
Once chromium(III) picolinate is ingested and enters the stomach, acidic hydrolysis of the complex occurs when in contact with the stomach mucosa. The hydrolyzed Cr3+ is present in the hexaaqua form and polymerizes to form an insoluble Cr(III)-hydroxide-oxide (the process of olation) once it reaches the alkaline pH of the small intestine. Approximately 2% of Cr3+ is absorbed through the gut as chromium(III) picolinate via unsaturated passive transport. Although absorption is low, CrPic3 is absorbed more efficiently than other organic and inorganic sources (e.g. CrCl3 and chromium nicotinate) and thus accumulates at higher concentrations in tissues. This has been one major selling point for chromium(III) picolinate over other chromium(III) supplements. Organic sources tend to absorb better as they have ligands which are more lipophilic and usually neutralize the charge of the metal, thus permitting easier passage through the intestinal membrane.
It has also been shown that dietary factors affect Cr3+ absorption. Starch, simple sugars, oxalic acid, and some amino acids tend to increase the rate of absorption of chromium(III). This is a result of ligand chelation, converting hexaaqua Cr3+ into more lipophilic forms. In contrast, calcium, magnesium, titanium, zinc, vanadium, and iron reduce the rate of absorption. Presumably, these ions introduce new metal-ligand equilibria, thus decreasing the lipophilic ligand pool available to Cr3+. Once absorbed into the bloodstream, 80% of the Cr3+ from CrPic3 is passed along to transferrin. The exact mechanism of release is currently unknown, however, it is believed not to occur by a single electron reduction, as in the case of Fe3+, due to the high instability of Cr2+. Administered Cr3+ can be found in all tissues ranging from 10 to 100 μg/kg body weight. It is excreted primarily in the urine (80%) while the rest is excreted in sweat and feces.
Binding of chromium(III) to transferrin
Transferrin, in addition to chromodulin, has been identified as a major physiological chromium transport agent, although a recent study found that Cr3+ in fact disables transferrin from acting as a metal ion transport agent. While transferrin is highly specific for ferric ions, under normal conditions only 30% of transferrin molecules are saturated with ferric ions, allowing other metals, particularly those with a large charge-to-size ratio, to bind as well. The binding sites consist of a C-lobe and an N-lobe which are nearly identical in structure. Each lobe contains aspartic acid, histidine, 2 tyrosine residues and a bicarbonate ion that acts as a bidentate ligand to allow iron or other metals to bind to transferrin in a distorted octahedral geometry. Evidence supporting the binding of Cr3+ to transferrin comes from extensive clinical studies that showed a positive correlation between levels of ferritin and fasting glucose, insulin, and glycated hemoglobin (HbA1c) levels. Furthermore, an in vivo study in rats showed that 80% of isotopically labelled Cr3+ ended up on transferrin while the rest was bound to albumin. An in vitro study showed that when chromium(III) chloride was added to isolated transferrin, the Cr3+ readily bound transferrin, as evidenced by changes in the UV-Vis spectrum. The formation constant for Cr3+ in the C-lobe is 1.41 × 10^10 M−1 and 2.04 × 10^5 M−1 in the N-lobe, which indicates that Cr3+ preferentially binds the C-lobe. Overall, the formation constant for chromium(III) is lower than that of the ferric ion. The bicarbonate ligand is crucial in binding Cr3+, as when bicarbonate concentrations are very low, the binding affinity is also significantly lower. Electron paramagnetic resonance (EPR) studies have shown that below pH 6, chromium(III) binds only to the N-lobe and that at near neutral pH, chromium(III) binds to the C-lobe as well. Chromium(III) can compete with the ferric ion for binding to the C-lobe when the saturation greatly exceeds 30%. As such, these effects are only seen in patients with hemochromatosis, an iron-storage disease characterized by excessive iron saturation in transferrin.
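The quoted formation constants can be turned into binding free energies with ΔG = −RT ln K; the short sketch below (Python, assuming T ≈ 298 K) shows that the C-lobe preference amounts to roughly 28 kJ/mol:

import math

R, T = 8.314, 298.15  # J/(mol*K); room temperature assumed

for lobe, K in (("C-lobe", 1.41e10), ("N-lobe", 2.04e5)):
    dG = -R * T * math.log(K) / 1000.0  # kJ/mol
    print(f"{lobe}: K = {K:.2e} M^-1 -> dG ~ {dG:.1f} kJ/mol")
# -> C-lobe ~ -57.9 kJ/mol, N-lobe ~ -30.3 kJ/mol: C-lobe binding is
#    ~28 kJ/mol more favorable, matching the stated lobe preference.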
Mechanism of action
The precise composition and structure of the form of chromium having biological activity is not known.
Low-molecular-weight chromium-binding substance (LMWCr; also known as chromodulin) is an oligopeptide that seems to bind chromium(III) in the body. It consists of four kinds of amino acid residues (aspartate, cysteine, glutamate, and glycine) bound to four Cr3+ centers. In vitro, it interacts with the insulin receptor by prolonging kinase activity through stimulating the tyrosine kinase pathway, although this effect has not been adequately shown in vivo.
Health claims
Body weight
Although chromium(III) picolinate has been marketed in the United States as an aid to body development for athletes, and as a means of losing weight, there is insufficient evidence that it provides this effect. Reviews have reported either no effect on either muscle growth or fat loss, or a small weight loss in trials longer than 12 weeks, preventing a conclusion about a positive effect of chromium supplementation. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim of effect on body weight.
Diabetes
Although there were claims that trivalent chromium supplementation aids in reducing insulin resistance, particularly in type 2 diabetes, reviews showed no association between chromium supplementation and glucose or insulin concentrations. Two reviews concluded that chromium(III) picolinate may be more effective at lowering blood glucose levels compared to other chromium-containing dietary supplements.
In 2005, the U.S. Food and Drug Administration (FDA) approved a qualified health claim for chromium picolinate as a dietary supplement relating to insulin resistance and risk of type 2 diabetes. Any company wishing to make such a claim must use the exact wording: "One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain." As part of the petition review process, the FDA rejected other claims for reducing abnormally elevated blood sugar, risk of cardiovascular disease, risk of retinopathy or risk of kidney disease. In 2006, the FDA added that the "relationship between chromium(III) picolinate intake and insulin resistance is highly uncertain".
Safety and toxicity
There is little evidence that trivalent chromium in typical supplement amounts causes toxicity in humans. Oral use of chromium is considered to be relatively safe because ingested chromium is poorly absorbed and the amount absorbed is rapidly excreted.
Use of chromium picolinate may cause an allergic reaction, headache, insomnia, or irritability, and may interfere with normal thinking and muscular coordination. It has possible interactions with dozens of prescription drugs and other supplements.
Diabetic people who take insulin should not use chromium picolinate, as it may adversely affect insulin levels and control of blood glucose. Chromium picolinate should not be used while pregnant or during breastfeeding.
Although the safety of daily chromium doses of up to 1,000 μg has been shown, there are some reports of serious adverse effects by using chromium picolinate, including kidney failure from a six-week course of 600 μg per day and liver disease after using 1,200 to 2,400 μg per day over four to five months.
Regulation of chromium(III) picolinate
In 2004, the UK Food Standards Agency advised consumers to use other forms of trivalent chromium in preference to chromium(III) picolinate until specialist advice was received from the Committee on Mutagenicity. This was due to concerns raised by the Expert Group on Vitamins and Minerals that chromium(III) picolinate might be genotoxic (damage DNA, potentially causing cancer). The committee also noted two case reports of kidney failure that might have been caused by this supplement and called for further research into its safety. In December 2004, the Committee on Mutagenicity published its findings, which concluded that "overall it can be concluded that the balance of the data suggest that chromium(III) picolinate should be regarded as not being mutagenic in vitro" and that "the available in-vivo tests in mammals with chromium(III) picolinate are negative". Following these findings, the UK Food Standards Agency withdrew its advice to avoid chromium(III) picolinate, though it plans to keep its advice about chromium supplements under review.
In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats.
References
External links
Merck Manual
Coordination complexes
Dietary supplements
Picolinates
Chromium(III) compounds | Chromium(III) picolinate | Chemistry | 3,018 |
17,349,229 | https://en.wikipedia.org/wiki/Catholic%20Earthcare%20Australia | Catholic Earthcare Australia is an agency of the Australian Catholic Bishops Conference and is the environmental arm of the Catholic Church in Australia. This executive agency of the Bishops' Commission for Justice and Development (BCJD) is mandated with the mission of advising, supporting and assisting the BCJD in responding to Pope John Paul II's call to "stimulate and sustain the ecological conversion" throughout the Catholic church in Australia and beyond.
In May 2017, the Australian Catholic Bishops Conference decided to incorporate Catholic Earthcare Australia into its sister agency, Caritas Australia. This change was made to strengthen the capacity of Catholic Earthcare Australia, particularly in advocating and educating about the principles of the Holy Father's 2015 encyclical, Laudato Si', and to achieve synergies with Caritas Australia's extensive education and advocacy work around Australia, including parishes, schools and the wider Catholic community, on environmental issues such as climate change.
After the Laudato Si' Action Platform was created and the Plenary Council decreed that all schools, parishes, eparchies, organisations and dioceses were to have a Laudato Si' action plan by 2030, Catholic Earthcare Australia was tasked by the Australian Catholic Bishops with supporting the rollout of this initiative. To support each sector in its endeavour to create a plan responding to the seven Laudato Si' goals, an Australian Guide to Laudato Si' action planning was created, along with documents to support self-assessment, reflection and planning processes aligned with the Laudato Si' Action Platform.
Catholic Earthcare also coordinates a number of state and community networks for the purpose of resource sharing, providing advice and strengthening the Australian Catholic Church's response to care for our common home.
Mandate from the Australian Bishops
Tasks and Responsibilities
Catholic Earthcare Australia will act as an advisory agency to the BCJD on ecological matters, including the safeguarding of the integrity of creation, environmental justice and ecological sustainability.
Its tasks will include
carrying out research, from the perspective of scripture and the Church's environmental and social justice teachings;
developing national networks, with a view to initiating, linking, resourcing and supporting ecological endeavours within the Church, and extending the hand of friendship and cooperation to other like-minded groups working in the broader community;
undertaking initiatives by encouraging a reverence for creation, a responsible stewardship of Earth's natural resources and ecosystems, and providing a voice for the victims of pollution, environmental degradation and injustice;
providing educational materials and services to Catholic schools, organisations, congregations and parishes – particularly information to assist in the carrying out of environmental audits and the implementation of more ecologically and ethically sustainable practices.
References
External links
Catholic Earthcare
Christianity and environmentalism
Catholic Church in Australia
Environmental education
Environmental organisations based in Australia | Catholic Earthcare Australia | Environmental_science | 565 |
30,838,779 | https://en.wikipedia.org/wiki/Dallas%20%282012%20TV%20series%29 | Dallas is an American prime time soap opera developed by Cynthia Cidre and produced by Warner Horizon Television, that aired on TNT from June 13, 2012, to September 22, 2014. The series was a revival of the prime time television soap opera of the same name that was created by David Jacobs and which aired on CBS from 1978 to 1991. The series revolves around the Ewings, an affluent Dallas family in the oil and cattle-ranching industries.
The series brought back several stars of the original series, including Patrick Duffy as Bobby Ewing, Linda Gray as Sue Ellen Ewing, and Larry Hagman as J.R. Ewing in major roles. Other stars of the original series made guest appearances, including Ken Kercheval as Cliff Barnes, Steve Kanaly as Ray Krebbs, and Charlene Tilton as Lucy Ewing, as well as Ted Shackelford as Gary Ewing, and Joan van Ark as Valene Ewing, who starred in the Dallas spin-off series Knots Landing. They were joined by the next generation of characters, including Josh Henderson as John Ross Ewing III, the son of J.R. and Sue Ellen Ewing; Jesse Metcalfe as Christopher Ewing, the adopted son of Bobby and Pamela Barnes Ewing; and Julie Gonzalo as Pamela Rebecca Barnes, the daughter of Cliff Barnes and Afton Cooper.
The series was made for TNT, sister company to Warner Bros. Television, which has owned the original series since its purchase of Lorimar Television (the original show's production company) in 1989. On July 8, 2011, after viewing the completed pilot episode, TNT gave a green light for the series with a 10-episode order, which premiered on June 13, 2012. On June 29, 2012, TNT renewed Dallas for a second season consisting of 15 episodes, which premiered on January 28, 2013. On April 30, 2013, TNT renewed Dallas for a third season consisting of 15 episodes that premiered on Monday, February 24, 2014. On October 3, 2014, the series was cancelled by TNT after three seasons, because of the declining ratings and the death of Larry Hagman.
Plot
The series revolves around the Ewings, an affluent Dallas family in the oil and cattle-ranching industries. It focuses mainly on Christopher Ewing, the adopted son of Bobby and Pam Ewing, and John Ross Ewing III, the son of J.R. and Sue Ellen Ewing. Both John Ross and Christopher were born during the original series' run and were featured in it as children (played by different actors). Now grown up, John Ross has become almost a mirror of his father, bent on oil, money, and power. Christopher, meanwhile, has become a lot like Bobby, in that he is more interested in the upkeep of Southfork Ranch. As an additional point of contention, Christopher is also becoming a player in alternative energy, thereby eschewing the oil business. However, John Ross is determined to resurrect the Ewings' former position in the oil industry. John Ross states in season one that he is J.R.'s eldest child, which contradicts the storyline in the original series where J.R.'s first born son James Beaumont appeared in seasons 13–14.
Alongside John Ross and Christopher, original series characters Bobby, J.R. and Sue Ellen return for the new series. Additional familiar characters, including J.R.'s and Bobby's niece Lucy Ewing, their half-brother Ray Krebbs, and Ewing family rival Cliff Barnes (Ken Kercheval) appear occasionally. Various other characters from the original series also make appearances, including Audrey Landers (Afton Cooper), Cathy Podewell (Cally Harper Ewing) and Deborah Shelton (Mandy Winger). Ted Shackelford and Joan Van Ark, who first appeared on Dallas in the late 1970s before joining the spin-off series Knots Landing, also return as Gary and Valene Ewing.
New main characters that made their appearances in season one included Bobby's third wife, Ann; Christopher's new wife, introduced as "Rebecca Sutter" but later revealed to be Pamela Rebecca Barnes, the daughter of Cliff Barnes and Afton Cooper; and Elena Ramos, the daughter of Ewing family cook Carmen Ramos (Marlene Forte), who is caught in a love triangle with Christopher and John Ross. Harris Ryland is Ann's villainous ex-husband. New main characters that made their appearances in season 2 included Ann and Harris's daughter, Emma Ryland, and Elena Ramos's brother Drew Ramos. In season two, Judith Brown Ryland joined as Harris Ryland's controlling mother, while in season three, Nicolas Treviño joined as a childhood friend of Elena and Drew's who returns to help Cliff Barnes take over the Ewing oil company.
Cast and characters
Regular cast
Josh Henderson as John Ross Ewing III, J.R. and Sue Ellen's son. Ambitious and anxious to prove himself by following in his father's footsteps, he is determined to start drilling for oil at Southfork.
Jesse Metcalfe as Christopher Ewing, the adopted son of Bobby and his ex-wife Pam Ewing and the biological son of Sue Ellen's younger sister Kristin Shepard. In the pilot, after spending years in Asia researching alternative energy, Christopher returns to Southfork to get married.
Jordana Brewster as Elena Ramos, the daughter of the Ewing family cook, and childhood friend of Christopher and John Ross, both of whom are in love with her. She has a master's degree in energy resources.
Julie Gonzalo as Pamela Rebecca Barnes, under the alias of Rebecca Sutter she marries Christopher in the pilot and was pregnant with twins but miscarried them in "Guilt & Innocence". It is revealed in the season 1 finale that she is Cliff Barnes's daughter with Afton Cooper.
Brenda Strong as Ann Ewing, Bobby's third wife and an old friend of Sue Ellen. She has assumed the role of matriarch of Southfork while dealing with her ruthless brother-in-law J.R. Ewing and her ex-husband Harris Ryland.
Patrick Duffy as Bobby Ewing, the youngest son of Jock and Miss Ellie and the adoptive father of Christopher. A family man at heart and owner of the Southfork Ranch, Bobby is determined to keep the promise he made to his now-deceased mother: never to allow oil drilling on Southfork.
Linda Gray as Sue Ellen Ewing, the mother of John Ross and J.R.'s ex-wife. Since leaving J.R., Sue Ellen has grown confident and influential with a budding career in politics and ran for governor. She still harbors feelings of guilt for using John Ross in revenge against J.R. during his childhood.
Larry Hagman as J.R. Ewing, (seasons 1–2) The eldest son of Jock and Miss Ellie and John Ross's father. A cunning and ruthless oil baron, J.R. has spent his recent years in a nursing home, being treated for clinical depression. Hagman died during production of season 2, signalling the on-screen death of J.R. Ewing. J.R. returned in 2014 in season 3 using unused footage of the character.
Emma Bell as Emma Brown (seasons 2–3) daughter of Ann Ewing and Harris Ryland. Her birth name is revealed in season 2 as Emma Judith Ryland. She starts romances with John Ross and Drew Ramos.
Mitch Pileggi as Harris Ryland (seasons 2–3, recurring previously), the head of Ryland Transport and Ann's ex-husband. Ruthless, narcissistic and always eager for more power, he has been shown enjoying tormenting his former wife, as well as trying to blackmail Sue Ellen and suing Bobby.
Kuno Becker as Andreas "Drew" Ramos (season 3; recurring, season 2; main), Elena's troubled brother. He witnessed his father's death, which turned him into an angry juvenile delinquent. He ended up enlisting in the military and after a tour in Iraq straightened him up, he found work on oil rigs all over the globe. He is on the run now after blowing up the Ewing rig.
Juan Pablo Di Pace as Nicolas Treviño (season 3), born Joaquin Reyes, a childhood friend of Elena and Drew Ramos who becomes a powerful self-made billionaire businessman from Mexico. He comes across as a good, genuine guy, even though there are darker parts of his personality which he's hiding.
Recurring cast
Ken Kercheval as Cliff Barnes (seasons 1–3), the long-time rival of J.R., as well as the half-brother of Christopher's adoptive mother and Bobby's first wife, Pamela Barnes Ewing.
Judith Light as Judith Brown-Ryland (seasons 2–3), Harris Ryland's mother, "an authoritative and controlling battleaxe who will fight to the death to protect the people she loves". She is majority stockholder in Ryland Transportation.
Leonor Varela as Veronica Martinez/Marta Del Sol (season 1), a mentally unstable con artist who pretends to be a Mexican heiress.
Callard Harris as Tommy Sutter (season 1), Rebecca's supposed older brother, involved in her plot to extort money from Christopher.
Marlene Forte as Carmen Ramos (seasons 1–3), the faithful Southfork cook and Elena and Drew's mother.
Charlene Tilton as Lucy Ewing (seasons 1–3), niece of J.R. and Bobby and the older cousin of John Ross and Christopher. She is the daughter of Gary and Valene Ewing and was a main character in the original series.
Steve Kanaly as Ray Krebbs (seasons 1–3), Jock's illegitimate son and the half-brother of J.R., Bobby and Gary. A main character in the original series, he was the ranch foreman at Southfork until he left for Europe during season 12.
Kevin Page as Steven "Bum" Jones (seasons 1–3), J.R. Ewing's private investigator, confidant, friend and right-hand man.
Lee Majors as Ken Richards (season 2), an old admirer of Sue Ellen's and a commissioner of T.E.S.H.A., a state agency looking into Christopher's rig explosion.
Steven Weber as Governor Sam McConaughey (seasons 2–3), who joins forces with Cliff Barnes and Harris Ryland to try and take down the Ewings.
Faran Tahir as Frank Ashkani (seasons 1–2), Cliff's menacing right-hand man. Born Rahid Durani in Islamabad, he was taken off the streets by Cliff some 30 years ago. Cliff gave him a proper education, eventually hired him as his private driver, and "adopted" him as a son.
Carlos Bernard as Vicente Cano (seasons 1–2), a Venezuelan businessman who finances J.R. and John Ross' deal with Veronica. When the Ewings fail to hold up their end of the deal, he turns violent. He eventually is sentenced to prison, after federal agents raid his house.
Alex Fernandez as Roy Vickers (season 2), Harris Ryland's right-hand man who is connected to the Mendez-Ochoa drug cartel.
Donny Boaz as Bo McCabe (seasons 2–3), a worker in Southfork Ranch's cattle operations, first seen in the episode "A Call to Arms" as the drug pusher from whom Emma Ryland was scoring painkillers.
Jude Demorest as Candace Shaw (season 3), John Ross' secretary who is also a prostitute connected to Judith Ryland's prostitution ring.
Fran Kranz as Hunter McKay (season 3), the grandson of original character Carter McKay. A partying geek entrepreneur who played basketball with John Ross Ewing and Christopher Ewing as a kid. Founder of Get It Games video software company.
Antonio Jaramillo as Luis (season 3), a Mendez-Ochoa cartel enforcer.
Episodes
The first season premiered on June 13, 2012, and introduces the central characters of the show: John Ross Ewing III, Christopher Ewing, Elena Ramos, Rebecca Sutter, Ann Ewing, Bobby Ewing, Sue Ellen Ewing and J.R. Ewing. The main focus of the season 1 is the discovery of oil reserves on Southfork by John Ross and attempts by him and his father, J.R. to wrest the land from Bobby. Other storylines in this season include the love triangle between John Ross, Christopher and Elena, Christopher's marriage to Rebecca, Sue Ellen's plans to run for Governor of Texas and Bobby's health problems.
Production
Prior to Dallas, Cidre was best known for producing and writing episodes of Cane, an American television drama that chronicled the lives and internal power struggles of a powerful and affluent Cuban-American family running an immensely successful rum and sugar cane business in South Florida. In 2010, TNT announced it would order a pilot for the continuation of the Dallas series. The pilot was filmed in and around the city of Dallas in early 2011. Production began in late August 2011 in Dallas on the remaining nine episodes in the first season order, based in studios constructed for the Fox television series The Good Guys.
Executive producer Cynthia Cidre wrote the pilot script, while Michael M. Robin served as the director and executive producer for the pilot. David Jacobs reviewed Cidre's pilot script and gave his blessing to the new series, though he chose not to participate in its production. A dispute erupted because the opening credits were originally planned to read "Developed by Cynthia Cidre, based on Dallas created by David Jacobs". Under the determination of the Writers Guild of America's screenwriting credit system, there are currently two separate credits: one listing Jacobs as the show's sole creator and another listing Cidre as the new show's developer.
A sneak preview of the series, including clips from the pilot episode, aired on July 11, 2011, during an episode of TNT's Rizzoli & Isles. Patrick Duffy stated that the new show is "exactly the same [as the old show], but it's 2012. We consider this year 14 of the show. It's exactly as if [viewers] forgot which channel we were on."
Continuity
The new series is a continuation of the old series following a 20-year break, during which the characters and their relationships continued unseen until the start of the new series. It does not take the events of the reunion TV movies Dallas: J.R. Returns or Dallas: War of the Ewings into account. Instead, the characters are found to have evolved over the intervening 20 years. Cynthia Cidre, the show's developer, has confirmed that the new series does not pick up from where the TV movies left off because the movies had tried to resolve lingering plotlines in less than two hours. It continues from the events of the 14th season, with their development and consequences extrapolated to 2012.
Production crew
Cynthia Cidre, Bruce Rasmussen, Michael M. Robin, Ken Topolsky and Bryan J. Raber served as executive producers for the show. Rasmussen had previously worked as the supervising producer with the hit TV series Roseanne, for which he was awarded the Golden Globe.
In the first two seasons, Jesse Bochco and Michael M. Robin were the most prolific directors, each directing five episodes.
Filming
Unlike the original series, which did limited location shooting in Texas but was filmed primarily in Los Angeles, principal photography for the new series takes place in and around Dallas. The new series also did location shooting at the actual Southfork Ranch in the northern Dallas suburb of Parker.
Opening sequence
The opening sequence features a shortened version of the original theme music, and echoes the original series opening with modernized shots of Dallas in sliding panels. Unlike the original series, the actors are not listed alphabetically and, for seasons 1 and 2, there are no images of the actors seen in the credits. Josh Henderson and Jesse Metcalfe alternate top billing, and the original stars are credited at the end ("with Patrick Duffy", "and Linda Gray", "and Larry Hagman as J.R. Ewing") until Hagman's death in season 2. The Dallas logo scrolls from right to left, rather than zooming upwards as it did on the original series. The sequence ends on a shot with the camera flying towards Southfork similar to the shot in the original titles where the camera flies over the gate towards Southfork. The season 3 titles feature the return of the iconic three-way split-screen opening, similar to those used in the original series for its first 11 years, with moving images of the actors. In addition, the Dallas season 3 logo zooms towards the screen as it did on the original series.
Reception
Advance screening reviews of the series were generally positive from critics on Metacritic. On June 29, 2012, TNT renewed Dallas for a second season consisting of 15 episodes, which premiered on January 28, 2013. The second season received positive notice, with a score of 82/100 from reviews on Metacritic.
Ratings
Home releases
Awards and nominations
References
External links
Official website for Southfork Ranch
2010s American LGBTQ-related drama television series
2012 American television series debuts
2014 American television series endings
American television soap operas
American primetime television soap operas
American English-language television shows
American sequel television series
Serial drama television series
Television series about dysfunctional families
Television series by Warner Horizon Television
Television series by Warner Bros. Television Studios
Television shows set in Dallas
Television shows filmed in Texas
TNT (American TV network) original programming
Television series created by David Jacobs (writer)
Works about petroleum | Dallas (2012 TV series) | Chemistry | 3,618 |
61,594,559 | https://en.wikipedia.org/wiki/Panine%20alphaherpesvirus%203 | Panine alphaherpesvirus 3 (PnHV-3) is a species of virus in the genus Simplexvirus, subfamily Alphaherpesvirinae, family Herpesviridae, and order Herpesvirales.
References
External links
Alphaherpesvirinae | Panine alphaherpesvirus 3 | Biology | 56 |
74,465,682 | https://en.wikipedia.org/wiki/Hawking%20Fellowship | The Professor Stephen Hawking Fellowship is a prestigious annual fellowship of the Cambridge Union Society in the University of Cambridge. Awarded to an individual who has made an exceptional contribution to the STEM fields and social discourse, it is unique amongst comparable accolades in that it is conferred by the students of the University (through the Union), rather than the University itself.
Established to celebrate Hawking’s achievements and the close relationship between him and the students of Cambridge, Professor Hawking accepted the inaugural fellowship and delivered the lecture in his last public appearance before his death. Each honouree visits the Union to commence their tenure as fellow, delivering what is known as ‘The Hawking Lecture’.
Past Fellows
References
Science and technology awards | Hawking Fellowship | Technology | 143 |
68,415,663 | https://en.wikipedia.org/wiki/Plutonium%20selenide | Plutonium selenide is a binary inorganic compound of plutonium and selenium with the chemical formula PuSe. The compound forms black crystals and does not dissolve in water.
Synthesis
Reaction of diplutonium triselenide with plutonium trihydride at 1600 °C:
2 Pu2Se3 + 2 PuH3 → 6 PuSe + 3 H2
Fusion of stoichiometric amounts of the pure elements at 220–1000 °C:
Pu + Se → PuSe
Properties
Plutonium selenide forms black crystals of the cubic system, space group Fm-3m, cell parameter a = 0.57934 nm, Z = 4, with a structure of the NaCl type.
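From the cell parameter above, the theoretical X-ray density follows as ρ = Z·M/(N_A·a³); a quick estimate (Python; taking the mass of the long-lived isotope Pu-244 is an assumption, since the isotope is not specified):

N_A = 6.02214e23   # 1/mol
a = 0.57934e-7     # cm (0.57934 nm)
Z = 4              # formula units per rock-salt cell
M = 244.06 + 78.971  # g/mol, Pu-244 + Se

rho = Z * M / (N_A * a**3)
print(f"X-ray density ~ {rho:.2f} g/cm^3")  # ~11.0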
With increasing pressure, two phase transitions occur: at 20 GPa into the trigonal system and at 35 GPa into the cubic system, a structure of the CsCl type.
Its magnetic susceptibility follows the Curie-Weiss law.
References
Inorganic compounds
Plutonium compounds
Selenides
Rock salt crystal structure | Plutonium selenide | Chemistry | 228 |
44,103,870 | https://en.wikipedia.org/wiki/FeMoco | FeMoco (FeMo cofactor) is the primary cofactor of nitrogenase. Nitrogenase is the enzyme that catalyzes the conversion of atmospheric nitrogen molecules N2 into ammonia (NH3) through the process known as nitrogen fixation. Because it contains iron and molybdenum, the cofactor is called FeMoco. Its stoichiometry is Fe7MoS9C.
Structure
The FeMo cofactor is a cluster with composition Fe7MoS9C. This cluster can be viewed as two subunits composed of one Fe4S3 (iron(III) sulfide) cluster and one MoFe3S3 cluster. The two clusters are linked by three sulfide ligands and a bridging carbon atom. The unique iron (Fe) is anchored to the protein by a cysteine. It is also bound to three sulfides, resulting in tetrahedral molecular geometry. The additional six Fe centers in the cluster are each bonded to three sulfides. These six internal Fe centers define a trigonal prismatic arrangement around a central carbide center. The molybdenum is attached to three sulfides and is anchored to the protein by the imidazole group of a histidine residue. Also bound to Mo is a bidentate homocitrate cofactor, leading to octahedral geometry. Crystallographic analysis of the MoFe protein initially revealed the geometry and chemical composition of FeMoco, later confirmed by extended X-ray absorption fine-structure (EXAFS) studies. The Fe-S, Fe-Fe and Fe-Mo distances were determined to be 2.32, 2.64, and 2.73 Å respectively.
Biosynthesis
Biosynthesis of FeMoco is a complicated process that requires several Nif gene products, specifically those of nifS, nifQ, nifB, nifE, nifN, nifV, nifH, nifD, and nifK (expressed as the proteins NifS, NifU, etc.). FeMoco assembly is proposed to be initiated by NifS and NifU which mobilize Fe and sulfide into small Fe-S fragments. These fragments are transferred to the NifB scaffold and arranged into a Fe7MoS9C cluster before transfer to the NifEN protein (encoded by nifE and nifN) and rearranged before delivery to the MoFe protein. Several other factors participate in the biosynthesis. For example, NifV is the homocitrate synthase that supplies homocitrate to FeMoco. NifQ, a protein factor, is proposed to be involved in the storage and/or mobilization of Mo. Fe protein is the electron donor for MoFe protein. These biosynthetic factors have been elucidated and characterized with the exact functions and sequence confirmed by biochemical, spectroscopic, and structural analyses.
Identity of the core atom
The three proteins that play a direct role in the M-cluster synthesis are NifH, NifEN, and NifB. The NifB protein is responsible for the assembly of the Fe-S core of the cofactor; a process that involves stitching together two [4Fe-4S] clusters. NifB belongs to the SAM (S-adenosyl-L-methionine) enzyme superfamily. During the biosynthesis of the FeMo cofactor, NifB and its SAM cofactor are directly involved in the insertion of a carbon atom at the center of the Fe-S complex. An equivalent of SAM donates a methyl group, which becomes the interstitial carbide of the M-cluster. The methyl group of SAM is mobilized by radical removal of an H by a 5’-deoxyadenosine radical (5’-dA·). Presumably, a transient –CH2· radical is formed that is subsequently incorporated into the metal cluster forming a Fe6-carbide species. The interstitial carbon remains associated with the FeMo cofactor after insertion into the nitrogenase. The nature of the central atom in FeMoco as a carbon species was identified in 2011. The approach for the identification relied on a combination of 13C/15N-labeling and pulsed EPR spectroscopy as well as X-ray crystallographic studies at full atomic resolution. Additionally, X-ray diffractometry was used to verify that there was a central carbon atom in the middle of the FeMo cofactor, and X-ray emission spectroscopic studies showed that the central atom was carbon due to the 2p→1s carbon-iron transition. The use of X-ray crystallography showed that while the FeMo cofactor is not in its catalytic form, the carbon keeps the structure rigid, which helps describe the reactivity of nitrogenase.
Electronic properties
According to analysis by electron paramagnetic resonance spectroscopy, the resting state of the FeMo cofactor has a spin state of S = 3/2. Upon one-electron reduction, the cofactor becomes EPR silent. Understanding how electrons are transferred within the protein adduct allows a more precise kinetic model of the FeMo cofactor. Density functional theory calculations, as well as spatially resolved anomalous dispersion refinement, have suggested that the formal oxidation state is MoIV-2FeII-5FeIII-C4−-H+, but the "true" oxidation states have not been confirmed experimentally.
Substrate binding
The location of substrate attachment to the complex has yet to be elucidated. It is believed that the Fe atoms closest to the interstitial carbon participate in substrate activation, but the terminal molybdenum is also a candidate for nitrogen fixation. X-ray crystallographic studies utilizing the MoFe protein and carbon monoxide (CO), which is isoelectronic with dinitrogen, demonstrated that carbon monoxide binds to the Fe2-Fe6 edge of FeMoco. Additional studies showed simultaneous binding of two CO molecules to FeMoco, providing a structural basis for biological Fischer-Tropsch-type chemistry. Se-incorporation studies in combination with time-resolved X-ray crystallography provided evidence of major structural rearrangements in the FeMoco structure upon substrate binding.
Isolation
Isolation of the FeMo cofactor from nitrogenase begins with centrifugal sedimentation of nitrogenase into the MoFe protein and the Fe protein. The FeMo cofactor is then extracted by treating the MoFe protein with acids. The first extraction is done with N,N-dimethylformamide and the second with a mixture of N-methylformamide and Na2HPO4, before final sedimentation by centrifugation.
References
Cluster chemistry
Iron–sulfur proteins
Iron(III) compounds
Sulfur compounds
Metalloproteins
Molybdenum(IV) compounds
Cofactors
Molybdenum enzymes | FeMoco | Chemistry | 1,456 |
35,328,973 | https://en.wikipedia.org/wiki/Terrorism%20and%20social%20media | Terrorism, fear, and media are interconnected. Terrorists use the media to advertise their attacks and/or messages, and the media uses terrorism events to boost its ratings. Both spread propaganda that instills widespread public fear. The leader of Al-Qaeda, Osama bin Laden, discussed the weaponization of media in a letter written after his organization committed the terrorist attacks of 9/11. In that letter, bin Laden stated that fear was the deadliest weapon. He noted that Western civilization has become obsessed with mass media, quickly consuming whatever brings it fear. He further stated that societies bring this problem on their own people by giving media coverage an inherent power.
In their need for media coverage, Al-Qaeda and other militant jihadist terrorist organizations can be classified as a far-right radical offshoot of mainstream mass media. Jihadists seek to conceptualize their martyrdom by leaving behind manifestos and live videos of their attacks; it is crucially important to them that their deeds are covered by news media.
The components the media looks for to deem news "worthy" enough to publicize fall into ten qualities; terrorist attacks usually satisfy more than half of them. These include: Immediacy, Conflict, Negativity, Human Interest, Photographability, Simple Story Lines, Topicality, Exclusivity, Reliability, and Local Interest. Historically, morality and profitability are two motivations that are not easily weighed against each other when delivering news; recent news coverage has become far more motivated by making money for the parent corporation than by serving as a defender of truth, doing true journalistic fact-finding, and shielding the public from news that is sensational, outright untrue, or politically motivated propaganda.
A study concerning the disparity in coverage of terrorist events examined attacks from the ten-year span of 2005–2015 and found that 136 episodes of terrorism occurred in the United States. LexisNexis Academic and CNN were the platforms used to measure the media coverage. It found that, among the terrorist attacks shown on the news, those with Muslim perpetrators received on average 357% more coverage than others. In addition to this disparity, attacks also received more coverage when they targeted the government, had high fatality rates, and ended with arrests being made. These findings aligned with America's tendency to categorize Muslim people as a threat to national security. Thus, mass media coverage of terrorism creates false narratives alongside an absence of related coverage. For instance, the American public believes that crime rates have been on the rise when in fact they are at an all-time low. Because the media covers crime almost immediately and frequently, people infer that it is happening all the time. Regarding the disparity among terror attacks, three attacks were seen to have the least media coverage of all 136: the Sikh Temple massacre in Wisconsin, with 2.6% of the coverage; the Kansas synagogue killings, with 2.2%; and the Charleston church shootings, which resulted in only 5.1%. The three events had commonalities worth mentioning: all had white perpetrators and none was directed at government institutions (in fact, all targeted minorities). The media's obsession with terror makes people fearful of the wrong things and not attentive enough to dangers that remain radically unseen.
Not only are minorities usually not the perpetrators of domestic terrorism, but they are common victims of mass casualties or proximal witnesses to the attacks. In an early 2000s study, 72 Israeli adults were measured pre- and post-test for increased anxiety after being exposed to news broadcasts of terrorism attacks. The study found that the group exposed to the broadcasts without any treatment (a preparatory intervention) had heightened levels of anxiety compared to the group that received the treatment along with viewing the broadcast. Since preparatory intervention is not yet normalized, people in proximity to ongoing coverage of terror events suffer from the lasting impacts of fear and anxiety. The preparatory intervention, in this case, was conducted by a group facilitator who introduced a topic concerning terrorism; participants were instructed to write down feelings to share with the group and later learn to cope with them.
A discourse of fear is created by the mass media presence, but false information leads people to prepare for the wrong situations. In the early 2000s, police units circulated through public schools, flooding the minds of adolescents with the idea of Stranger Danger. Children and their parents cautiously separated themselves from strangers while perpetrators within those families' social circles continued to offend under the radar. As such myths become common, real and precedented danger is buried beneath the surface. It is these implementations of fear that falsify the true narrative: terrorism is a huge social problem, but not one that is resolved through entertainment and mass media production. Mass media like news outlets, and even social media platforms, contribute to the growing discourse of fear surrounding terrorism.
Terrorism and social media refers to the use of social media platforms to radicalize and recruit violent and non-violent extremists.
According to some researchers, owing to the convenience, affordability, and broad reach of social media platforms such as YouTube, Facebook and Twitter, terrorist groups and individuals have increasingly used social media to further their goals, recruit members, and spread their message. Attempts have been made by various governments and agencies to thwart the use of social media by terrorist organizations.
Despite the risks of making statements, such as enabling governments to locate terror group leaders, terror leaders communicate regularly with video and audio messages that are posted to websites and disseminated on the internet. ISIS uses social media to its advantage when releasing threatening videos of beheadings, a tactic intended to frighten ordinary people on social media. Similarly, Western domestic terrorists also use social media and technology to spread their ideas.
Traditional media
Many authors have proposed that media attention increases perceived risk and fear of terrorism and crime, and that this relates to how much attention the person pays to the news. The relationship between terrorism and the media has long been noted. Terrorist organizations depend on the open media systems of democratic countries to further their goals and spread their messages. To garner publicity for their cause, terrorist organizations resort to acts of violence and aggression that deliberately target civilians. This method has proven to be effective in gathering attention.
While a media organization may not support the goals of terrorist organizations, it is their job to report current events and issues. In the fiercely competitive media environment, when a terrorist attack occurs, media outlets scramble to cover the event. In doing so, the media help to further the message of terrorist organizations.
One notable example of the relationship between terror groups and the media was the release of the Osama bin Laden audio and video recordings. These tapes were sent directly to mainstream Arabic television networks, including Al-Jazeera.
Media can often be a source of discontent for terrorist groups.
Most terrorist groups use social media as a means to bypass the traditional media and spread their propaganda.
Media surveillance
The network described in Michel Foucault's theory of surveillance, panopticism, is a network of power in which all parties are transfixed by the actions of the others in the network. It is especially imperative when major events occur in the world, which is usually the case with terrorism. This model can be transposed onto the network of power that media-outlet consumers and producers enter. In a network of power that includes consumers and producers, both parties have fixed gazes on each other. The consumers transfix their gazes on the stories that media outlets produce, and the needs of the consumers, in this case their need to be updated regularly, become the producers' gaze. The producers, or media outlets, are in competition with other media outlets to supply their constituents with the most up-to-date information. This network of fixed gazes is both "privileged and imperative" for the system to satisfy the status quo.
Consumers look to media outlets to provide news on terrorism. If consumers believe terrorism is a threat to their safety, they want to be informed of the threats against them. Media outlets fulfill their viewers' needs and portray terrorism as a threat because of the cycle that surveillance engenders. As terrorism flourishes as a prominent discourse of fear, consumers want information faster because they feel their safety is in peril. The idea of total surveillance, as prescribed by Foucault, becomes a cycle in which the disruption of power causes scrutiny by various players in the system. If the media outlets are not constantly looking for stories that fulfill consumer needs, then they are scrutinized. In addition to the surveillance aspect of news dissemination, there is the notion that "needs" drive the network of power: both the media outlets and consumers have needs that are fulfilled by broadcasting the news. This idea is expressed in the uses and gratifications theory, which stipulates that the active audience and the terrorist "seek to satisfy their various needs" through media transmission. While media outlets know the stories they show have profound effects on society's political and sociological perspective, the emphasis on economic gain is of greater importance.
Use of social media
In a study by Gabriel Weimann from the University of Haifa, Weimann found that nearly 90% of organized terrorism activities on the internet take place via social media. According to Weimann, terror groups use social media platforms like Twitter, Facebook, YouTube, and internet forums to spread their messages, recruit members and gather intelligence.
Terror groups take to social media because social media tools are cheap and accessible, facilitate quick, broad dissemination of messages, and allow for unfettered communication with an audience without the filter or "selectivity" of mainstream news outlets. Also, social media platforms allow terror groups to engage with their networks. Whereas previously terror groups would release messages via intermediaries, social media platforms allow terror groups to release messages directly to their intended audience and converse with their audience in real time. Weimann also mentions in "Theater of Terror" that terrorists use the media to promote the theatrical nature of premeditated terror.
Terror groups using social media
Al-Qaeda has been noted as being one of the terror groups that uses social media the most extensively. Brian Jenkins, senior advisor for the RAND Corporation, has commented on Al-Qaeda's dominant presence on the web.
According to Rob Wainwright, author of "Fighting Crime and Terrorism in the Age of Technology," ISIS has utilized more than one hundred sites in order to spread its message, which shows how extensively social media is used by terrorist groups. The known terrorist group the Islamic State of Iraq and the Levant, also known as ISIS, uses the rapid spread of news over social media to its advantage when releasing threatening videos of beheadings. By November 16, 2014, following the beheading of former U.S. Army Ranger Peter Kassig, there had been five recorded executions of Westerners taken captive in Syria. James Foley, David Cawthorne Haines, Alan Henning, and Steven Sotloff are also among the men kidnapped and executed by ISIS. The videos of the brutal beheadings are both posted online by ISIS, where they can be viewed by anyone at their own discretion, and sent to government officials as threats. Posting the executions online gives the terrorist group the power to manipulate viewers and cause havoc among the population viewing them, and the videos have the ability to instill fear within the Western world. The videos are typically of high production quality and generally show the entirety of the gruesome act, with the hostage speaking a few words before being killed on camera.
In the case of U.S. aid worker Peter Kassig, the video did not show the actual beheading, and he did not speak any final words before the execution. His silence and the omission of the actual execution raised questions about why his video was different from the rest. In response to Kassig's beheading, his family expressed their wish that news media avoid doing what the group wants by refraining from publishing or distributing the video. When news outlets refuse to circulate the video of a beheading, it loses the ability to manipulate Americans or further the cause of the terrorist group.
In addition to beheading videos, ISIS has released videos of its members doing nonviolent acts. For example, Imran Awan described one such instance in his article "Cyber-Extremism: Isis and the Power of Social Media", in which a video showed members of the Islamic State helping people and visiting hospitals. These videos lend a humanizing quality to the terrorist group's members, thereby contradicting what civilians think terrorist groups should be.
Edgar Jones has mentioned in his article, "The Reception of Broadcast Terrorism: Recruitment and Radicalisation," that ISIS has utilized documentaries and even their own magazine, Dabiq, in order to recruit new members and to get their message out to the public. This illustrates just a couple of the various mediums that ISIS has used.
According to Wainwright, social media is also used by ISIS and other terror groups to recruit foreigners to join the terrorist cause. In some cases, these new recruits are sent back to their home country to carry out terrorist attacks. Others who cannot physically travel to the terrorist cause have been known to carry out acts of terrorism in their own countries because of the propaganda they are exposed to online. This exhibits how ISIS can brainwash individuals or expand on ideas they may already have.
The Taliban has been active on Twitter since May 2011, and has more than 7,000 followers. Tweeting under the handle @alemarahweb, the Taliban tweets frequently, on some days nearly hourly. This account is currently suspended.
In December 2011, it was discovered that the Somalia-based terror cell Al-Shabaab was using a Twitter account under the name @HSMPress. Since opening, the account has amassed tens of thousands of followers and tweets frequently.
Shortly after a series of coordinated Christmas bombings in Kono, Nigeria, in 2011, the Nigerian-based terror group Boko Haram released to YouTube a video statement defending its actions. Boko Haram has also used Twitter to voice its opinions.
AQAP and Islamic State (ISIS/ISIL/DAESH)
Islamic State has emerged as one of the most potent users of social media. In many respects, Islamic State learned its propaganda craft from al-Qaeda in the Arabian Peninsula (AQAP). However, IS quickly eclipsed its mentor, deploying a whole range of narratives, images and political proselytizing through various social media platforms. A study by Berger and Morgan estimated that at least 46,000 Twitter accounts were used by ISIS supporters between September and December 2014. However, as ISIS supporters regularly get suspended and then easily create new, duplicate accounts, counting ISIS Twitter accounts over a few months can overestimate the number of unique people represented by 20–30%. In 2019, Storyful discovered that approximately two dozen TikTok accounts were used to post propaganda videos targeting users. Accounts broadcast news from Amaq News Agency, the official news outlet for the Islamic State.
However, as the November 2015 attacks in Paris demonstrate, IS also uses old-fashioned methods of communication and propaganda. Lewis notes that the attacks in Paris represent the sort of 'propaganda in action' developed by 19th-century anarchists in Europe. The November 2015 IS attacks were perpetrated without prior warning, largely because the operatives met face-to-face and used other non-digital means of communication.
Attempts to thwart the use of social media by terror groups
Some U.S. government officials have urged social media companies to stop hosting content from terror groups. In particular, Joe Lieberman has been especially vocal in demanding that social media companies not permit terror groups to use their tools. In 2008, Lieberman and the United States Senate Committee on Homeland Security and Governmental Affairs issued a report titled "Violent Islamist Extremism, the Internet, and the Homegrown Terrorist Threat". The report stated that the internet is one of the "primary drivers" of the terrorist threat to the United States.
In response to the news that Al-Shabaab was using Twitter, U.S. officials have called for the company to shut down the account. Twitter executives have not complied with these demands and have declined to comment on the case.
In January 2012, Twitter announced changes to their censorship policy, stating that they would now censor tweets in certain countries when the tweets risked breaking the local laws of that country. The reason behind the move was stated on their website.
The move drew criticism from many Twitter users who said the move was an affront to free speech. Many of the users threatened to quit tweeting if the policy was not rescinded, including Chinese artist and activist Ai Weiwei.
In December 2010, in response to growing demands that YouTube pull video content from terrorist groups from its servers, the company added a "promotes terrorism" option under the "violent or repulsive content" category that viewers can select to "flag" offensive content. By limiting terrorists' access to conventional mass media, censoring news coverage of terrorist acts and their perpetrators, and minimising terrorists' ability to manipulate mass media, the mass fear impact that is usually created will decrease.
Effectiveness of suspension
Western governments have been actively trying to surveil and censor IS social media sites. As Jeff Lewis explains, as quickly as platform managers close down accounts, IS and its supporters continually create new IDs, which they then use to resurge with new accounts and sites for propaganda. A case study of an al Shabaab account and a George Washington University white paper found that accounts that resurged did not regain the high number of followers they had had originally. However, this picture is complicated: a May 2016 article in the Journal of Terrorism Research found that resurgent accounts acquire a median of 43.8 followers per day, while regular jihadist accounts accrue only 8.37 followers per day on average.
Free speech and terrorism
U.S. Rep. Ted Poe, R-Texas, has said that the U.S. Constitution does not apply to terrorists and that they have given up their rights to free speech. He cited a Supreme Court ruling that anyone providing "material support" to a terrorist organization is guilty of a crime, even if that support only involves speaking and association. He also cited terrorist speech as being like child pornography in that it does harm.
Homeland security subcommittee
On December 6, 2011, the US Committee on Homeland Security's Subcommittee on Counterterrorism and Intelligence held a hearing entitled "Jihadist Use of Social Media - How to Prevent Terrorism and Preserve Innovation."
At the hearing, members heard testimony from Will McCants, an analyst for the Center for Naval Analyses, Aaron Weisburd, director of the Society for Internet Research, Brian Jenkins, senior advisor for the Rand Corporation and Evan Kohlmann, senior partner from Flashpoint Global Partners.
McCants stated that while terror groups were actively using social media platforms to further their goals, research did not support the notion that the social media strategies they adopted were proving effective.
McCants added that he did not believe that closing online user accounts would be effective in stopping radicalization, and stated that closing online accounts could even disadvantage U.S. security and intelligence forces.
McCants stressed that not enough research has been conducted on this topic and he would be willing to change his opinion on the matter if there was empirical evidence that proved that social media has a major role in radicalizing youth.
Weisburd stated that any organization that played a part in producing and distributing media for terrorist organizations was in fact supporting terrorism.
Weisburd argued that social media lends an air of legitimacy to content produced by terror organizations and provides terrorist organizations an opportunity to brand their content.
He concluded that the goal of intelligence and security forces should not be to drive all terrorist media offline, but rather to deprive terror groups from the branding power gleaned from social media.
Jenkins stated that the risks associated with al Qaeda's online campaign do not justify an attempt to impose controls on content distributors. Any attempted controls would be costly and would deprive intelligence officials of a valuable source of information. Jenkins also stated that there was no evidence that attempts to control online content would be possible.
Kohlmann stated that U.S. government officials must do more to pressure social media companies like YouTube, Facebook and Twitter to remove content produced by terror groups.
See also
Alt-tech
Online youth radicalization
Social media use by the Islamic State
References
Terrorism | Terrorism and social media | Technology | 4,215 |
45,576,083 | https://en.wikipedia.org/wiki/Boletus%20loyo | Boletus loyo is a species of bolete fungus in the family Boletaceae that is found in South America. It was described as new to science in 1912 by Carlos Luigi Spegazzini, who made the first scientifically documented collections in Argentina. The bolete is edible.
See also
List of Boletus species
References
External links
loyo
Edible fungi
Fungi described in 1912
Fungi of South America
Taxa named by Carlo Luigi Spegazzini
Fungus species | Boletus loyo | Biology | 95 |
65,570,746 | https://en.wikipedia.org/wiki/IPhone%2012%20Pro | The iPhone 12 Pro and iPhone 12 Pro Max are smartphones developed and marketed by Apple Inc. They are the flagship smartphones in the fourteenth generation of the iPhone, succeeding the iPhone 11 Pro and iPhone 11 Pro Max, respectively. They were unveiled alongside the iPhone 12 and iPhone 12 Mini at an Apple Special Event at Apple Park in Cupertino, California on October 13, 2020, with the iPhone 12 Pro being released on October 23, 2020, and the iPhone 12 Pro Max on November 13, 2020. They were discontinued on September 14, 2021, along with the iPhone XR, following the announcement of the iPhone 13 and iPhone 13 Pro.
Major upgrades over the iPhone 11 Pro and iPhone 11 Pro Max include the addition of 5G support, the lidar sensor, ProRAW (DNG) allowing high-quality lossless 12-bit image capture in the native camera app with the use of the new DNG v1.6 specification, the introduction of the MagSafe wireless charging and accessory system, the Apple A14 Bionic system on a chip (SoC), high-dynamic-range Dolby Vision 10-bit 4:2:0 4K video recording at 30 or 60 fps, larger 6.1-inch and 6.7-inch displays on the iPhone 12 Pro and iPhone 12 Pro Max, respectively, and the move to a base capacity of 128 GB from the prior base capacity of 64 GB, while retaining the other storage capacities of 256 and 512 GB. The iPhone 12 Pro and iPhone 12 Pro Max, like the iPhone 12 and iPhone 12 Mini, are the first iPhone models from Apple to no longer include the power adapter or EarPods headphones found in prior iPhone models; however, a USB-C to Lightning cable is still included, and this change was retroactively applied to other iPhone models sold by Apple at the time, including the iPhone XR, iPhone 11 and iPhone SE (2nd generation).
History
The iPhone 12 Pro and iPhone 12 Pro Max were officially announced by Apple Inc. on October 13, 2020, during a virtual press event held at the Steve Jobs Theater at Apple Park in Cupertino, California. The event was conducted alongside the announcement of the iPhone 12, iPhone 12 Mini, and HomePod Mini.
Launch and availability
Pre-orders for the iPhone 12 Pro began on October 16, 2020, with the official release taking place on October 23, 2020. This release coincided with the launch of the fourth-generation iPad Air and the standard iPhone 12. The iPhone 12 Pro Max followed a staggered release schedule, with pre-orders commencing on November 6, 2020, and an official release date of November 13, 2020, alongside the iPhone 12 Mini.
The pricing for the iPhone 12 Pro started at $999, while the iPhone 12 Pro Max began at $1099. This marked the first time since the launch of the iPhone XS and iPhone XR in 2018 that multiple iPhone models were announced together but not released simultaneously.
Discontinuation and reintroduction
On September 14, 2021, following the announcement of the iPhone 13 Pro and iPhone 13 Pro Max, the iPhone 12 Pro and iPhone 12 Pro Max were officially discontinued and removed from sale on Apple's website. However, in March 2022, Apple resumed selling refurbished iPhone 12 Pro models starting at USD 759 through its Refurbished and Clearance section. The iPhone 12 Pro Max, however, was not made available as a refurbished option.
The iPhone 12 series introduced several significant technological advancements, such as the inclusion of a Super Retina XDR display, improved camera capabilities, and support for 5G connectivity. It received positive reviews for its design, performance, and new features, making it one of the top-selling smartphone series of its release year.
Design
It is the first major redesign since the iPhone X, similar to that of iPad Pro models since 2018 and the 4th-generation iPad Air. The iPhone 12 Pro and 12 Pro Max feature a flat chassis, a design seen from the iPhone 4 through the iPhone 5S and the first-generation iPhone SE. The notch size is similar to previous models. The bezels are around 35% thinner than on the iPhone 11 Pro and previous models. The new design also comes with Corning Inc.'s custom ceramic-hardened (glass-ceramic) front glass, "Ceramic Shield", while the back retains the previous generation of Corning Inc.'s custom dual ion-exchange strengthened glass. On the back is the same three-camera configuration found on the iPhone 11 Pro, but with larger apertures and an added LiDAR scanner.
The iPhone 12 Pro and 12 Pro Max are available in four colors: Silver, Graphite, Gold, and Pacific Blue. Pacific Blue is a new color replacing Midnight Green, Graphite is a renamed version of Space Gray, and Gold is updated to the new yellow gold shade introduced with the Apple Watch Series 6.
Specifications
Hardware
The iPhone 12 Pro uses Apple's six-core A14 Bionic processor, which contains a 16-core neural engine. It has three internal storage options: 128, 256, and 512 GB. The iPhone 12 Pro has an IP68 water and dust resistance rating, is resistant to dirt and grime, and is rated as water-resistant to a depth of 6 meters for up to 30 minutes. However, the manufacturer warranty does not cover liquid damage to the phone.
The iPhone 12 Pro, like the iPhone 12, is not supplied with EarPods (except in France) or the power adapter included with prior iPhone models. Apple claims this will reduce carbon emissions and that most users already own these items. Apple still supplies the USB-C to Lightning cable that was introduced with the iPhone 11 Pro. In addition to Lightning and Qi wireless charging, the phones introduce MagSafe wireless charging, a new magnet-based charging and accessory system that allow accessories such as chargers and cases to snap onto the back of the phones. MagSafe wireless charging supports up to 15 watts, is fast-charge capable, and is a reimagining of the MagSafe brand that was introduced in 2006 with the original MacBook Pro. The MagSafe Charger can be purchased separately, along with a variety of cases and other accessories.
The iPhone 12 Pro and 12 Pro Max support 5G cellular communications. This allows upload speeds of up to 200 Mbit/s (1 Mbit/s = 1 million bits per second) and download speeds of up to 4 Gbit/s. However, only models sold in the U.S. support the faster mmWave technology; those sold elsewhere in the world, including Canada, only support sub-6 GHz frequency bands. A new feature called Smart Data Mode provides 5G only when necessary to preserve battery life and data usage.
Displays
The iPhone 12 Pro has a 6.06 inch (154 mm) (marketed as 6.1 inch) OLED display with a resolution of 2532 × 1170 pixels (2.9 megapixels) at 460 ppi, while the iPhone 12 Pro Max has a 6.68 inch (170 mm) (marketed as 6.7 inch) OLED display with a resolution of 2778 × 1284 pixels (3.5 megapixels) at 458 ppi. Both models have the Super Retina XDR OLED display with thinner bezels than previous generation iPhones. The iPhone 12 Pro Max features the largest display on any iPhone to date. The phones also introduce a new glass-ceramic covering, named 'Ceramic Shield', which was co-developed with Corning Inc. Apple claims the Ceramic Shield has "4 times better drop performance" and that it is "tougher than any smartphone glass".
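As a quick arithmetic cross-check (a sketch, not an Apple specification), the quoted pixel densities follow directly from the panel resolutions and diagonal sizes above:

    # Pixel density (ppi) = diagonal pixel count / diagonal size in inches.
    import math

    def ppi(width_px, height_px, diagonal_in):
        return math.hypot(width_px, height_px) / diagonal_in

    print(round(ppi(2532, 1170, 6.06)))  # 460 (iPhone 12 Pro)
    print(round(ppi(2778, 1284, 6.68)))  # 458 (iPhone 12 Pro Max)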
Batteries
The iPhone 12 Pro is supplied with a 10.78 Wh (2,815 mAh) battery, a slight decrease from the 11.67 Wh (3,046 mAh) battery found in the iPhone 11 Pro, and is identical to the battery found in the standard iPhone 12. The iPhone 12 Pro Max has a 14.13 Wh (3,687 mAh) battery, another slight decrease from the 15.04 Wh (3,969 mAh) battery found in the iPhone 11 Pro Max. The battery is not user-replaceable.
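The watt-hour and milliamp-hour figures above are related by the battery's nominal voltage; the sketch below infers that voltage (about 3.83 V) from the quoted pairs rather than from any Apple documentation:

    # capacity_mAh = energy_Wh / voltage_V * 1000; solving for the voltage
    # from the quoted pairs gives roughly 3.83 V for both models.
    for model, wh, mah in [("12 Pro", 10.78, 2815), ("12 Pro Max", 14.13, 3687)]:
        volts = wh / mah * 1000
        print(f"iPhone {model}: {volts:.2f} V nominal")  # ~3.83 V each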
Chipsets
Both the iPhone 12 Pro and iPhone 12 Pro Max are supplied with the Apple A14 Bionic, the first ARM-based smartphone system-on-a-chip (SoC) manufactured on the 5 nm process node. However, unlike previous years, the iPhone 12 Pro and iPhone 12 Pro Max are not the first Apple devices to receive the newest A-series processor, with the fourth-generation iPad Air being the first device from Apple to contain the A14 Bionic chip. The iPhone 12 Pro and iPhone 12 Pro Max also contain the Apple M14 motion coprocessor. The iPhone 12 Pro and iPhone 12 Pro Max use Qualcomm's X55 5G modem.
Cameras
The iPhone 12 Pro features four cameras: one front-facing camera and three back-facing cameras, including a telephoto, wide, and ultra-wide camera. The iPhone 12 Pro also features a lidar scanner for AR and computer-aided photo enhancement services. The iPhone 12 Pro also adds Night Mode for time-lapse video recording on all four cameras. Unlike the iPhone 11 Pro and iPhone 11 Pro Max where the only difference was the screen size and battery capacity, the iPhone 12 Pro Max adds a 47% larger sensor and sensor-shift image stabilization to the main camera lens, and replaces the f/2.0 aperture 52 mm telephoto camera lens with a f/2.2 aperture 65 mm lens, allowing for a 2.5x optical zoom. The iPhone 12 Pro and iPhone 12 Pro Max are the first smartphones capable of shooting in 10-bit high dynamic range Dolby Vision 4K video at up to 60 frames per second.
Sensors
The iPhone 12 Pro and iPhone 12 Pro Max have largely the same sensors found on prior iPhone models going back to the iPhone X. These include an accelerometer, gyroscope, barometer, proximity sensor, ambient light sensor, and a digital compass. The devices also include the Face ID facial recognition system, which is made up of several sensors, mainly a dot projector, flood illuminator, and an infrared camera, allowing a user's face to be scanned and stored by the Secure Enclave.
A lidar scanner is the new sensor included in the 12 Pro and 12 Pro Max, similar to that of the fourth-generation iPad Pro, permitting additional augmented reality (AR) features to also be supported, such as the ability to measure a user's approximate height from the Measure app.
Software
The iPhone 12 Pro and 12 Pro Max run iOS, Apple's mobile operating system. The user interface of iOS is based on the concept of direct manipulation, using multi-touch gestures. Interface control elements consist of sliders, switches, and buttons. Interaction with the OS includes gestures such as swipe, tap, pinch, and reverse pinch, all of which have specific definitions within the context of the iOS operating system and its multi-touch interface. Internal accelerometers are used by some applications to respond to shaking the device (one common result is the undo command) or rotating it vertically (one common result is switching from portrait to landscape mode).
The iPhone 12 Pro was first supplied with iOS 14.1 alongside the iPhone 12 while the iPhone 12 Pro Max was supplied with iOS 14.2 alongside the iPhone 12 Mini. These phones come with the stock iOS apps, such as Safari, Weather, and Messages, and they also include Siri, the personal assistant included in iOS since iOS 5 with the release of the iPhone 4S.
These phones support the latest public release of iOS, currently iOS 18.
Reception
The iPhone 12 Pro received generally positive reviews. The Verge called it a "beautiful, powerful, and incredibly capable device", praising the new design reminiscent of the iPhone 5, the speed of the A14 Bionic processor, and its 5G capabilities, but noted the decrease in battery life compared to the iPhone 11 Pro and the low number of upgrades compared to the iPhone 12. Engadget also gave the iPhone 12 Pro a positive review, praising the MagSafe wireless charging and accessory system as well as the improved camera system, but noted the lack of upgrade motivation if users had already purchased a new iPhone in 2019.
Apple was criticized for the continued reliance on Face ID as the sole biometric option to unlock the device, which is incompatible with face masks. This limitation was lifted with the introduction of the fifth revision of iOS 14, which permits the user to unlock the phone while wearing a face mask by using a paired and passcode-unlocked Apple Watch as an alternative authenticator. The iPhone SE (3rd generation) is the only phone that Apple currently produces that supports Touch ID, an alternative option that is compatible with face masks. All models can still use a passcode to log in.
"OLED-gate"
Within two weeks of its public release, a thread was started at Apple Support describing a problem with pixels on the iPhone 12 and iPhone 12 Pro OLED displays not shutting off completely in black scenes, resulting in what was described as an "ugly glowing"; over 3,500 other users have since clicked the "I have this question too" button in the thread. Additional users have provided photos and videos online that demonstrate the problem; one of whom—whose video amassed over 50,000 views—claims Apple responded that they were working on the problem. However, Apple has not officially acknowledged the problem, which persists despite multiple software updates, leading users and pundits to fear a hardware problem.
Removal of the power adapter and EarPods
Apple, through an "environmental initiative", has removed the EarPods (except in France until January 31, 2022) and power adapter (except in São Paulo) from all new iPhone boxes, including the iPhone 12 and iPhone 12 Pro. According to Apple, removing the power adapter permitted Apple to avoid 180,000 metric tons of carbon emissions in fiscal year 2021 thanks to a shift in the mode of transport and product weight. Apple now includes a USB-C to Lightning cable, incompatible with the existing USB-A power adapters that Apple previously supplied with its devices. Users can still use their existing USB-A power adapters and Lightning cables to charge and sync, but must purchase or use an existing USB-C power adapter to utilize the included USB-C to Lightning cable. Starting with the iPhone 8, a USB Power Delivery (USB-PD) compliant charger is required to permit fast charging when using the USB-C to Lightning cable, with Apple suggesting the use of a 20W or greater USB-PD compliant charger to fast charge the iPhone 12.
Environmental data
Carbon footprint
The iPhone 12 Pro has a higher carbon footprint of CO2 emissions than the preceding iPhone 11 Pro, and the iPhone 12 Pro Max's footprint is likewise an increase compared to the iPhone 11 Pro Max. Of all emissions, 86% and 82% of those released by the iPhone 12 Pro and iPhone 12 Pro Max respectively are caused by device production and primary resource use, with the remaining emissions released by means of first use, transportation, and end-of-life processing.
Repairing
Several weeks after its release, it was discovered by iFixit and Australian tech YouTuber Hugh Jeffreys that a number of key components, such as the cameras, malfunction or display warnings if they are replaced with new parts or with parts taken from an otherwise identical donor unit. Internal Apple documents also mention that, beginning with the iPhone 12 and continuing with subsequent models, authorized technicians must run the phones through an internal System Configuration tool to reprogram repaired units in order to account for hardware changes. While Apple has yet to comment on the issue, the inability to replace key system components has raised concerns about repairability and planned obsolescence.
See also
Comparison of smartphones
History of the iPhone
List of iPhone models
Timeline of iPhone models
Explanatory notes
References
Discontinued flagship smartphones
Mobile phones introduced in 2020
Mobile phones with 4K video recording
Mobile phones with multiple rear cameras | IPhone 12 Pro | Technology | 3,297 |
39,364,055 | https://en.wikipedia.org/wiki/SACLA | The SPring-8 Angstrom Compact free electron LAser, referred to as SACLA (pronounced さくら (Sa-Ku-Ra)), is an X-ray free-electron laser (XFEL) in Harima Science Garden City, Japan, embedded in the SPring-8 accelerator and synchrotron complex. When it first came into operation in 2011, it was the second XFEL in the world and the first in Japan.
Design
Like other XFELs, SACLA uses self-amplified spontaneous emission to achieve extremely high intensities of X-rays. SACLA uses in-vacuum, short-period undulators, which is one of the unique factors in its design that allows it to achieve sub-Ångstrom wavelengths of 0.6 Å at a relatively much shorter distance of 0.7 km, compared to other similar XFELs like LCLS (2 km) or the European XFEL (3.4 km). An 8.5 GeV electron beam is used as the source.
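The sub-Ångstrom output can be illustrated with the standard undulator resonance condition. The sketch below uses the article's 8.5 GeV beam energy; the 18 mm undulator period and deflection parameter K = 1.3 are assumed illustrative values, not official machine parameters:

    # Undulator resonance condition (on-axis fundamental wavelength):
    #   lambda = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)
    E_GeV = 8.5
    mc2_GeV = 0.000511              # electron rest energy in GeV
    gamma = E_GeV / mc2_GeV         # Lorentz factor, ~16,600
    lambda_u = 0.018                # undulator period in metres (assumed)
    K = 1.3                         # deflection parameter (assumed)

    lam = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)
    print(f"{lam * 1e10:.2f} angstrom")  # ~0.60 angstrom

Because the short-period in-vacuum undulators keep lambda_u small, the same wavelength is reached with a lower beam energy and a much shorter facility than at longer-period machines.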
Animated Short Films
SACLA has released a number of animated short films to promote its research capabilities to the public. In July 2013, SACLA released two animated short films titled "Picotopia", which discussed cellular biology, and "Wasureboshi", which is about conception.
On December 3, 2013, another animated short titled "Mirai Koshi: Harima SACLA" was released to promote the XFEL's ability to detect atoms and molecules.
References
Further reading
SACLA home page
First Light at SACLA
Free-electron lasers
Synchrotron radiation
Synchrotron-related techniques
X-ray instrumentation
Riken
Harima Science Garden City | SACLA | Technology,Engineering | 348 |
8,446,001 | https://en.wikipedia.org/wiki/IEEE%20802.1AE | IEEE 802.1AE (also known as MACsec) is a network security standard that operates at the medium access control layer and defines connectionless data confidentiality and integrity for media access independent protocols. It is standardized by the IEEE 802.1 working group.
Details
Key management and the establishment of secure associations is outside the scope of 802.1AE, but is specified by 802.1X-2010.
The 802.1AE standard specifies the implementation of a MAC Security Entity (SecY) that can be thought of as part of the stations attached to the same LAN, providing secure MAC service to the client. The standard defines
MACsec frame format, which is similar to the Ethernet frame, but includes additional fields:
Security Tag, which is an extension of the EtherType
Message authentication code (Integrity Check Value, ICV)
Secure Connectivity Associations that represent groups of stations connected via unidirectional Secure Channels
Security Associations within each secure channel. Each association uses its own Secure Association Key (SAK). More than one association is permitted within the channel for the purpose of key change without traffic interruption (standard requires devices to support at least two)
A default cipher suite of GCM-AES-128 (Galois/Counter Mode of Advanced Encryption Standard cipher with 128-bit key)
GCM-AES-256 using a 256-bit key was added to the standard five years later.
Security tag inside each frame in addition to EtherType includes:
association number within the channel
packet number to provide a unique initialization vector for encryption and authentication algorithms as well as protection against replay attacks
optional LAN-wide Secure Channel Identifier (SCI), which is not required on point-to-point links.
The IEEE 802.1AE (MACsec) standard specifies a set of protocols to meet the security requirements for protecting data traversing Ethernet LANs.
MACsec allows unauthorized LAN connections to be identified and excluded from communication within the network. In common with IPsec and TLS, MACsec defines a security infrastructure to provide data confidentiality, data integrity and data origin authentication.
By assuring that a frame comes from the station that claimed to have sent it, MACsec can mitigate attacks on Layer 2 protocols.
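As an illustration of the default GCM-AES-128 cipher suite, the following is a minimal, hedged sketch (using the third-party Python cryptography package) of how a frame could be protected: the 96-bit GCM nonce is formed from the 64-bit SCI and the 32-bit packet number, the SecTAG is carried as authenticated-but-unencrypted associated data, and the 16-byte GCM tag serves as the ICV. The SecTAG byte layout here is simplified, not the standard's exact encoding:

    # Requires the third-party "cryptography" package (pip install cryptography).
    import os
    import struct
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    sak = AESGCM.generate_key(bit_length=128)  # Secure Association Key (SAK)
    sci = os.urandom(8)                        # Secure Channel Identifier, 64 bits
    pn = 1                                     # packet number, for replay protection

    nonce = sci + struct.pack(">I", pn)        # 96-bit IV: SCI || PN
    # Simplified SecTAG: MACsec EtherType 0x88E5, then PN and SCI
    # (real SecTAGs also carry TCI/AN and short-length fields).
    sectag = b"\x88\xe5" + struct.pack(">I", pn) + sci

    payload = b"client data frame"
    aead = AESGCM(sak)
    protected = aead.encrypt(nonce, payload, sectag)  # ciphertext || 16-byte ICV
    assert aead.decrypt(nonce, protected, sectag) == payload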
Publishing history:
2006 – Original publication (802.1AE-2006)
2011 – 802.1AEbn amendment adds the option to use 256 bit keys to the standard. (802.1AEbn-2011)
2013 – 802.1AEbw amendment defines GCM-AES-XPN-128 and GCM-AES-XPN-256 cipher suites in order to extend the packet number to 64 bits. (802.1AEbw-2013)
2017 – 802.1AEcg amendment specifies Ethernet Data Encryption devices. (802.1AEcg-2017)
2018 – 802.1AE-2018
2023 – 802.1AEdk-2023 amendment adding the option to reduce the ability of external observers to correlate user data frames, their sizes, transmission timing and transmission frequency with users’ identities and activities.
See also
Kerberos – using tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner
Virtual LAN (VLAN) – any broadcast domain that is partitioned and isolated in a computer network at the data link layer
IEEE 802.11i-2004 (WPA2)
Wi-Fi Protected Access (WPA)
Wired Equivalent Privacy (WEP)
References
External links
802.1AE-2018
MACsec Toolkit - A source code toolkit implementation of IEEE 802.1X-2010 (MACsec control plane) and IEEE802.1AE (MACsec data plane)
IEEE 802
Computer network technology
Cryptography standards
Networking standards
Link protocols | IEEE 802.1AE | Technology,Engineering | 768 |
6,397,470 | https://en.wikipedia.org/wiki/Runcinated%20120-cells | In four-dimensional geometry, a runcinated 120-cell (or runcinated 600-cell) is a convex uniform 4-polytope, being a runcination (a 3rd order truncation) of the regular 120-cell.
There are 4 degrees of runcination of the 120-cell, including permutations with truncations and cantellations.
The runcinated 120-cell can be seen as an expansion applied to a regular 4-polytope, the 120-cell or 600-cell.
Runcinated 120-cell
The runcinated 120-cell or small disprismatohexacosihecatonicosachoron is a uniform 4-polytope. It has 2640 cells: 120 dodecahedra, 720 pentagonal prisms, 1200 triangular prisms, and 600 tetrahedra. Its vertex figure is a nonuniform triangular antiprism (equilateral-triangular antipodium): its bases represent a dodecahedron and a tetrahedron, and its flanks represent three triangular prisms and three pentagonal prisms.
Alternate names
Runcinated 120-cell / Runcinated 600-cell (Norman W. Johnson)
Runcinated hecatonicosachoron / Runcinated dodecacontachoron / Runcinated hexacosichoron / Runcinated polydodecahedron / Runcinated polytetrahedron
Small disprismatohexacosihecatonicosachoron (acronym: sidpixhi) (George Olshevsky, Jonathan Bowers)
Images
Runcitruncated 120-cell
The runcitruncated 120-cell or prismatorhombated hexacosichoron is a uniform 4-polytope. It contains 2640 cells: 120 truncated dodecahedra, 720 decagonal prisms, 1200 triangular prisms, and 600 cuboctahedra. Its vertex figure is an irregular rectangular pyramid, with one truncated dodecahedron, two decagonal prisms, one triangular prism, and one cuboctahedron.
Alternate names
Runcicantellated 600-cell (Norman W. Johnson)
Prismatorhombated hexacosichoron (Acronym: prix) (George Olshevsky, Jonathan Bowers)
Images
Runcitruncated 600-cell
The runcitruncated 600-cell or prismatorhombated hecatonicosachoron is a uniform 4-polytope. It is composed of 2640 cells: 120 rhombicosidodecahedra, 600 truncated tetrahedra, 720 pentagonal prisms, and 1200 hexagonal prisms. It has 7200 vertices, 18000 edges, and 13440 faces (2400 triangles, 7200 squares, 1440 pentagons, and 2400 hexagons).
Alternate names
Runcicantellated 120-cell (Norman W. Johnson)
Prismatorhombated hecatonicosachoron (Acronym: prahi) (George Olshevsky, Jonathan Bowers)
Images
Omnitruncated 120-cell
The omnitruncated 120-cell or great disprismatohexacosihecatonicosachoron is a convex uniform 4-polytope, composed of 2640 cells: 120 truncated icosidodecahedra, 600 truncated octahedra, 720 decagonal prisms, and 1200 hexagonal prisms. It has 14400 vertices, 28800 edges, and 17040 faces (10800 squares, 4800 hexagons, and 1440 decagons). It is the largest nonprismatic convex uniform 4-polytope.
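These element counts can be cross-checked (a consistency check, not material from the cited references) against Euler's relation for the boundary of a convex 4-polytope, V - E + F - C = 0, and against the fact that every 2-face is shared by exactly two cells:

    # Boundary of a convex 4-polytope: V - E + F - C = 0.
    polytopes = {
        "runcitruncated 600-cell": (7200, 18000, 13440, 2640),
        "omnitruncated 120-cell": (14400, 28800, 17040, 2640),
    }
    for name, (v, e, f, c) in polytopes.items():
        assert v - e + f - c == 0, name

    # Summing the faces of the omnitruncated 120-cell's cells double-counts
    # every face: 120 truncated icosidodecahedra (30 squares, 20 hexagons,
    # 12 decagons), 600 truncated octahedra (6 squares, 8 hexagons),
    # 720 decagonal prisms (10 squares, 2 decagons), 1200 hexagonal prisms
    # (6 squares, 2 hexagons).
    squares = (120 * 30 + 600 * 6 + 720 * 10 + 1200 * 6) // 2   # 10800
    hexagons = (120 * 20 + 600 * 8 + 1200 * 2) // 2             # 4800
    decagons = (120 * 12 + 720 * 2) // 2                        # 1440
    assert squares + hexagons + decagons == 17040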
The vertices and edges form the Cayley graph of the Coxeter group H4.
Alternate names
Omnitruncated 120-cell / Omnitruncated 600-cell (Norman W. Johnson)
Omnitruncated hecatonicosachoron / Omnitruncated hexacosichoron / Omnitruncated polydodecahedron / Omnitruncated polytetrahedron
Great disprismatohexacosihecatonicosachoron (Acronym gidpixhi) (George Olshevsky, Jonathan Bowers)
Images
Models
The first complete physical model of a 3D projection of the omnitruncated 120-cell was built by a team led by Daniel Duddy and David Richter on August 9, 2006 using the Zome system in the London Knowledge Lab for the 2006 Bridges Conference.
Full snub 120-cell
The full snub 120-cell or omnisnub 120-cell, defined as an alternation of the omnitruncated 120-cell, cannot be made uniform, but it can be given a Coxeter diagram and symmetry [5,3,3]+, and constructed from 1200 octahedra, 600 icosahedra, 720 pentagonal antiprisms, 120 snub dodecahedra, and 7200 tetrahedra filling the gaps at the deleted vertices. It has 9840 cells, 35040 faces, 32400 edges, and 7200 vertices.
Related polytopes
These polytopes are a part of a set of 15 uniform 4-polytopes with H4 symmetry:
Notes
References
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
(Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
J.H. Conway and M.J.T. Guy: Four-Dimensional Archimedean Polytopes, Proceedings of the Colloquium on Convexity at Copenhagen, page 38 und 39, 1965
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
Four-dimensional Archimedean Polytopes (German), Marco Möller, 2004 PhD dissertation m55 m62 m60 m64
x3o3o5x - sidpixhi, x3o3x5x - prix, x3x3o5x - prahi, x3x3x5x - gidpixhi
External links
H4 uniform polytopes with coordinates: t03{5,3,3} t013{3,3,5} t013{5,3,3} t0123{5,3,3}
Uniform 4-polytopes | Runcinated 120-cells | Physics | 1,472 |
51,601,592 | https://en.wikipedia.org/wiki/Floral%20scent | Floral scent, or flower scent, is composed of all the volatile organic compounds (VOCs), or aroma compounds, emitted by floral tissue (e.g. flower petals). Other names for floral scent include aroma, fragrance, floral odour or perfume. Flower scent of most flowering plant species encompasses a diversity of VOCs, sometimes up to several hundred different compounds. The primary functions of floral scent are to deter herbivores, especially folivorous insects (see Plant defense against herbivory), and to attract pollinators. Floral scent is one of the most important communication channels mediating plant-pollinator interactions, along with visual cues (flower color, shape, etc.).
Biotic interactions
Perception by flower visitors
Flower visitors such as insects and bats detect floral scents thanks to chemoreceptors with varying specificity for particular VOCs. The binding of a VOC to a chemoreceptor triggers the activation of an antennal glomerulus, with further projection to olfactory receptor neurons, and finally triggers a behavioral response once the information is processed (see also Olfaction, Insect olfaction). The simultaneous perception of several VOCs may activate several glomeruli, but the output signal may not be additive because of synergistic or antagonistic mechanisms linked with inter-neuronal activity. Therefore, a VOC perceived within a floral blend may trigger a different behavioral response than when perceived in isolation. Similarly, the output signal is not proportional to the amount of VOCs: some VOCs present in low amounts in the floral blend can have major effects on pollinator behavior. A good characterization of floral scent, both qualitative and quantitative, is necessary to understand and potentially predict flower visitors' behavior.
Flower visitors use floral scents to detect, recognize and locate their host species and even discriminate among flowers of the same plant. This is made possible by the high specificity of floral scent, where both diversity of VOCs and their relative amount may characterize the flowering species, an individual plant, a flower of the plant, and the distance of the plume from the source.
To make the best use of this specific information, flower visitors rely on long-term and short-term memory that allows them to efficiently choose their flowers. They learn to associate the floral scent of a plant with a reward such as nectar and pollen, and have different behavioral responses to known scents versus unknown ones. They are also able to react similarly to slightly different odor blends.
Mediated biotic interactions
A primary function of floral scent is to attract pollinators and ensure the reproduction of animal-pollinated plants.
Some families of VOCs presented in floral scents have likely evolved as herbivore repellents. However, these plant defenses are also used by herbivores themselves to locate a plant resource, similar to pollinators attracted by the floral scent. Therefore, flower traits can be subject to antagonistic selection pressures (positive selection by pollinators and negative selection by herbivores).
Plant-plant communications
Plants have an array of volatile compounds they can release to signal other plants. By releasing these cues, plants learn more about their environment and respond accordingly. However, there are still many aspects of plant scents that scientists are trying to understand, including how many of the volatile compounds released by plants come from a floral source. A study concluded that floral cues are as important as other volatile compounds and are pertinent for plant-to-plant communication. Further research found that plants receiving floral volatiles gain more fitness from them than from other volatile cues, because floral cues are the only compounds released by plants that indicate the kind of mating environment. Plants are able to respond to these mating cues by changing adjustable floral phenotypes that can affect pollination and mating. Floral volatiles can ward off or attract pollinators and mates at once, and the number of floral signals released by a plant can control the degree of attraction or repellence the plant achieves. The composition of floral compounds and the rate of their release are the factors thought to control attraction and repellence, and both can respond to ecological cues like high plant density and temperature. For instance, in sexually deceptive orchids, floral scents emitted after pollination reduce the flower's attractiveness to pollinators. This mechanism acts as a signal directing pollinators to visit unpollinated flowers.
Environmental conditions can affect plant communication and signaling. Signal factors include temperature and plant density. High environmental temperatures increase the rate at which floral compounds are released, which can increase the amount of signal emitted and thus its ability to reach more plants. When plant density increases, plant communication increases as well, since plants near each other can have their signals reach many neighboring plants. This can also increase the signal's reliability and lower the chance that the signal will degrade before it can reach other plants.
Biosynthesis of floral VOCs
Most floral VOCs belong to three main chemical classes. VOCs in the same chemical class are synthesized from a shared precursor, but the biochemical pathway is specific for each VOC and often varies from one plant species to another.
Terpenoids (or isoprenoids) are derived from isoprene and synthesized via the mevalonate pathway or the erythritol phosphate pathway. They represent the majority of floral VOCs and are often the most abundant compounds in floral scent blends.
The second chemical class is composed of the fatty acid derivatives synthesized from acetyl-CoA, most of which are known as green leaf volatiles, because they are also emitted by vegetative parts (i.e.: leaves and stems) of plants, and sometimes higher in abundance than from floral tissue.
The third chemical class is composed of benzenoids/phenylpropanoids, also known as aromatic compounds; they are synthesized from phenylalanine.
Regulation of emissions
Floral scent emissions of most flowering plants vary predictably throughout the day, following a circadian rhythm. This variation is controlled by light intensity. Maximal emissions coincide with peaks of the highest activity of visiting pollinators. For instance, snapdragon flowers, mostly pollinated by bees, have the highest emissions at noon, whereas nocturnally-visited tobacco plants have the highest emissions at night.
Floral scent emissions also vary along with floral development, with the highest emissions at anthesis, i.e. when the flower is fecund (highly fertile), and reduced emissions after pollination, probably due to mechanisms linked with fecundation. In tropical orchids, floral scent emission is terminated immediately following pollination, reducing the expenditure of energy on fragrance production. In petunia flowers, ethylene is released to stop the synthesis of benzenoid floral volatiles after successful pollination.
Abiotic factors, such as temperature, atmospheric CO2 concentration, hydric stress, and soil nutrient status also impact the regulation of floral scent. For instance, increased temperatures in the environment can increase the emission of VOCs in flowers, potentially altering communication between plants and pollinators.
Finally, biotic interactions may also affect the floral scent. Plant leaves attacked by herbivores emit new VOCs in response to the attack, the so-called herbivore-induced plant volatiles (HIPVs). Similarly, damaged flowers have a modified floral scent compared to undamaged ones. Micro-organisms present in nectar may alter floral scent emissions as well.
Measurement
Measuring floral scent both qualitatively (identification of VOCs) and quantitatively (absolute and/or relative emission of VOCs) requires the use of analytical chemistry techniques. It requires collecting floral VOCs, and then analyzing them.
VOCs sampling
The most popular methods rely on adsorbing floral VOCs on an adsorbent material such as SPME fibers or cartridges by pumping air sampled around inflorescences through the adsorbent material.
It is also possible to extract chemicals stored in petals by immersing them in a solvent and then analyzing the liquid residue. This approach is better suited to the study of heavier organic compounds, and/or VOCs that are stored in floral tissue before being emitted into the air.
Sample analysis
Desorption
Thermal desorption: the adsorbent material is flash-heated so that all adsorbed VOCs are carried away from the adsorbent and injected into the separation system. This is how injectors in gas chromatography machines work, as they literally volatilize introduced samples. For VOCs adsorbed on larger amounts of adsorbent material, such as cartridges, thermal desorption may require a dedicated machine, a thermal desorber, connected to the separation system.
Desorption by solvent: VOCs adsorbed on the adsorbent material are carried away by a small quantity of solvent, which is volatilized and injected into the separation system. The most commonly used solvents are very volatile molecules, such as methanol, chosen to avoid co-elution with slightly heavier VOCs.
Separation
Gas chromatography (GC) is ideal for separating volatilized VOCs due to their low molecular weight. VOCs are carried by a carrier gas (typically helium) through a chromatographic column (the stationary phase) for which they have different affinities, which allows them to be separated.
Liquid chromatography may be used for liquid extractions of floral tissue.
Detection and identification
Separation systems are coupled with a detector that allows the detection and identification of VOCs based on their molecular weight and chemical properties. The most widely used system for the analysis of floral scent samples is GC-MS (gas chromatography coupled with mass spectrometry).
Quantification
Quantification of VOCs is based on the peak area measured on the chromatogram and compared to the peak area of a chemical standard:
Internal calibration: a known quantity of a specific chemical standard is injected together with the VOCs, and the measured area on the chromatogram is proportional to the injected quantity. Because the chemical properties of VOCs alter their affinity for the stationary phase (the chromatographic column), and consequently the peak area on the chromatogram, it is best to use several standards that best reflect the chemical diversity of the floral scent sample. This method allows a more robust comparison among samples.
External calibration: calibration curves (quantity vs. peak area) are established independently by injecting a range of quantities of a chemical standard. This method is best when the relative and absolute amounts of VOCs vary from sample to sample and from VOC to VOC, and when the chemical diversity of VOCs in the sample is high. However, it is more time-consuming and may introduce errors (e.g. matrix effects due to solvent, or very abundant VOCs compared to trace VOCs).
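The arithmetic behind these two calibration approaches can be sketched as follows (a hypothetical Python illustration; the function names and numbers are invented for this example, and the internal-calibration version assumes an equal detector response between analyte and standard):

```python
import numpy as np

def internal_calibration(peak_area, std_area, std_quantity_ng):
    """Quantity of a VOC from its peak area relative to a co-injected
    internal standard of known quantity (assumes equal detector response)."""
    return peak_area / std_area * std_quantity_ng

def external_calibration(peak_area, std_quantities_ng, std_areas):
    """Quantity of a VOC from a linear calibration curve (quantity vs. peak
    area) fitted to separately injected amounts of a chemical standard."""
    slope, intercept = np.polyfit(std_areas, std_quantities_ng, 1)
    return slope * peak_area + intercept

# Made-up numbers for illustration only
print(internal_calibration(peak_area=1.2e6, std_area=8.0e5, std_quantity_ng=50.0))    # 75.0 ng
print(external_calibration(1.2e6, [10, 25, 50, 100], [2.4e5, 6.1e5, 1.19e6, 2.4e6]))  # ~50 ng
```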
Specificity of floral scent analysis
Floral scent is often composed of hundreds of VOCs in highly variable proportions. The method used is a tradeoff between accurately detecting and quantifying minor compounds and avoiding detector saturation by major compounds. For most routinely used analysis methods, the detection threshold of many VOCs is still higher than the perception threshold of insects, which limits our capacity to understand plant-insect interactions mediated by floral scent.
Further, the chemical diversity in floral scent samples is challenging. The time of analysis is proportional to the range in molecular weight of the VOCs present in the sample, so high diversity increases analysis time. Floral scent may also contain very similar molecules, such as isomers and especially enantiomers, which tend to co-elute and are therefore very difficult to separate. Unambiguously detecting and quantifying them is nevertheless important, as enantiomers may trigger very different responses in pollinators.
References
Chemical ecology
Flowers
Pollination | Floral scent | Chemistry,Biology | 2,440 |
15,868,806 | https://en.wikipedia.org/wiki/Specific%20force | Specific force (SF) is a mass-specific quantity defined as the quotient of force per unit mass.
It is a physical quantity of the same kind as acceleration, with the dimension of length per time squared and units of metres per second squared (m·s−2).
It is normally applied to forces other than gravity, to emulate the relationship between gravitational acceleration and gravitational force.
It can also be called mass-specific weight (weight per unit mass), as the weight of an object is equal to the magnitude of the gravity force acting on it.
The g-force is an instance of specific force measured in units of the standard gravity (g) instead of m/s², i.e., in multiples of g (e.g., "3 g").
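As a trivial numeric sketch of this unit conversion (hypothetical Python; the standard-gravity constant 9.80665 m/s² is the defining value of 1 g):

```python
STANDARD_GRAVITY = 9.80665  # m/s^2, defining value of 1 g

def to_g(specific_force_ms2):
    """Express a specific force given in m/s^2 as a multiple of g."""
    return specific_force_ms2 / STANDARD_GRAVITY

print(to_g(29.42))  # ~3.0, i.e. "3 g"
```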
Type of acceleration
The (mass-)specific force is not a coordinate acceleration, but rather a proper acceleration, which is the acceleration relative to free-fall. Forces, specific forces, and proper accelerations are the same in all reference frames, but coordinate accelerations are frame-dependent. For free bodies, the specific force is the cause of, and a measure of, the body's proper acceleration.
The acceleration of an object free falling towards the earth depends on the reference frame (it disappears in the free-fall frame, also called the inertial frame), but any g-force "acceleration" will be present in all frames. This specific force is zero for freely-falling objects, since gravity acting alone does not produce g-forces or specific forces.
Accelerometers on the surface of the Earth measure a constant 9.8 m/s² even when they are not accelerating (that is, when they do not undergo coordinate acceleration). This is because accelerometers measure the proper acceleration produced by the g-force exerted by the ground (gravity acting alone never produces g-force or specific force). Accelerometers measure specific force (proper acceleration), which is the acceleration relative to free-fall, not the "standard" acceleration that is relative to a coordinate system.
Hydraulics
In open channel hydraulics, specific force ($F_s$) has a different meaning:

$F_s = \dfrac{Q^2}{gA} + zA$

where Q is the discharge, g is the acceleration due to gravity, A is the cross-sectional area of flow, and z is the depth of the centroid of flow area A.
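A minimal numerical sketch of this definition (hypothetical Python; assumes a rectangular channel, so that A = b·y and the centroid depth is y/2):

```python
def specific_force(Q, b, y, g=9.81):
    """Specific force for a rectangular channel:
    F_s = Q^2/(g*A) + z*A, with A = b*y and centroid depth z = y/2."""
    A = b * y    # cross-sectional flow area (m^2)
    z = y / 2    # depth of centroid below the surface (m)
    return Q**2 / (g * A) + z * A   # units: m^3

# Example: 5 m^3/s in a 2 m wide channel flowing 1.2 m deep (made-up numbers)
print(specific_force(Q=5.0, b=2.0, y=1.2))  # ~2.50 m^3
```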
See also
Acceleration
Proper acceleration
References
Physical quantities
Hydraulic engineering
Acceleration | Specific force | Physics,Mathematics,Engineering,Environmental_science | 499 |
2,379,141 | https://en.wikipedia.org/wiki/5S%20%28methodology%29 | 5S (Five S) is a workplace organization method that uses a list of five Japanese words: seiri (整理), seiton (整頓), seisō (清掃), seiketsu (清潔), and shitsuke (躾). These have been translated as 'sort', 'set in order', 'shine', 'standardize', and 'sustain'. The list describes how to organize a work space for efficiency and effectiveness by identifying and sorting the items used, maintaining the area and items, and sustaining the new organizational system. The decision-making process usually comes from a dialogue about standardization, which builds understanding among employees of how they should do the work.
In some quarters, 5S has become 6S, the sixth element being safety (safe).
Rather than a specific stand-alone methodology, 5S is frequently viewed as an element of a broader construct known as visual control, visual workplace, or visual factory. Under those (and similar) terminologies, Western companies were applying the underlying concepts of 5S before the formal 5S methodology was published in English. For example, a workplace-organization photo from Tennant Company (a Minneapolis-based manufacturer) quite similar to the one accompanying this article appeared in a manufacturing-management book in 1986.
Origins
5S was developed in Japan and was identified as one of the techniques that enabled just-in-time manufacturing.
Two major frameworks for understanding and applying 5S to business environments have arisen, one proposed by Takahashi and Osada, the other by Hiroyuki Hirano.
Hirano provided a structure to improve programs with a series of identifiable steps, each building on its predecessor.
Before this Japanese management framework, a similar "scientific management" was proposed by Alexey Gastev and the USSR Central Institute of Labour (CIT) in Moscow.
Each S
There are five 5S phases. They can be translated to English as 'sort', 'set in order', 'shine', 'standardize', and 'sustain'. Other translations are possible.
Sort (seiri 整理)
Seiri is sorting through all items in a location and removing all unnecessary items from the location.
Goals:
Reduce time loss looking for an item by reducing the number of unnecessary items.
Reduce the chance of distraction by unnecessary items.
Simplify inspection.
Increase the amount of available, useful space.
Increase safety by eliminating obstacles.
Implementation:
Check all items in a location and evaluate whether or not their presence at the location is useful or necessary.
Remove unnecessary items as soon as possible. Place those that cannot be removed immediately in a 'red tag area' so that they are easy to remove later on.
Keep the working floor clear of materials except for those that are in use for production.
Set in order (seiton 整頓)
(Sometimes shown as Straighten)
Seiton is putting all necessary items in the optimal place for fulfilling their function in the workplace.
Goal:
Make the workflow smooth and easy.
Implementation:
Arrange work stations in such a way that all tooling/equipment is in close proximity, in an easy to reach spot and in a logical order adapted to the work performed. Place components according to their uses, with the frequently used components being nearest to the workplace.
Arrange all necessary items so that they can be easily selected for use. Make it easy to find and pick up necessary items.
Assign fixed locations for items. Use clear labels, marks or hints so that items are easy to return to the correct location and so that it is easy to spot missing items.
Shine (seisō 清掃)
Seisō is sweeping or cleaning and inspecting the workplace, tools and machinery on a regular basis.
Goals:
Improves the production process efficiency and safety, reduces waste, prevents errors and defects.
Keep the workplace safe and easy to work in.
Keep the workplace clean and pleasing to work in.
When in place, anyone unfamiliar with the environment must be able to detect any problems within 5 seconds.
Implementation:
Clean the workplace and equipment on a daily basis, or at another appropriate (high frequency) cleaning interval.
Inspect the workplace and equipment while cleaning.
Sustaining hygiene (seiketsu 清潔)
Seiketsu is to maintain hygiene and cleanliness; this is the actual translation. It was often misrepresented as "standardize" simply to suit the 5S acronym, but the original concept had nothing to do with standardization or uniformity (to seek uniformity in a setting where the tasks are not uniform is simply absurd).
Goal:
Establish procedures and schedules to ensure the cleanliness of workplace.
Implementation:
Develop a work structure that will support the new practices and make it part of the daily routine.
Ensure everyone knows their responsibilities of performing the sorting, organizing and cleaning.
Use photos and visual controls to help keep everything as it should be.
Review the status of 5S implementation regularly using audit checklists.
Sustain/self-discipline (shitsuke 躾)
Shitsuke, or sustain, means upholding the developed processes through the self-discipline of the workers. It also translates as "do without being told".
Goal:
Ensure that the 5S approach is followed.
Implementation:
Organize training sessions.
Perform regular audits using a 5S audit checklist to ensure that all defined standards are being implemented and followed.
Implement improvements whenever possible. Worker inputs can be very valuable for identifying improvements.
6S
The 6S methodology represents an advanced form of the 5S methodology, incorporating safety as a key element. This change positions safety at the forefront, stressing its critical role in operational settings. Safety's integration fundamentally alters the approach to organizing workplaces, ensuring that safety considerations are integral from the outset. The 6S model promotes a comprehensive strategy where safety and operational processes are interlinked and equally important. This adaptation underscores the importance of active hazard prevention and a robust safety culture in improving overall workplace efficiency and employee health. Successful implementation of the 6S Lean method in a workplace requires:
A deep understanding and prior experience with the 5S methodology.
A mechanism for hazard identification and reporting.
Industry-specific safety training for heightened safety awareness.
A commitment to conduct regular discussions with employees about the principles and practices of 5S/6S.
Endorsement and continuous support from senior management, including the allocation of necessary resources.
7S
The 7S methodology incorporates 6S safety, and adds a new element for oversight.
Variety of applications
5S methodology has expanded from manufacturing and is now being applied to a wide variety of industries including health care, education, and government. Visual management and 5S can be particularly beneficial in health care because a frantic search for supplies to treat an in-trouble patient (a chronic problem in health care) can have dire consequences.
Although the origins of the 5S methodology are in manufacturing, it can also be applied to knowledge economy work, with information, software, or media in the place of physical product.
In lean product and process development
The output of engineering and design in a lean enterprise is information; the theory behind using 5S here is "Dirty, cluttered, or damaged surfaces attract the eye, which spends a fraction of a second trying to pull useful information from them every time we glance past. Old equipment hides the new equipment from the eye and forces people to ask which to use".
See also
Japanese aesthetics
Just-in-time manufacturing
Kaikaku
Kaizen
Kanban
Lean manufacturing
Muda
Gogyo (traditional Japanese philosophy)
References
Methodology
Japanese business terms
Lean manufacturing
Occupational safety and health | 5S (methodology) | Engineering | 1,463 |
25,013 | https://en.wikipedia.org/wiki/Pi%20Day | Pi Day is an annual celebration of the mathematical constant π (pi). Pi Day is observed on March 14 (the 3rd month) since 3, 1, and 4 are the first three significant figures of π, and was first celebrated in the United States. It was founded in 1988 by Larry Shaw, an employee of a science museum in San Francisco, the Exploratorium. Celebrations often involve eating pie or holding pi recitation competitions. In 2009, the United States House of Representatives supported the designation of Pi Day. UNESCO's 40th General Conference designated Pi Day as the International Day of Mathematics in November 2019.
Other dates when people celebrate pi include Pi Approximation Day on July 22 (22/7 in the day/month format, an approximation of π) and June 28 (6.28, an approximation of 2π, or tau).
History
In 1988, the earliest known official or large-scale celebration of Pi Day was organized by Larry Shaw at the San Francisco Exploratorium, where Shaw worked as a physicist, with staff and public marching around one of its circular spaces, then consuming fruit pies. The Exploratorium continues to hold Pi Day celebrations.
On March 12, 2009, the U.S. House of Representatives passed a non-binding resolution (111 H. Res. 224), recognizing March 14, 2009, as National Pi Day. For Pi Day 2010, Google presented a Google Doodle celebrating the holiday, with the word Google laid over images of circles and pi symbols; and for the 30th anniversary in 2018, it was a Dominique Ansel pie with the circumference divided by its diameter.
Some observed the entire month of March 2014 (3/14) as "Pi Month". In the year 2015, March 14 was celebrated as "Super Pi Day". It had special significance, as the date is written as 3/14/15 in month/day/year format. At 9:26:53, the date and time together represented the first ten digits of π, and later that second, "Pi Instant" represented all of π's digits.
Observance
Pi Day has been observed in many ways, including eating pie, throwing pies and discussing the significance of the number π, due to a pun based on the words "pi" and "pie" being homophones in English (/paɪ/), and the coincidental circular shape of many pies. Many pizza and pie restaurants offer discounts, deals, and free products on Pi Day. Also, some schools hold competitions as to which student can recall pi to the highest number of decimal places.
The Massachusetts Institute of Technology has often mailed its application decision letters to prospective students for delivery on Pi Day. Starting in 2012, MIT has announced it will post those decisions (privately) online on Pi Day at exactly 6:28 pm, which they have called "Tau Time", to honor the rival numbers pi and tau equally. In 2015, the regular decisions were put online at 9:26 am, following that year's "pi minute", and in 2020, regular decisions were released at 1:59 pm, so that the date and time together formed the first six digits of pi (3.14159).
Princeton, New Jersey, hosts numerous events in a combined celebration of Pi Day and Albert Einstein's birthday, which is also March 14. Einstein lived in Princeton for more than twenty years while working at the Institute for Advanced Study. In addition to pie eating and recitation contests, there is an annual Einstein look-alike contest.
In 2024, the recreational mathematician Matt Parker and a team of hundreds of volunteers at City of London School spent six days calculating 139 correct digits of pi by hand, in what Parker claimed was "the biggest hand calculation in a century". On 15 August 2024, the main-belt asteroid 314159 Mattparker was named in his honor. The citation highlights Parker's biennial "Pi Day challenges", stating that they have helped to popularize mathematics.
Alternative dates
Pi Day is frequently observed on March 14 (3/14 in the month/day date format), but related celebrations have been held on alternative dates.
Pi Approximation Day is observed on July 22 (22/7 in the day/month date format), since the fraction 22/7 is a common approximation of π, which is accurate to two decimal places and dates from Archimedes. In Indonesia, a country that uses the DD/MM/YYYY date format, some people celebrate Pi Day every July 22.
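A quick check of the approximation's accuracy (hypothetical Python; follows directly from the values of 22/7 and π):

```python
import math

approx = 22 / 7                      # 3.142857...
print(approx, math.pi)               # 3.142857142857143 3.141592653589793
print(abs(approx - math.pi) < 5e-3)  # True: agrees to two decimal places
```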
Tau Day, also known as Two-Pi Day, is observed on June 28 (6/28 in the month/day format). The number τ, denoted by the Greek letter tau, is the ratio of a circle's circumference to its radius; it equals 2π, a common multiple in mathematical formulae, and approximately equals 6.28. Some have argued that τ is the clearer and more fundamental constant and that Tau Day should be celebrated alongside or instead of Pi Day. Celebrants of this date jokingly suggest eating "twice the pie".
Some also celebrate pi on November 10, since it is the 314th day of the year (in leap years, on November 9).
Gallery
See also
Lists of holidays
National Mathematics Day (India)
Mole Day
Sequential time
Square Root Day
Doomsday rule
Notes
References
External links
Exploratorium's Pi Day Web Site
Official website of the International Day of Mathematics
UNESCO page on the International Day of Mathematics
NPR provides a "Pi Rap" audiovideo
Pi Day
Professor Lesser's Pi Day page
数学漫谈 (A Tour of Mathematics), a public lecture (in Chinese) delivered by Professor Ya-xiang Yuan (President of International Council for Industrial and Applied Mathematics) on 14 March 2020, the first International Day of Mathematics (slides)
1988 establishments in California
March observances
July observances
Observances about science
Pi
Recurring events established in 1988
Unofficial observances | Pi Day | Mathematics | 1,194 |
9,465,910 | https://en.wikipedia.org/wiki/Zero%20one%20infinity%20rule | The Zero one infinity (ZOI) rule is a rule of thumb in software design proposed by early computing pioneer Willem van der Poel. It argues that arbitrary limits on the number of instances of a particular type of data or structure should not be allowed. Instead, an entity should either be forbidden entirely, only one should be allowed, or any number of them should be allowed. Although various factors outside that particular software could limit this number in practice, it should not be the software itself that puts a hard limit on the number of instances of the entity.
Examples of this rule may be found in the structure of many file systems' directories (also known as folders):
0 – The topmost directory has zero parent directories; that is, there is no directory that contains the topmost directory.
1 – Each subdirectory has exactly one parent directory (not including shortcuts to the directory's location; while such files may have similar icons to the icons of the destination directories, they are not directories at all).
Infinity – Each directory, whether the topmost directory or any of its subdirectories, according to the file system's rules, may contain any number of files or subdirectories. Practical limits to this number are caused by other factors, such as space available on storage media and how well the computer's operating system is maintained.
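The directory example above can be sketched as a data structure (a hypothetical Python illustration, not from the article's sources): the type hard-codes zero or one parent while placing no fixed limit on the number of children.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Directory:
    name: str
    parent: Optional[Directory] = None                        # zero (root) or exactly one parent
    children: List[Directory] = field(default_factory=list)  # any number, no hard cap

root = Directory("/")                   # zero parents
home = Directory("home", parent=root)   # exactly one parent
root.children.append(home)              # arbitrarily many children are allowed
```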
Authorship
Van der Poel confirmed that he was the originator of the rule, but Bruce MacLennan has also claimed authorship (in the form "The only reasonable numbers are zero, one and infinity."), writing in 2015 that:
See also
Magic number (programming)#Unnamed numerical constants
References
Software engineering folklore
Programming principles | Zero one infinity rule | Engineering | 354 |
46,947,608 | https://en.wikipedia.org/wiki/Colletotrichum%20axonopodi | Colletotrichum axonopodi is a falcate-spored, graminicolous, plant-pathogenic fungus species, first isolated from warm-season grasses.
References
Further reading
Hyde, K. D., et al. "Colletotrichum—names in current use." Fungal Diversity 39.1 (2009): 147–182.
Crouch, J. A., and L. A. Beirn. "Anthracnose of cereals and grasses." Fungal Diversity 39 (2009): 19.
External links
MycoBank
axonopodi
Fungi described in 2009
Fungus species | Colletotrichum axonopodi | Biology | 128 |
46,772,079 | https://en.wikipedia.org/wiki/Networks%20and%20Heterogeneous%20Media | Networks and Heterogeneous Media is a peer-reviewed academic journal published bimonthly by AIMS Press, an operation of the American Institute of Mathematical Sciences. The journal was established in 2006 and focuses on networks, heterogeneous media, and related fields. The editor-in-chief is Benedetto Piccoli (Rutgers University).
Abstracting and indexing
The journal is abstracted and indexed in Zentralblatt MATH, MathSciNet, Scopus, Current Contents/Physical, Chemical & Earth Sciences, and Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2013 impact factor of 0.952.
References
External links
Academic journals established in 2006
Quarterly journals
English-language journals
Academic journals published by learned and professional societies
Graph theory journals | Networks and Heterogeneous Media | Mathematics | 161 |
4,814,080 | https://en.wikipedia.org/wiki/Primary%20cyclic%20group | In mathematics, a primary cyclic group is a group that is both a cyclic group and a p-primary group for some prime number p.
That is, it is a cyclic group of order $p^m$, $C_{p^m}$, for some prime number p and natural number m.
Every finite abelian group G may be written as a finite direct sum of primary cyclic groups, as stated in the fundamental theorem of finite abelian groups:

$G \cong C_{p_1^{m_1}} \oplus C_{p_2^{m_2}} \oplus \cdots \oplus C_{p_n^{m_n}}$

(where the primes $p_i$ need not be distinct).
This expression is essentially unique: there is a bijection between the sets of groups in two such expressions, which maps each group to one that is isomorphic.
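As a rough computational illustration (hypothetical Python using sympy's integer factorization; not from the article's sources), the primary decomposition of a cyclic group C_n can be read off the prime-power factorization of n:

```python
from sympy import factorint

def primary_decomposition(n):
    """Return the prime-power orders of the primary cyclic factors of C_n,
    per the fundamental theorem: C_n is a direct sum of C_{p^m} over the
    prime powers p^m exactly dividing n."""
    return [p**m for p, m in factorint(n).items()]

# C_360 decomposes as C_8 + C_9 + C_5
print(primary_decomposition(360))  # [8, 9, 5]
```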
Primary cyclic groups are characterised among finitely generated abelian groups as the torsion groups that cannot be expressed as a direct sum of two non-trivial groups. As such they, along with the group of integers, form the building blocks of finitely generated abelian groups.
The subgroups of a primary cyclic group are linearly ordered by inclusion. The only other groups that have this property are the quasicyclic groups.
Finite groups
Abelian group theory
References | Primary cyclic group | Mathematics | 214 |
13,755,028 | https://en.wikipedia.org/wiki/Bioversity%20International | Bioversity International is a global research-for-development organization that delivers scientific evidence, management practices and policy options to use and safeguard agricultural biodiversity to attain global food- and nutrition security, working with partners in low-income countries in different regions where agricultural biodiversity can contribute to improved nutrition, resilience, productivity and climate change adaptation. In 2019, Bioversity International joined with the International Center for Tropical Agriculture (as the Alliance of Bioversity International and CIAT) to "deliver research-based solutions that harness agricultural biodiversity and sustainably transform food systems to improve people's lives". Both institutions are members of the CGIAR, a global research partnership for a food-secure future.
The organization is highly decentralized, with about 300 staff working around the world with regional offices located in Central and South America, West and Central Africa, East and Southern Africa, Central and South Asia, and South-east Asia. In the summer of 2021 Bioversity International's office in Maccarese was moved to the Aventine Hill near the FAO in Rome, Italy and serves as the Alliance of Bioversity International and CIAT's global headquarters.
Background
Bioversity International is a research-for-development organization focused on safeguarding and using agricultural biodiversity to help meet challenges such as adaptation to climate change and increased sustainable production.
The organization takes the view that diversity offers opportunities not only through breeding (of plants and of animals) but also by delivering many other benefits. Some are direct, such as the better nutrition and greater sustainability that come with locally adapted crops. Others are indirect, like the ecosystem services delivered by healthy populations of pollinators, biological control agents, and soil microbes. Agricultural biodiversity will also be absolutely essential to cope with the predicted impacts of climate change, not simply as a source of traits but as the underpinnings of more resilient farm ecosystems.
Governance
Bioversity International is governed by a Board of Trustees, including one Trustee nominated by the host country (Italy) and one nominated by FAO. The Board also appoints the Director General who manages the operation of the various programs. The current Director General is Juan Lucas Restrepo.
History
In 2014, Bioversity International marked 40 years of operations. Bioversity International was originally established by the CGIAR as the International Board for Plant Genetic Resources (IBPGR) in 1974. In October 1993, IBPGR became the International Plant Genetic Resources Institute (IPGRI) and in 1994 IPGRI began independent operation as one of the centers of the CGIAR. At the request of the CGIAR, in 1994 IPGRI took over the governance and administration of the International Network for the Improvement of Banana and Plantain (INIBAP). In 2006, IPGRI and INIBAP became a single organization and subsequently changed their operating name to Bioversity International. Bioversity International still maintains the world's largest banana gene bank, the Bioversity International Musa Germplasm Transit Centre, that is hosted at the Katholieke Universiteit Leuven (KU Leuven) in Leuven, Belgium, and manages ProMusa - a platform that shares knowledge about bananas and plantains. In 2002, the Global Crop Diversity Trust was established by Bioversity International on behalf of the CGIAR and the UN Food and Agriculture Organization, through a Crop Diversity Endowment Fund.
Publications
Bioversity International and its predecessors have published occasional papers under the title Issues in Genetic Resources. In 2017, the organization published Mainstreaming Agrobiodiversity In Sustainable Food Systems - Scientific Foundations for an Agrobiodiversity Index, a book that put the spotlight on the importance of agrobiodiversity as the foundation of our food supplies.
Notable former member Board of Trustees
Prof. Emeritus Chin Hoong Fong, IBGR (1987-1992), Honorary Fellow (1997-2018)
Notes
External links
Global Crop Diversity Trust
European Cooperative Programme for Plant Genetic Resources
Agricultural research institutes
International research institutes
International environmental organizations
Biodiversity
Plant genetics
Sustainable agriculture
Environmental organizations established in 1974
Agricultural organisations based in Italy
Organisations based in Rome | Bioversity International | Biology | 845 |
26,849,255 | https://en.wikipedia.org/wiki/Turosteride | Turosteride (FCE-26,073) is a selective inhibitor of the enzyme 5α-reductase which was under investigation by GlaxoSmithKline for the treatment of benign prostatic hyperplasia (BPH), but was never marketed. Similarly to finasteride, turosteride is selective for the type II isoform of 5α-reductase, with about 15-fold selectivity for it over the type I isoform of the enzyme. In animal studies it has been shown to inhibit prostate growth and retard tumor growth. It may also be useful for the treatment of acne and hair loss.
See also
5α-Reductase inhibitor
FCE 28,260
References
5α-Reductase inhibitors
Delta-lactams
Ureas
Abandoned drugs | Turosteride | Chemistry | 169 |
69,498,203 | https://en.wikipedia.org/wiki/HD%20139319 | HD 139319 is a ternary system composed of the binary Algol variable star known as TW Draconis, and a main-sequence companion star at a separation of 3 arcseconds. The system lies in the constellation of Draco about away.
System
The primary star is an eclipsing, semi-detached binary, the brighter component of which is a pulsating star of the Delta Scuti type. Its pulsation frequency is 17.99 cycles per day. Mass transfer between the stars is ongoing in the system, with a transfer rate of 6.8×10−7 solar masses per year. The 2.8-day period of the Algol binary is cyclically variable with a period of 116.04 years, possibly due to the gravitational influence of the distant companion HD 139319B. Another three stars in the system are suspected.
References
3
Draco (constellation)
J15335104+6354257
BD+64 1077
076196
139319
G-type main-sequence stars
K-type giants
F-type main-sequence stars
Algol variables
Draconis, TW | HD 139319 | Astronomy | 230 |
25,733,832 | https://en.wikipedia.org/wiki/Lyman-break%20galaxy | Lyman-break galaxies are star-forming galaxies at high redshift that are selected using the differing appearance of the galaxy in several imaging filters due to the position of the Lyman limit. The technique has primarily been used to select galaxies at redshifts of z = 3–4 using ultraviolet and optical filters, but progress in ultraviolet astronomy and in infrared astronomy has allowed the use of this technique at lower and higher redshifts using ultraviolet and near-infrared filters.
The Lyman-break galaxy selection technique relies upon the fact that radiation at higher energies than the Lyman limit at 912 Å is almost completely absorbed by neutral gas around star-forming regions of galaxies. In the rest frame of the emitting galaxy, the emitted spectrum is bright at wavelengths longer than 912 Å, but very dim or imperceptible at shorter wavelengths. This is known as a "dropout", or "break", and can be used to find the position of the Lyman limit. Light with a wavelength shorter than 912 Å is in the far-ultraviolet range, and is blocked by Earth's atmosphere, but for very distant galaxies, the wavelengths of light are stretched considerably because of the expansion of the universe. For a galaxy at redshift z = 3, the Lyman break will appear to be at wavelengths of about 3600 Å, which is long enough to be detected by ground- or space-based telescopes.
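The redshift arithmetic described above can be sketched in a few lines (hypothetical Python; the numbers follow directly from λ_obs = λ_rest × (1 + z)):

```python
LYMAN_LIMIT_ANGSTROM = 912.0  # rest-frame wavelength of the Lyman limit

def observed_break(z):
    """Observed wavelength of the Lyman break for a galaxy at redshift z."""
    return LYMAN_LIMIT_ANGSTROM * (1.0 + z)

print(observed_break(3.0))  # 3648.0 A: the break moves into the optical window
```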
Candidate galaxies around redshift z = 3 can then be selected by looking for galaxies which appear in optical images (which are sensitive to wavelengths greater than 3600 Å), but do not appear in ultraviolet images (which are sensitive to light at wavelengths shorter than 3600 Å). The technique may be adapted to look for galaxies at other redshifts by choosing different sets of filters; the method works as long as images may be taken through at least one filter above and below the wavelength of the redshifted Lyman limit. In order to confirm the redshift estimated by the color selection, follow-up spectroscopy is performed. Although spectroscopic measurements are necessary to obtain a high-precision redshift, spectroscopy is typically much more time-consuming than imaging, so the selection of candidate galaxies via the Lyman-break technique greatly improves the efficiency of high-redshift galaxy surveys.
The issue of their far-infrared emission is still central to the study of Lyman-break galaxies to better understand their evolution and to estimate their total star-formation rate. So far, only a small sample has been detected in far-infrared. Most of the individual results rely upon information that is gathered from lensed Lyman-break galaxies or from the rest-frame ultraviolet, or from a few objects detected by the Herschel satellite or using the stacking technique that allows researchers to obtain averaged values for individually undetected Lyman-break galaxies.
Recently, however, stacking techniques applied to large samples of galaxies have allowed, for the first time, some statistical information to be collected on the dust properties of LBGs.
In February 2022, astronomers reported the discovery of two Lyman-break galaxies, named HD1 and HD2, based upon studies that used the Lyman-break technique. Also notable are GLASS-z12, a distant galaxy discovered by the James Webb Space Telescope in July 2022, and JADES-GS-z14-0, a Lyman-break galaxy also found by the JWST, which is the most distant object currently known.
See also
Balmer break
BzK galaxy
Damped Lyman-alpha system
Lyman-alpha blob
Lyman-alpha emitter
Lyman-alpha forest
Lyman series
References
Physical cosmology
Galaxies | Lyman-break galaxy | Physics,Astronomy | 740 |
1,544,179 | https://en.wikipedia.org/wiki/Technogaianism | Technogaianism (a portmanteau word combining "techno-" for technology and "gaian" for Gaia philosophy) is a bright green environmentalist stance of active support for the research, development and use of emerging and future technologies to help restore Earth's environment. Technogaianists argue that developing safe, clean, alternative technology should be an important goal of environmentalists and environmentalism.
Philosophy
This point of view is different from the default position of radical environmentalists and a common opinion that all technology necessarily degrades the environment, and that environmental restoration can therefore occur only with reduced reliance on technology. Technogaianists argue that technology gets cleaner and more efficient with time. They would also point to such things as hydrogen fuel cells to demonstrate that developments do not have to come at the environment's expense. More directly, they argue that such things as nanotechnology and biotechnology can directly reverse environmental degradation. Molecular nanotechnology, for example, could convert garbage in landfills into useful materials and products, while biotechnology could lead to novel microbes that devour hazardous waste.
While many environmentalists still contend that most technology is detrimental to the environment, technogaianists point out that it has been in humanity's best interests to exploit the environment mercilessly until fairly recently. This sort of behavior accords with current understandings of evolutionary systems, in that when new factors (such as foreign species or mutant subspecies) are introduced into an ecosystem, they tend to maximize their own resource consumption until either, a) they reach an equilibrium beyond which they cannot continue unmitigated growth, or b) they become extinct. In these models, it is completely impossible for such a factor to totally destroy its host environment, though they may precipitate major ecological transformation before their ultimate eradication. Technogaianists believe humanity has currently reached just such a threshold, and that the only way for human civilization to continue advancing is to accept the tenets of technogaianism and limit future exploitive exhaustion of natural resources and minimize further unsustainable development or face the widespread, ongoing mass extinction of species. The destructive effects of modern civilization can be mitigated by technological solutions, such as using nuclear power. Furthermore, technogaianists argue that only science and technology can help humanity be aware of, and possibly develop counter-measures for, risks to civilization, humans and planet Earth such as a possible impact event.
Sociologist James Hughes mentions Walter Truett Anderson, author of To Govern Evolution: Further Adventures of the Political Animal, as an example of a technogaian political philosopher; argues that technogaianism applied to environmental management is found in the reconciliation ecology writings such as Michael Rosenzweig's Win-Win Ecology: How The Earth's Species Can Survive In The Midst of Human Enterprise; and considers Bruce Sterling's Viridian design movement to be an exemplary technogaian initiative.
The theories of English writer Fraser Clark may be broadly categorized as technogaian. Clark advocated "balancing the hippie right brain with the techno left brain". The idea of combining technology and ecology was extrapolated at length by a South African eco-anarchist project in the 1990s. The Kagenna Magazine project aimed to combine technology, art, and ecology in an emerging movement that could restore the balance between humans and nature.
George Dvorsky suggests the sentiment of technogaianism is to heal the Earth, use sustainable technology, and create ecologically diverse environments. Dvorsky argues that defensive countermeasures could be designed to counter the harmful effects of asteroid impacts, earthquakes, and volcanic eruptions. Dvorsky also suggests that genetic engineering could be used to reduce the environmental impact humans have on the earth.
Methods
Environmental monitoring
Technology facilitates the sampling, testing, and monitoring of various environments and ecosystems. NASA uses space-based observations to conduct research on solar activity, sea level rise, the temperature of the atmosphere and the oceans, the state of the ozone layer, air pollution, and changes in sea ice and land ice.
Geoengineering
Climate engineering is a technogaian method that uses two categories of technologies- carbon dioxide removal and solar radiation management. Carbon dioxide removal addresses a cause of climate change by removing one of the greenhouse gases from the atmosphere. Solar radiation management attempts to offset the effects of greenhouse gases by causing the Earth to absorb less solar radiation.
Earthquake engineering is a technogaian method concerned with protecting society and the natural and man-made environment from earthquakes by limiting the seismic risk to acceptable levels.
Another example of a technogaian practice is an artificial closed ecological system used to test if and how people could live and work in a closed biosphere, while carrying out scientific experiments. It is in some cases used to explore the possible use of closed biospheres in space colonization, and also allows the study and manipulation of a biosphere without harming Earth's. The most advanced technogaian proposal is the "terraforming" of a planet, moon, or other body by deliberately modifying its atmosphere, temperature, or ecology to be similar to those of Earth in order to make it habitable by humans.
Genetic engineering
S. Matthew Liao, professor of philosophy and bioethics at New York University, claims that the human impact on the environment could be reduced by genetically engineering humans to have a smaller stature, an intolerance to eating meat, and an increased ability to see in the dark, thereby using less lighting. Liao argues that human engineering is less risky than geoengineering.
Genetically modified foods have reduced the amount of herbicide and insecticide needed for cultivation. The development of glyphosate-resistant (Roundup Ready) plants has changed the herbicide use profile away from more environmentally persistent herbicides with higher toxicity, such as atrazine, metribuzin and alachlor, and reduced the volume and danger of herbicide runoff.
An environmental benefit of Bt-cotton and maize is reduced use of chemical insecticides. A PG Economics study concluded that global pesticide use was reduced by 286,000 tons in 2006, decreasing the environmental impact of herbicides and pesticides by 15%. A survey of small Indian farms between 2002 and 2008 concluded that Bt cotton adoption had led to higher yields and lower pesticide use. Another study concluded that insecticide use on cotton and corn during the years 1996 to 2005 fell by an amount of active ingredient roughly equal to the annual amount applied in the EU. A Bt cotton study in six northern Chinese provinces from 1990 to 2010 concluded that it halved the use of pesticides and doubled the level of ladybirds, lacewings and spiders, and extended environmental benefits to neighbouring crops of maize, peanuts and soybeans.
Examples of implementation
Related environmental ethical schools and movements
See also
Appropriate technology
Digital public goods
Eco-innovation
Ecological modernization
Ecomodernism
Ecosia
Ecotechnology
Environmental ethics
Green development
Green nanotechnology
List of environmental issues
Open-source appropriate technology
Solarpunk
Ten Technologies to Fix Energy and Climate
References
External links
Green Progress
Viridian Design Movement
WorldChanging
Bright green environmentalism
Green politics
Environmentalism
Transhumanism | Technogaianism | Technology,Engineering,Biology | 1,449 |
75,773,862 | https://en.wikipedia.org/wiki/NGC%206622 | NGC 6622 is an interacting spiral galaxy in the constellation Draco. It is located around 313 million light-years away, and it was discovered by Edward D. Swift and Lewis A. Swift on June 2, 1885. NGC 6622 interacts with NGC 6621, with their closest approach having taken place about 100 million years before the moment seen now. NGC 6622 and NGC 6621 are included in the Atlas of Peculiar Galaxies as Arp 81 in the category "spiral galaxies with large high surface brightness companions".
NGC 6622 is the smaller of the two, and is a very disturbed galaxy. The encounter has left NGC 6622 heavily deformed from what was once a spiral galaxy. The collision has also triggered extensive star formation between the two galaxies. The most intense star formation takes place in the region between the two nuclei, where a large population of luminous clusters, also known as super star clusters, has been observed; this region experiences the most tidal stress. The brightest and bluest clusters are less than 100 million years old, with the youngest being less than 10 million years old. The side of the galaxy further from the companion features noticeably less star formation activity.
References
External links
More about NGC 6622
Spiral galaxies
Interacting galaxies
Draco (constellation)
Astronomical objects discovered in 1885
Galaxies discovered in 1885
Discoveries by Edward Swift
Discoveries by Lewis Swift
61579
61579
11175 S
6622 | NGC 6622 | Astronomy | 287 |
20,362,448 | https://en.wikipedia.org/wiki/Demecolcine | Demecolcine (INN; also known as colcemid) is a drug used in chemotherapy. It is closely related to the natural alkaloid colchicine with the replacement of the acetyl group on the amino moiety with methyl, but it is less toxic. It depolymerises microtubules and limits microtubule formation (inactivates spindle fibre formation), thus arresting cells in metaphase and allowing cell harvest and karyotyping to be performed.
During cell division, demecolcine inhibits mitosis at metaphase by inhibiting spindle formation. Medically, demecolcine has been used to improve the results of cancer radiotherapy by synchronising tumour cells at metaphase, the radiosensitive stage of the cell cycle.
In animal cloning procedures, demecolcine makes an ovum eject its nucleus, creating space for insertion of a new nucleus.
Mechanism of action
Demecolcine is a microtubule-depolymerizing drug like vinblastine. It acts by two distinct mechanisms. At very low concentrations it binds to the microtubule plus end to suppress microtubule dynamics. A recent study found that at higher concentrations demecolcine can promote microtubule detachment from the microtubule organizing center. Detached microtubules, with their minus ends unprotected, depolymerize over time. Cytotoxicity of the cells seems to correlate better with microtubule detachment, while lower concentrations affect microtubule dynamics and cell migration.
Research use
Demecolcine is used for scientific research on cells. It is used in a variety of ways, but until recently was used mostly for the study of mitosis. For example, microtubules are necessary for the splitting of cells, and more importantly for the movement of chromosomes during the M phase. Demecolcine's inhibition of microtubules causes aneuploidy in mitotic cells, where the microtubules fall apart or are suppressed before they can complete their function of pulling chromosomes into the daughter cells, also known as nondisjunction of chromosomes. Demecolcine, depending on dose, has also been found to cause DNA fragmentation of chromosomes in micronuclei when nondisjunction occurs.
References
Pyrogallol ethers
Microtubule inhibitors
Benzodihydroheptalenes
Tropolones
Amines | Demecolcine | Chemistry | 501 |
75,620,299 | https://en.wikipedia.org/wiki/Caloplaca%20astonii | Caloplaca astonii is a rare species of crustose lichen in the family Teloschistaceae. Described in 2007, it is known for its distinct appearance and very limited distribution in Australia. The lichen has a thin thallus measuring 3–8 mm wide, with confluent spots that are thicker and cracked in the centre, showing a dull rose-orange or dull brown-orange colour, and apothecia that transition from being immersed in the thallus to raised above it, revealing a bright reddish-brown disc.
Taxonomy
The lichen was first formally described in 2007 by lichenologists Sergey Kondratyuk and Ingvar Kärnefelt. The type material was found in Northwest New South Wales about south-southwest of Kayrunners and roughly west of White Cliffs. In this location, a glaring white quartz stone plain, it is common on stones.
This species is akin to Caloplaca montisfracti, but is distinguished by its apothecia with a very thin hymenium and small ascospores with attenuated tips. Among Australian Caloplaca species, Caloplaca astonii is unique due to its thin hypothallus, a dull pink thallus, and lecanorine apothecia with a bright red or pink-red disc. It is further characterised by a lax palisade cortex and a loose medulla. The species is named in honour of Helen Aston, who collected the type material in 1966.
Description
Caloplaca astonii features a thallus with a width of 3–8 mm, consisting of confluent spots. It is crustose, very thin, and closely adheres to the substrate, especially at the periphery. The thallus is thicker and cracked in the central part and has a dull rose-orange or dull brown-orange colour. The lecanorine apothecia are initially immersed in the thallus and become raised as they mature, revealing a bright reddish-brown disc.
The hypothallus is extremely thin in the peripheral zone, expanding to 0.5–1.5 mm wide and up to 100 μm thick. The central part of the thallus features areoles measuring 0.6–1.3 mm wide and 0.3–0.4 mm thick, with cracks that do not expose the naked rock surface, ranging from 25 to 50 (up to 75) μm wide. In section, areoles are 220–350 μm thick, with numerous vertical, lax bundles of hyphae. The cortex is 30–35 μm thick, composed of large, rounded cells, and the algal layer is dispersed and discontinuous. The medulla, consisting of loose short hyphae, reaches a thickness of 70–100 μm.
Apothecia are 0.3–1.0 mm in diameter, initially immersed and then raised as they mature, with a flat, bright reddish-brown disc. Each areole typically contains 1–5 apothecia. The thalline margin is quite thick, and the disc has an uneven surface. The is 30–35 μm thick, with elongated hyphae. The hymenium is 40–45 μm high, and the is 60–100 μm thick. Ascospores are very small, distinctly widened at the septum, and attenuated towards the tips, typically measuring 8–9 by 4.5–6 μm with a septum thickness of 2.5–3 μm.
Habitat and distribution
Caloplaca astonii occurs on quartzite rocks and is considered a very rare inland species. At the time of its original publication, it had only been recorded from the type collection in New South Wales, Australia.
See also
List of Caloplaca species
References
astonii
Lichen species
Lichens described in 2007
Lichens of Australia
Taxa named by Sergey Kondratyuk
Taxa named by Ingvar Kärnefelt
Species known from a single specimen | Caloplaca astonii | Biology | 814 |
275,935 | https://en.wikipedia.org/wiki/Flexible%20AC%20transmission%20system | A Flexible Alternating Current Transmission System (FACTS) is a family of power-electronics-based devices designed for use on an alternating current (AC) transmission system to improve and control power flow and support voltage. FACTS devices are alternatives to traditional electric grid solutions and improvements where building additional transmission lines or substations is not economically or logistically viable.
In general, FACTS devices improve power flow and voltage in three ways: shunt compensation of voltage (replacing the function of capacitors or inductors), series compensation of impedance (replacing series capacitors), or phase-angle compensation (replacing generator droop control or phase-shifting transformers). While traditional equipment can accomplish all of this, FACTS devices use power electronics that are fast enough to switch sub-cycle, as opposed to in seconds or minutes. Most FACTS devices are also dynamic, able to support voltage across a range rather than just on and off, and multi-quadrant, i.e. they can both supply and consume reactive power, and sometimes even real power. All of this gives them their "flexible" nature and makes them well suited for applications with unknown or changing requirements.
The FACTS family initially grew out of the development of high-voltage direct current (HVDC) conversion and transmission, which used power electronics to convert AC to DC to enable large, controllable power transfers. While HVDC focused on conversion to DC, FACTS devices used the developed technology to control power and voltage on the AC system. The most common type of FACTS device is the static VAR compensator (SVC), which uses thyristors to switch and control shunt capacitors and reactors, respectively.
History
When AC won the War of Currents in the late 19th century, and electric grids began expanding and connecting cities and states, the need for reactive compensation became apparent. While AC offered benefits for transformation and reduced current, the alternating nature of voltage and current led to additional challenges from the natural capacitance and inductance of transmission lines. Heavily loaded lines consumed reactive power due to the line's inductance, and as transmission voltages increased throughout the 20th century, the higher voltages supplied capacitive reactive power. As operating a transmission line only at its surge impedance loading (SIL) was not feasible, other means of managing the reactive power were needed.
Synchronous machines were commonly used at the time as generators and could provide some reactive power support, but they were limited by the increased losses this caused. They also became less effective as higher-voltage transmission lines moved loads further from sources. Fixed shunt capacitor and reactor banks filled this need by being deployed where required. In particular, shunt capacitors switched by circuit breakers provided an effective means of managing varying reactive power requirements due to changing loads. However, this was not without limitations.
Shunt capacitors and reactors are fixed devices, only able to be switched on and off. This required either a careful study of the exact size needed or accepting less-than-ideal effects on the voltage of a transmission line. The need for a more dynamic and flexible solution was met with the mercury-arc valve in the early 20th century. Similar to a vacuum tube, the mercury-arc valve was a high-powered rectifier capable of converting high AC voltages to DC. As the technology improved, inverting became possible as well, and mercury valves found use in power systems and HVDC ties. When connected to a reactor, different switching patterns could be used to vary the effective inductance connected, allowing for more dynamic control. Arc valves continued to dominate power electronics until the rise of solid-state semiconductors in the mid 20th century.
As semiconductors replaced vacuum tubes, the thyristor made possible the first modern FACTS device, the static VAR compensator (SVC). Effectively working as a circuit breaker that could switch on in milliseconds, the thyristor allowed capacitor banks to be switched quickly. Connected to a reactor and switched sub-cycle, it allowed the effective inductance to be varied. The thyristor also greatly improved the control system, allowing an SVC to detect and react to faults to better support the system. The thyristor dominated the FACTS and HVDC world until the late 20th century, when the IGBT began to match its power ratings.
Theory
The basic theory for how FACTS devices affect the AC system is based on analyzing how power transfers between two points in an AC system. This is particularly relevant to how an AC electrical grid functions, as the grid has numerous nodes (substations) that lack sources (generators) or loads. Power flow must be calculated and controlled at each node (substation bus) to ensure the grid design and topology do not prevent generated electricity from reaching loads; transmission lines dozens to hundreds of miles in length add significant impedance and voltage drop to the system.
Given two buses, each with its own voltage magnitude and phase angle, connected by a transmission line with an impedance, the current flowing between them is given by

$I = \dfrac{V_S\angle\delta_S - V_R\angle\delta_R}{Z}$
Apparent power flow, and thus real and reactive power, is then given by

$S = V_S I^* = P + jQ$
Combining these two equations gives the real and reactive power flow as a function of voltages and impedance. This can be done relatively easily, and is done in load-flow and power-analysis programs, but it results in equations that are not intuitive to understand. Two approximations can be made to simplify things: assume a lossless transmission line (a decent assumption, as very low-resistance conductors are typically used) and neglect any capacitance on the line (a fair assumption for lines of 200 kV and below). This reduces the line impedance to just a reactance, and results in the real and reactive power (the latter taken at the receiving end) being

$P = \dfrac{V_S V_R}{X}\sin\delta$

$Q = \dfrac{V_S V_R\cos\delta - V_R^2}{X}$
where
$V_S$ is the magnitude of the Sending-End Voltage, at the first bus
$V_R$ is the magnitude of the Receiving-End Voltage, at the second bus
$X$ is the reactance of the Transmission Line between the buses
$\delta$ is the phase angle difference between the sending-end and receiving-end voltages
From the above equations, it can be seen that three variables affect real and reactive power flow on a Transmission Line: the voltage magnitudes at either bus, the line reactance between the buses, and the voltage phase-angle difference between the buses. All FACTS devices operate on the fundamental principle that changing one or more of these variables will change the real and reactive power flow on the transmission line. Some FACTS devices change just a single variable, while others control all three.
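As an illustration of the simplified equations above, the short Python sketch below computes real power and receiving-end reactive power for a lossless line. All names and per-unit values here are illustrative, not taken from the article:

```python
import math

def line_power_flow(v_s, v_r, x_line, delta_deg):
    """Approximate real and reactive power flow on a lossless line.

    v_s, v_r  : sending- and receiving-end voltage magnitudes (per unit)
    x_line    : line reactance (per unit)
    delta_deg : phase-angle difference in degrees
    Returns (P, Q_receiving) in per unit.
    """
    delta = math.radians(delta_deg)
    p = v_s * v_r * math.sin(delta) / x_line
    q = (v_s * v_r * math.cos(delta) - v_r ** 2) / x_line
    return p, q

# Example: 1.0 pu voltage at both ends, X = 0.1 pu, 10 degree angle difference
print(line_power_flow(1.0, 1.0, 0.1, 10.0))
```

Increasing the angle difference raises real power transfer, while raising either voltage magnitude raises both terms, which is exactly the leverage FACTS devices exploit.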
It should be noted, and will be made more explicit below, that FACTS devices do not create or add real power to the system; they simply alter the circuit parameters between two points to affect how and when power flows.
Types of FACTS devices
Given that FACTS devices can change up to three parameters to affect power flow (voltage, impedance, and/or phase angle), they are often categorized by the parameter they control. As the conventional devices for controlling voltage (shunt capacitors and shunt inductors) and impedance (series capacitors and load-flow reactors) are so common, FACTS devices targeting the voltage and impedance parameters are categorized as shunt and series devices, respectively.
Shunt Compensation Devices
The goal of shunt compensation is to connect a device in parallel with the system that will improve voltage and enable larger power flow. This is traditionally done using shunt capacitors and inductors (reactors), much like Power Factor Correction.
The most common shunt compensation device is the Static VAR Compensator (SVC). SVCs use power electronics, generally Thyristors, to switch fixed capacitors and reactors. These are referred to as Thyristor Switched Capacitor (TSC) and Thyristor Switched Reactor (TSR), respectively. Thyristors are fast enough that they can be switched sub-cycle, and can switch a reactor at different points each cycle to control the vars the reactor produces. When arranged to do this, the TSR is referred to as a Thyristor Controlled Reactor (TCR). TCRs produce large amounts of harmonics and require Filter Banks to prevent adverse effects to the system.
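The article does not give the governing relation for this sub-cycle control, but a common textbook expression for a TCR's effective fundamental-frequency susceptance as a function of thyristor firing angle α (measured from the voltage zero crossing) illustrates it. The following Python sketch uses that expression; the numeric values are illustrative only:

```python
import math

def tcr_susceptance(alpha_rad, x_l):
    """Effective fundamental-frequency susceptance of a thyristor-
    controlled reactor versus firing angle.

    alpha_rad ranges from pi/2 (full conduction) to pi (no conduction);
    x_l is the reactor's reactance at full conduction.
    """
    return (2 * (math.pi - alpha_rad) + math.sin(2 * alpha_rad)) / (math.pi * x_l)

# Sweep the firing angle: susceptance falls smoothly from 1/X_L to 0,
# which is how a TCR presents a continuously variable inductance.
for deg in (90, 120, 150, 180):
    print(deg, tcr_susceptance(math.radians(deg), x_l=10.0))
```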
Another type of shunt compensation is the Static Synchronous Compensator, or STATCOM. Power electronics are combined in series with a reactor to form a Voltage-Sourced Converter (VSC), which when connected to an AC system forms a STATCOM. A VSC uses the same principle of power flow on a transmission line: measuring the system voltage it is connected to and varying the voltage of the power electronics to cause reactive power flow into or out of the VSC. Early STATCOMs used thyristors as their power electronics and Pulse-Width Modulation (PWM) to control reactive power, but with advances in semiconductor technology, Insulated-Gate Bipolar Transistors (IGBTs) have replaced them.
Series Compensation Devices
Series Compensation devices change the impedance of the Transmission Line to increase or decrease power flow. Power flow is increased by adding a series capacitor to offset line inductance or decreased by adding a series load-flow reactor to add to the line inductance.
One type of series compensation is the Thyristor-Controlled Series Capacitor (TCSC), which combines the TCR from an SVC in parallel with a traditional fixed series capacitor. As using power electronics to switch capacitors sub-cycle is not feasible due to concerns over stored charge, a TCR is used to create a variable inductance to offset the capacitor. A TCSC can thus be used to dynamically vary the power flow on a transmission line.
A VSC can also be used as a series compensation device if it's connected across the secondary winding of a series-connected transformer. This arrangement is referred to as a Static Synchronous Series Compensator (SSSC), and offers the benefits of a smaller reactor than in a TCSC, and the lower harmonic production of a VSC (or Voltage-Source Inverter - VSI when used in a SSSC) compared to a TCR.
Phase Angle Compensation
Power will only flow between two points on an AC system if there is a phase angle difference between the buses. Traditionally this is controlled by generators, however in large grids this becomes ineffective for managing power flow between distant buses. Phase-Shifting Transformers (PST) are generally used for these applications and can just be a Phase-Angle Regulator (PAR) or control both phase-angle and voltage.
The most straightforward phase angle compensation device would replace the tap changer on a PAR with thyristors that switch portions of the winding in and out, forming a Thyristor-Controlled Phase-Shifting Transformer (TCPST). However, this is generally not done, as a TCPST would be considerably more expensive than a PAR. Instead, the idea is expanded to replace a Quadrature Booster with a device referred to as a Thyristor-Controlled Phase-Angle Regulator (TCPAR), also known as a Static Phase-Shifter (SPS). In essence, a TCPAR is just a Quadrature Booster with the mechanical portions of the exciter and booster transformers replaced with power electronics, typically thyristors.
Another way to form a TCPAR is to separate the exciter and booster transformers and control their secondaries with separate sets of power electronics. By linking the two sets of power electronics through a DC bus, typically using GTO thyristors or IGBTs, a TCPAR can be formed. While this may initially seem unnecessary, looking at the shunt and series transformers and their electronics separately shows that the shunt portion is a STATCOM and the series portion is an SSSC. With the DC bus providing power from the shunt portion to the series portion, the device functions as a phase-angle regulator; with the DC bus isolating the two sides, the STATCOM can control the shunt voltage or the SSSC can control line impedance. This gives the device its name, the Unified Power Flow Controller (UPFC), as it can control all three parameters that affect power flow.
See also
Static VAR Compensator (SVC)
Static Synchronous Compensator (STATCOM)
Thyristor-Controlled Series Capacitor (TCSC)
Static Synchronous Series Compensator (SSSC)
Thyristor-Controlled Phase Angle Regulator (TCPAR)
Unified Power Flow Controller (UPFC)
High-Voltage DC (HVDC)
References
Electric power transmission
Power engineering | Flexible AC transmission system | Engineering | 2,603 |
1,303,885 | https://en.wikipedia.org/wiki/C/1961%20R1%20%28Humason%29 | Comet Humason, formally designated C/1961 R1 (a.k.a. 1962 VIII and 1961e), was a non-periodic comet discovered by Milton L. Humason on 1 September 1961. Its perihelion was well beyond the orbit of Mars, at 2.133 AU. The outbound orbital period is about 2,516 years.
Physical properties
It was a "giant" comet, much more active than a normal comet for its distance to the Sun, with an absolute magnitude of 1.35−3.5, and a nucleus diameter estimated at . It could have been up to a hundred times brighter than an average new comet. It had an unusually disrupted or "turbulent" appearance. It was also unusual in that the spectrum of its tail showed a strong predominance of the ion CO+, a result previously seen unambiguously only in C/1908 R1 (Morehouse).
See also
C/1729 P1 (Sarabat)
C/1995 O1 (Hale–Bopp)
C/2014 UN271 (Bernardinelli–Bernstein)
References
External links
Non-periodic comets
1961 in science
Astronomical objects discovered in 1961 | C/1961 R1 (Humason) | Astronomy | 240 |
76,916,676 | https://en.wikipedia.org/wiki/West%20African%20Conservation%20Network | West African Conservation Network (WACN) is a non-profit organization established by Patrick Ogbonnia Egwu, with a primary mission of preserving and restoring wilderness areas.
Location
WACN has its headquarters on the 7th Floor of Mulliner Towers, Alfred Rewane Road Lagos, 101233, Nigeria and Kemp House, City Road, London, EC1V 2NX, United Kingdom.
Overview
WACN was established in August 2020. With an emphasis on collaboration with governmental entities, the organization seeks to address the decline of wilderness regions and transform them into sustainable ecosystems. Its mission centers on the restoration and protection of depleted wilderness areas, and entails establishing agreements with the governmental authorities responsible for the respective jurisdictions of those areas. Through protection measures and strategic reintroduction efforts, WACN aims to reverse the decline of biodiversity within these regions and restore them to their original levels of ecological integrity for future generations.
Projects
Kainji Lake National Park
On 27 October 2023, WACN signed a 31-year memorandum of understanding with the Nigeria National Park Service to restore, secure, and manage the Kainji Lake National Park, a wilderness area of more than 2,000 square miles (5,000 square kilometers) straddling Niger and Kwara states in the northwestern part of the country. It was one of several MOUs signed by the National Park Service with various organisations to strengthen conservation efforts.
References
Non-profit corporations
Ecological restoration
Organizations based in Africa
Nature conservation in Africa
Nature conservation organizations based in Africa | West African Conservation Network | Chemistry,Engineering | 340 |
4,991,374 | https://en.wikipedia.org/wiki/Bridging%20ligand | In coordination chemistry, a bridging ligand is a ligand that connects two or more atoms, usually metal ions. The ligand may be atomic or polyatomic. Virtually all complex organic compounds can serve as bridging ligands, so the term is usually restricted to small ligands such as pseudohalides or to ligands that are specifically designed to link two metals.
In naming a complex wherein a single atom bridges two metals, the bridging ligand is preceded by the Greek letter mu, μ, with a subscript number denoting the number of metals bound to the bridging ligand. μ2 is often denoted simply as μ. When describing coordination complexes care should be taken not to confuse μ with η ('eta'), which relates to hapticity. Ligands that are not bridging are called terminal ligands.
List of bridging ligands
Virtually all ligands are known to bridge, with the exception of amines and ammonia. Common bridging ligands include most of the common anions.
Many simple organic ligands form strong bridges between metal centers. Common examples include organic derivatives of the above inorganic ligands (R = alkyl, aryl): alkoxide (RO−), thiolate (RS−), amide (R2N−), imide (RN2−, imido), phosphide (R2P−, phosphido; note the ambiguity with the preceding entry), phosphinidene (RP2−, phosphinidino), and many more.
Examples
Bonding
For doubly bridging (μ2-) ligands, two limiting representations are 4-electron and 2-electron bonding interactions. These cases are illustrated in main group chemistry by Al2Cl6 (four-electron chloride bridges) and B2H6 (two-electron, three-center hydride bridges), respectively. Complicating this analysis is the possibility of metal–metal bonding. Computational studies suggest that metal–metal bonding is absent in many compounds where the metals are separated by bridging ligands. For example, calculations suggest that Fe2(CO)9 lacks an iron–iron bond by virtue of a 3-center 2-electron bond involving one of its three bridging CO ligands.
Bridge-terminal exchange
The interchange of bridging and terminal ligands is called bridge-terminal exchange. The process is invoked to explain the fluxional properties of metal carbonyl and metal isocyanide complexes. Some complexes that exhibit this process are cobalt carbonyl and cyclopentadienyliron dicarbonyl dimer:
Co2(μ-CO)2(CO)6 ⇌ Co2(μ-CO)2(CO)4(CO)2
(C5H5)2Fe2(μ-CO)2(CO)2 ⇌ (C5H5)2Fe2(μ-CO)2(CO)2
These dynamic processes, which are degenerate, proceed via an intermediate in which the CO ligands are all terminal, e.g. the unbridged isomer of Co2(CO)8.
Polyfunctional ligands
Polyfunctional ligands can attach to metals in many ways and thus can bridge metals in diverse ways, including sharing of one atom or using several atoms. Examples of such polyatomic ligands are the oxoanions and the related carboxylates, and the polyoxometalates. Several organophosphorus ligands have been developed that bridge pairs of metals, a well-known example being 1,1-bis(diphenylphosphino)methane (dppm).
See also
Bridging carbonyl
References
Coordination chemistry | Bridging ligand | Chemistry | 664 |
38,549,774 | https://en.wikipedia.org/wiki/Nonlinear%20Dirac%20equation | See Ricci calculus and Van der Waerden notation for the notation.
In quantum field theory, the nonlinear Dirac equation is a model of self-interacting Dirac fermions. This model is widely considered in quantum physics as a toy model of self-interacting electrons.
The nonlinear Dirac equation appears in the Einstein–Cartan–Sciama–Kibble theory of gravity, which extends general relativity to matter with intrinsic angular momentum (spin). This theory removes a constraint of the symmetry of the affine connection and treats its antisymmetric part, the torsion tensor, as a variable in varying the action. In the resulting field equations, the torsion tensor is a homogeneous, linear function of the spin tensor. The minimal coupling between torsion and Dirac spinors thus generates an axial-axial, spin–spin interaction in fermionic matter, which becomes significant only at extremely high densities. Consequently, the Dirac equation becomes nonlinear (cubic) in the spinor field, which causes fermions to be spatially extended and may remove the ultraviolet divergence in quantum field theory.
Models
Two common examples are the massive Thirring model and the Soler model.
Thirring model
The Thirring model was originally formulated as a model in (1 + 1) space-time dimensions and is characterized by the Lagrangian density

$$\mathcal{L} = \bar{\psi}\left(i\partial\!\!\!/ - m\right)\psi - \frac{g}{2}\left(\bar{\psi}\gamma^{\mu}\psi\right)\left(\bar{\psi}\gamma_{\mu}\psi\right)$$

where $\psi$ is the spinor field, $\bar{\psi} = \psi^{\dagger}\gamma^{0}$ is the Dirac adjoint spinor, $\partial\!\!\!/ = \gamma^{\mu}\partial_{\mu}$ (Feynman slash notation is used), $g$ is the coupling constant, $m$ is the mass, and $\gamma^{\mu}$ are the two-dimensional gamma matrices; finally, $\mu = 0, 1$ is an index.
Soler model
The Soler model was originally formulated in (3 + 1) space-time dimensions. It is characterized by the Lagrangian density

$$\mathcal{L} = \bar{\psi}\left(i\partial\!\!\!/ - m\right)\psi + \frac{g}{2}\left(\bar{\psi}\psi\right)^{2}$$

using the same notations above, except that $\partial\!\!\!/ = \gamma^{\mu}\partial_{\mu}$ is now the four-gradient operator contracted with the four-dimensional Dirac gamma matrices $\gamma^{\mu}$, so therein $\mu = 0, 1, 2, 3$.
Other models
Besides the Soler model, extensive work has been done where nonlinear versions of Dirac’s equation are used to describe purely classical, nonlinear particle-like solutions (PLS) in (3 + 1) space-time dimensions. Rañada has given a review of the subject. Although a more recent review specifically devoted to purely classical, nonlinear PLS has apparently not appeared, pertinent references are available in various more recent publications.
The models reviewed by Rañada are meant to be entirely classical in nature and should properly be regarded as having nothing to do with quantum mechanics, but the dependent variable in the Dirac equation is still typically taken as a spinor. When a purely classical model of this nature is to be considered, the use of a spinor as the dependent variable seems inappropriate.
If a minor modification of the underlying Dirac equation is used, the problem can be avoided in a relatively straightforward way. Instead of using the usual column vector as the dependent variable in Dirac’s equation, one can use a 4 × 4 matrix. When there is no transformation of coordinates, the leftmost column of the matrix is used in Dirac’s equation in the usual manner, but when there is to be a transformation in space-time, the four components of the dependent variable are sometimes allowed to appear in various different positions in the 4 × 4 matrix.
The result can be understood in terms of a Clifford algebra since the dependent variable in Dirac’s equation can be represented as a 4 dimensional left ideal of a Clifford algebra. In this case one simply allows the dependent variable to lie in a different left ideal when there is a transformation in space-time.
Einstein–Cartan theory
In Einstein–Cartan theory the Lagrangian density for a Dirac spinor field is given by

$$\mathcal{L} = \sqrt{-g}\left(\frac{i}{2}\left(\bar{\psi}\gamma^{\mu}D_{\mu}\psi - \left(D_{\mu}\bar{\psi}\right)\gamma^{\mu}\psi\right) - m\bar{\psi}\psi\right)$$
where
$D_{\mu}$ is the Fock–Ivanenko covariant derivative of a spinor with respect to the affine connection, $\omega_{\mu}$ is the spin connection, $g$ is the determinant of the metric tensor $g_{\mu\nu}$, and the Dirac matrices satisfy the anticommutation relation $\gamma^{\mu}\gamma^{\nu} + \gamma^{\nu}\gamma^{\mu} = 2g^{\mu\nu}I$.
The Einstein–Cartan field equations for the spin connection yield an algebraic constraint between the spin connection and the spinor field rather than a partial differential equation, which allows the spin connection to be explicitly eliminated from the theory. The final result is a nonlinear Dirac equation containing an effective "spin-spin" self-interaction,

$$i\gamma^{\mu}\nabla_{\mu}\psi - m\psi = \frac{3\kappa}{8}\left(\bar{\psi}\gamma^{5}\gamma_{\nu}\psi\right)\gamma^{5}\gamma^{\nu}\psi$$
where $\nabla_{\mu}$ is the general-relativistic covariant derivative of a spinor, and $\kappa$ is the Einstein gravitational constant, $\kappa = 8\pi G/c^{4}$. The cubic term in this equation becomes significant only at extremely high densities.
See also
Dirac equation
Dirac equation in the algebra of physical space
Dirac–Kähler equation
Gross–Neveu model
Higher-dimensional gamma matrices
Nonlinear Schrödinger equation
Pokhozhaev's identity for the stationary nonlinear Dirac equation
Soler model
Thirring model
References
Quantum field theory
Dirac equation | Nonlinear Dirac equation | Physics | 977 |
44,020,445 | https://en.wikipedia.org/wiki/Howard%20W.%20Bergerson | Howard William Bergerson (July 29, 1922 – February 19, 2011) was an American writer and poet, noted for his mastery of palindromes and other forms of wordplay.
Work
Bergerson's first volume of poetry, The Spirit of Adolescence, was published in 1950, and earned him the state's nomination as Oregon Poet Laureate in 1957. However, he declined the nomination for political reasons, and the position instead went to Ethel R. Fuller.
By 1961, Bergerson's interests had shifted to wordplay and constrained writing. He became fascinated with palindromes and set out to write a coherent, full-length palindromic poem. The result, the 1034-letter "Edna Waterfall", was for some time listed by the Guinness Book of World Records as the longest palindrome in English.
In 1969, Bergerson became editor of Word Ways: The Journal of Recreational Linguistics, though he stepped down a year later when Greenwood Periodicals dropped the publication. However, he continued to contribute material to Word Ways for several decades, including memorable articles on palindromes, anagrams, panalphabetic windows, pangrammatic windows, self-referencing acrostics, and vocabularyclept poetry. He also published games and puzzles in Reader's Digest and other magazines.
His 1973 book Palindromes and Anagrams was influential among wordplay enthusiasts, and has been hailed by critics as a "sine qua non for all serious logologists" and the greatest ever book on palindromes. He is often cited, along with Leigh Mercer and J. A. Lindon, as one of the greatest palindromists of all time.
Personal life
Bergerson was born in Minneapolis, Minnesota on July 29, 1922. His mother, Margaret Jeske, later married Ludvick Bergerson, who became his adopted father. Bergerson's youth was spent in the mill towns of the Pacific Northwest. After serving in the US Army in the Guadalcanal Campaign of World War II, he moved to Sweet Home, Oregon, down the road from the mill where he worked as a shingle weaver for over 50 years. In 1967 he met and married Nellie Wilson (née McLaughlin) and adopted her three youngest children; the marriage lasted until Nellie's death in 1987. His subsequent marriage, to Christine Stamm, lasted three years.
In 2010 Bergerson moved from Sweet Home to Woodinville, Washington. He died the following year in Kirkland, Washington.
Bibliography
Howard W. Bergerson. The Spirit of Adolescence. Little Press, 1950.
———. Palindromes and Anagrams. Dover Publications, 1973. .
———. Posterity Is You. 1977.
———. The Cosmic Sieve Hypothesis. Greenwood Periodicals, 1986.
———. Earth: The Crossroads of the Cosmos. 1990.
References
External links
Howard Bergerson Interviews – Mathematics and Poetry
Text of "Edna Waterfall"
American male poets
1922 births
2011 deaths
Word Ways people
Writers from Minneapolis
Writers from Oregon
Poets from Oregon
Palindromists
Shingle weavers
Anagrammatists
20th-century American poets
20th-century American male writers
People from Sweet Home, Oregon
United States Army personnel of World War II | Howard W. Bergerson | Physics | 667 |
25,076,759 | https://en.wikipedia.org/wiki/Digital%20morphogenesis | Digital morphogenesis is a type of generative art in which complex shape development, or morphogenesis, is enabled by computation. This concept is applicable in many areas of design, art, architecture, and modeling. The concept was originally developed in the field of biology, later in geology, geomorphology, and architecture.
In architecture, it describes tools and methods for creating forms and adapting them to a known environment.
Developments in digital morphogenesis have allowed construction and analysis of structures in more detail than could have been put into a blueprint or model by hand, with structure at all levels defined by iterative algorithms. As fabrication techniques advance, it is becoming possible to produce objects with fractal or other elaborate structures.
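A classic family of such iterative algorithms is the L-system of Lindenmayer (see Prusinkiewicz and Lindenmayer in the reading list below), in which repeated string rewriting generates branching, plant-like structure from a simple seed. The following minimal Python sketch uses an illustrative rewriting rule chosen for demonstration, not one drawn from any cited work:

```python
# Minimal L-system sketch: iterative rewriting generates branching
# structure from a simple seed string and rule set.
RULES = {"F": "F[+F]F[-F]F"}   # illustrative plant-like rule

def rewrite(axiom, rules, iterations):
    """Apply the rewriting rules to every character, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Each pass multiplies detail: 'F' draws a segment, '+'/'-' turn the
# drawing direction, and '[' ']' push/pop a branch point.
print(rewrite("F", RULES, 2))
```

Fed to a turtle-graphics interpreter, strings like this produce the self-similar, fractal-like forms the paragraph above describes.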
Notable persons
Alan Turing
Neri Oxman
Rivka Oxman
See also
Bionics, Biomimicry
Digital architecture, Blobitecture
Generative art, Evolutionary art
Evolutionary computation
References
Reading
Burry, Jane, et al. (2005). 'Dynamical Structural Modeling: A Collaborative Design Exploration', International Journal of Architectural Computing, 3, 1, pp. 27–42
Colabella, Enrica and Soddu, Celestino (1992). http://www.artscience-ebookshop.com/GenerativeArtDesign.htm GENERATIVE ART & DESIGN Theory, Methodology and Projects. Environmental Design of MORPHOGENESIS, Genetic Codes of Artificial (English Version, Argenia Pub. 2020); Il Progetto Ambientale di Morfogenesi (italian version) (Bologna: Progetto Leonardo)
De Landa, Manuel (1997). A Thousand Years of Nonlinear History (New York: Zone Books)
Feuerstein, Günther (2002). Biomorphic Architecture: Human and Animal Forms in Architecture (Stuttgart; London: Axel Menges)
Frazer, John H. (1995). An Evolutionary Architecture , Themes VII (London: Architectural Association)
Hensel, Michael and Achim Menges (2008). 'Designing Morpho-Ecologies: Versatility and Vicissitude of Heterogeneous Space', Architectural Design, 78, 2, pp. 102–111
Hensel, Michael, Achim Menges, and Michael Weinstock, eds (2004). Emergence: Morphogenetic Design Strategies, Architectural Design (London: Wiley)
Hensel, Michael and Achim Menges (2006). 'Material and Digital Design Synthesis', Architectural Design, 76, 2, pp. 88–95
Hensel, Michael and Achim Menges (2006). 'Differentiation and Performance: Multi-Performance Architectures and Modulated Environments', Architectural Design, 76, 2, pp. 60–69
Hingston, Philip F., Luigi C. Barone, and Zbigniew Michalewicz, eds (2008). Design by Evolution: Advances in Evolutionary Design (Berlin; London: Springer)
Kolarevic, Branko (2000). ' Digital Morphogenesis and Computational Architectures', in Proceedings of the 4th Conference of Congreso Iberoamericano de Grafica Digital, SIGRADI 2000 - Construindo (n)o Espaço Digital (Constructing the Digital Space), Rio de Janeiro (Brazil) 25–28 September 2000, ed. by José Ripper Kós, Andréa Pessoa Borde and Diana Rodriguez Barros, pp. 98–103.
Leach, Neil (2009). 'Digital Morphogenesis', Architectural Design, 79, 1, pp. 32–37
Lynn, Greg (1999). Animate Form (New York: Princeton Architectural Press)
Lynn, Greg (1998). Folds, Bodies & Blobs: Collected Essays (Bruxelles: La Lettre volée)
Menges, Achim (2007). 'Computational Morphogenesis: Integral Form Generation and Materialization Processes', in Proceedings of Em‘body’ing Virtual Architecture: The Third International Conference of the Arab Society for Computer Aided Architectural Design (ASCAAD 2007), 28–30 November 2007, Alexandria, Egypt, ed. by Ahmad Okeil, Aghlab Al-Attili and Zaki Mallasi, pp. 725–744
Menges, Achim (2006). 'Polymorphism', Architectural Design, 76, 2, pp. 78–87
Ottchen, Cynthia (2009). 'The Future of Information Modelling and the End of Theory: Less is Limited, More is Different', Architectural Design, 79, 2, pp. 22–27
Prusinkiewicz, Przemyslaw, and Aristid Lindenmayer (2004). The Algorithmic Beauty of Plants (New York: Springer-Verlag)
Sabin, Jenny E. and Peter Lloyd Jones (2008). 'Nonlinear Systems Biology and Design: Surface Design', in Proceedings of the 28th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Silicon + Skin: Biological Processes and Computation, Minneapolis 16–19 October 2008, ed. by Andrew Kudless, Neri Oxman and Marc Swackhamer, pp. 54–65
Sevaldson, Birger (2005). Developing Digital Design Techniques: Investigations on Creative Design Computing (PhD, Oslo School of Architecture)
Sevaldson, Birger (2000). 'Dynamic Generative Diagrams', in Promise and Reality: State of the Art versus State of Practice in Computing for the Design and Planning Process. 18th eCAADe Conference Proceedings, ed. by Dirk Donath (Weimar: Bauhaus Universität), pp. 273–276
Steadman, Philip (2008). The Evolution of Designs: Biological Analogy in Architecture and the Applied Arts (New York: Routledge)
Tierney, Therese (2007). Abstract Space: Beneath the Media Surface (Oxon: Taylor & Francis), p. 116
Weinstock, Michael (2006). 'Self-Organisation and the Structural Dynamics of Plants', Architectural Design, 76, 2, pp. 26–33
Weinstock, Michael (2006). 'Self-Organisation and Material Constructions', Architectural Design, 76, 2, pp. 34–41
External links
The 28th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Silicon + Skin: Biological Processes and Computation
Architectural design
Architectural theory | Digital morphogenesis | Engineering | 1,318 |
39,914,312 | https://en.wikipedia.org/wiki/List%20of%20Triangulum%27s%20suspected%20satellite%20galaxies | The Triangulum subgroup is made up of the Triangulum Galaxy (M33) and its satellites. Although the Triangulum Galaxy does not have any proven satellite galaxies, a number of galaxies are suspected of being in the system.
Andromeda XXII
Andromeda XXII is located much closer in projection to M33 than M31 (42 kiloparsecs (140 kly) vs. 224 kiloparsecs (730 kly)). This fact suggests that it might be the first Triangulum (M33) satellite ever discovered. However, it is currently catalogued as a satellite of Andromeda (M31).
Pisces Dwarf
Pisces Dwarf is an irregular dwarf galaxy that is part of the Local Group. The galaxy is also suspected of being a satellite galaxy of the Triangulum Galaxy.
Andromeda II
Andromeda II is a dwarf spheroidal galaxy about 2.22 Mly away in the constellation Andromeda. It is part of the Local Group and is catalogued as a satellite galaxy of the Andromeda Galaxy (M31), but it also lies close to the Triangulum Galaxy (M33), and it is not quite clear whether it is a satellite of one or the other.
Pisces VII
Pisces VII, also known as Triangulum III, is an ultra-faint dwarf galaxy in the constellation of Pisces. Its distance has been found to be 2.99 Mly, meaning it is likely associated with the Triangulum Galaxy.
Data
References
Local Group
Triangulum | List of Triangulum's suspected satellite galaxies | Astronomy | 329 |
51,606,548 | https://en.wikipedia.org/wiki/NGC%20212 | NGC 212 is a lenticular galaxy located approximately 369 million light-years from the Solar System in the constellation Phoenix. It was discovered on October 28, 1834 by John Herschel.
See also
Lenticular galaxy
List of NGC objects (1–1000)
Phoenix (constellation)
References
External links
SEDS
0212
2417
Lenticular galaxies
Phoenix (constellation) | NGC 212 | Astronomy | 73 |
17,320,405 | https://en.wikipedia.org/wiki/Harmon%20Craig | Harmon Craig (March 15, 1926 – March 14, 2003) was an American geochemist who worked briefly for the University of Chicago (1951-1955) before spending the majority of his career at Scripps Institution of Oceanography (1955-2003).
Craig was involved in numerous research expeditions, which visited the Great Rift Valley of East Africa, the crater of Loihi (now known as Kamaʻehuakanaloa), the Afar Depression of Ethiopia, Greenland's ice cores, and Yellowstone's geysers, among many others. This led to him being described as "the Indiana Jones of the Earth sciences", someone "whose overriding impulse was to get out and see the world they were studying".
Craig made many significant discoveries in geochemistry. He is credited with establishing the field of carbon isotope geochemistry by characterizing carbon's stable isotopic signatures in various natural materials. This had immediate applications in radiocarbon dating. By studying stable and radioactive carbon isotopes in the biosphere and air-sea system, he derived the atmospheric residence time of carbon dioxide with respect to oceanic uptake. His work laid the foundation for isotopic studies of the carbon cycle, and was fundamental to understanding carbon sequestering in the oceanic and the terrestrial biosphere and the modulation of global warming. In addition, from 1969 to 1989, Harmon Craig served as an editor for Earth and Planetary Science Letters.
Family and early life
Harmon Craig was born in Manhattan, in New York City, to John Richard Craig, Jr. and Virginia (Stanley) Craig. He was named after his uncle, Harmon Bushnell Craig (1895–1917), but did not use his middle name.
Harmon Craig's grandparents on his father's side were actors, directors and producers. During World War I, John Craig (1868-1931) and his wife, actress Mary Young, led the first professional American stock theater company to travel to France and entertain troops at the front. While they entertained the troops, their sons Harmon Bushnell Craig (1895-1917) and John Richard Craig, Jr. (1896-1945) served in the war effort.
John Craig, Jr. received a French Croix de Guerre for his efforts as a second lieutenant of artillery, working with French 75s.
Harmon Bushnell Craig died serving with an ambulatory corps run by the American Field Service, and was posthumously awarded the French Croix de Guerre.
In November 1924, John Craig, Jr. married Virginia Stanley of Wichita, Kansas. They had three children.
Harmon Craig's mother, Virginia Stanley, was descended from Quakers who helped found schools for freed slaves. His mother's involvement with the Quakers was a strong influence on Harmon Craig.
University of Chicago
Harmon Craig studied geology and chemistry at the University of Chicago. In 1944, he joined the U.S. Navy, serving as a communications and radar officer during World War II. After the war, he continued his education at University of Chicago, working with Nobel Laureate Harold Urey. Craig credits Urey with giving him valuable advice on how to choose scientific problems: "If you go into a project, it's got to be a scientific problem that has rooms that continue into other rooms."
Craig earned his Ph.D. in 1951, with The geochemistry of the stable carbon isotopes, a thesis on carbon isotope geochemistry.
Craig's thesis grew out of efforts to measure ancient sea temperatures. Craig used the carbon dioxide released from calcium carbonate fossils as a basis for future research involving the carbon system. The masses of carbon dioxide formed with 18O and 16O were used to calculate their respective abundances. Craig's study of the carbon isotopes produced corrections for mass fractionation in radiocarbon ages. His thesis work is considered a foundational accomplishment for its studies of 13C and 12C in a wide range of natural materials, including everything from ocean water to the atmosphere; volcanic gases; plants, coal, diamonds, and petroleum; sediments, igneous rocks and meteorites. His theory has been applied to applications as varied as determining food chains and identifying the sources of stone for ancient statues. Karl Turekian has stated that "Craig's 35-year-old dissertation is still the measure of all subsequent work in the field."
Craig joined the Enrico Fermi Institute at the University of Chicago as a research associate in 1951.
In 1953, Urey and Craig published results showing that chondrites, meteors from the Solar System, did not have a single fixed composition, as had been assumed. After carrying out analyses of the chemical composition of hundreds of different meteorites, they reported that chondrites fell into two distinguishable groups, high iron (H) and low iron (L) chondrites. Their work "underscored the value of reliable chemical data" and led to significant improvements in data analysis in the field. It led to a better understanding of the materials and processes involved in forming planets.
Scripps Institution of Oceanography
In 1955 Harmon Craig was recruited to Scripps Institution of Oceanography by Roger Revelle. His laboratory at Scripps eventually contained five mass spectrometers, one of them a portable unit.
As a professor of geochemistry and oceanography at Scripps, Craig developed new methods in radiocarbon dating and applied radioisotope and isotope distribution to various topics in marine-, geo-, and cosmochemistry. Craig produced fundamental findings about how the deep earth, oceans and atmosphere work.
During the 1950s Craig measured variations in the concentrations of hydrogen and oxygen isotopes in natural waters. In 1961, Craig identified the global meteoric water line, a linear relationship describing the occurrence of hydrogen and oxygen isotopes in terrestrial waters.
Craig also established the oxygen isotope shift in geothermal and volcanic fluids, demonstrating that the water is meteoric. His discovery outlined the relation between rocks and water in geothermal systems.
In 1963, Craig received a Guggenheim Fellowship, using it to spend a year at the Istituto de Geologia Nucleare, Pisa, Italy.
He described a framework for studying the isotopic composition of the hydrosphere, discussing kinetics, equilibrium, and the use of isotopes for paleoenvironmental reconstructions.
The work he presented with Louis I. Gordon on isotopic fractionation of the phase changes in water is known as the Craig-Gordon Model.
The model is applied to problems in watershed and ecosystem studies such as the calculation of evaporation.
It has been called "a corner stone of isotope geochemistry."
During the Nova Expedition of 1967, Craig and colleagues W. Brian Clarke (1937–2002) and M.A. Beg from McMaster University in Canada observed the Kermadec Trench in the Pacific Ocean.
They found unexpectedly high proportions of the helium-3 isotope in the ocean waters. Craig concluded that the isotope was present within the Earth's mantle and theorized that it was leaking into sea water through cracks in the sea floor.
Craig and coworkers studied the isotopic composition of atmospheric and dissolved oxygen among the dissolved gases, characterizing biochemical oxygen demand and gas uptake in the ocean mixed layer. Craig's measurements also showed that 210Pb is rapidly scavenged by sinking particulate matter.
In 1970, Craig teamed up with colleagues at Scripps, Columbia University's Lamont–Doherty Earth Observatory and the Woods Hole Oceanographic Institution to direct the GEOSECS Programme (geochemical ocean sections study) to investigate the chemical and isotopic properties of the world's oceans. GEOSECS produced the most complete set of ocean chemistry data ever collected.
In 1971, as part of the Antipode Expedition, Craig and his colleagues gathered hydrographic casts and other data, and discovered a benthic front separating the South Pacific deep and bottom water.
During the 1970s Craig examined the relationship of gases such as radon and helium to earthquake prediction, developing a monitoring network at thermal springs and wells near major fault lines in southernmost California. In 1979, he detected an increase in radon and helium as a precursor to an earthquake near Big Bear Lake, California.
In a long-term project, Harmon Craig and Valerie Craig (his wife) used carbon and oxygen isotopes to identify the sources of the marble used in ancient Greek sculptures and temples.
Craig discovered submarine hydrothermal vents by measuring helium-3 and radon emitted from seafloor spreading centers. He made 17 dives to the bottom of the ocean in the ALVIN submersible, including the first descent into the Mariana Trough, where he discovered hydrothermal vents nearly 3,700 m deep.
Craig showed that there was an excess of 3He relative to 4He, with implications for the understanding of ocean circulation and seafloor spreading.
Craig led 28 oceanographic expeditions and traveled to the East African Rift Valley, The Dead Sea, Tibet, Yunnan (China) and many other places to sample volcanic rocks and gases. He visited all the major volcanic island chains of the Pacific Ocean and Indian Ocean to collect lava samples. He identified 16 mantle hotspots where volcanic plumes rise from the Earth's outer core through the deep mantle by measuring their helium 3 to helium 4 ratio, identifying the higher helium 3 content present in the hotspots as primordial helium, trapped in the Earth's core when it was first formed.
Craig was one of the earliest people to analyze the gases trapped in glacier ice. He reported that methane in the atmosphere had doubled over the last 300 years due to human activities.
Awards and honors
Craig was elected to the National Academy of Sciences in 1979.
Craig won the VM Goldschmidt Medal of the Geochemical Society in 1979, the National Science Foundation's Special Creativity Award in Oceanography in 1982 and the Arthur L. Day Prize and Lectureship of the National Academy of Sciences in 1987.
He shared the Vetlesen Prize with Wallace S. Broecker in 1987.
In 1998 he was awarded the Balzan Prize for Geochemistry, from the International Balzan Foundation of Milan, Italy.
The Foundation commended him as "a pioneer in earth sciences who uses the varied tools of isotope geochemistry to solve problems of fundamental scientific importance and immediate relevance in the atmosphere, hydrosphere and solid earth." It was the first time that the prize had gone to a geochemist. Craig was quoted as saying "The Prize's most significant effect was to establish that Geochemistry, especially Isotope Geochemistry, which began in 1947, had come of age and is a mature science. This was much more important than the specific person chosen for the award."
He received an honorary degree from the University of Paris.
Death
Craig died at Thornton Hospital in La Jolla, California on 14 March 2003 from a massive heart attack a day before his seventy-seventh birthday.
References
External links
Oral history interview transcript with Harmon Craig on 29 April 1996, American Institute of Physics, Niels Bohr Library & Archives
1926 births
2003 deaths
American geochemists
Members of the United States National Academy of Sciences
American climatologists
Mass spectrometrists
University of Chicago alumni
Recipients of the V. M. Goldschmidt Award
Vetlesen Prize winners | Harmon Craig | Physics,Chemistry | 2,294 |
2,589,204 | https://en.wikipedia.org/wiki/Halogenated%20ether | Halogenated ethers are a subcategory of ethers—organic chemicals that contain an oxygen atom connected to two alkyl groups or similar structures. An example of an ether is the solvent diethyl ether. Halogenated ethers differ from other ethers because there are one or more halogen atoms—fluorine, chlorine, bromine, or iodine—as substituents on the carbon groups. . Examples of commonly used halogenated ethers include isoflurane, sevofluorane and desflurane.
History
An ideal inhaled anesthetic was not found until the 1950s. Before then, volatile substances like diethyl ether, which carries a severe risk of nausea, were used. Diethyl ether has the further disadvantage of being extremely flammable, especially in the presence of enriched oxygen mixtures.
James Young Simpson, an obstetrics surgeon, used ethers to help women relieve their labor pains but ultimately deemed them unsuitable due to their drawbacks. Simpson and his friends tested the halogenated hydrocarbon, chloroform, as a substitute inhalation agent during a house party. They woke from unconsciousness pleasantly surprised with its effectiveness. This was the first recorded successful use of halogenated hydrocarbons as anesthetics.
Applications
Anesthesia
Inhaled agents like diethyl ether are critical in anesthesia. Diethyl ether initially replaced non-flammable (but more toxic) halogenated hydrocarbons like chloroform and trichloroethylene. Halothane is a halogenated hydrocarbon anesthetic agent that was introduced into clinical practice in 1956. Due to its ease of use and improved safety profile with respect to organ toxicity, halothane quickly replaced chloroform and trichloroethylene.
Anesthetic practice was significantly improved in the following decades with the introduction of halogenated ethers such as isoflurane, enflurane, and sevoflurane. Since its introduction in the 1980s, isoflurane has been widely used due to its decreased risk of hepatotoxicity and better hemodynamic stability compared to halothane. The 1990s saw the development of sevoflurane, which was especially helpful in pediatric anesthesia because it provided even faster induction and recovery profiles.
All inhalation anesthetics in current clinical use are halogenated ethers, except for halothane (which is a halogenated hydrocarbon or haloalkane), nitrous oxide, and xenon.
Inhalation anesthetics are vaporized and mixed with other gases prior to their inhalation by the patient before or during surgery. These other gases always include oxygen or air, but may also include other gases such as nitrous oxide or helium. In most surgical situations, other drugs such as opiates are used for pain and skeletal muscle relaxants are used to cause temporary paralysis. Additional drugs such as midazolam may be used to produce amnesia during surgery. Although newer intravenous anesthetics (such as propofol) have increased the options of anesthesiologists, halogenated ethers remain a mainstay of general anesthesia.
Polymers
Perfluorinated epoxides can be used as comonomers for the production of polytetrafluoroethylene (PTFE).
Perfluorinated epoxides are a class of epoxides where all the hydrogen atoms on a carbon chain are replaced with fluorine atoms. The fluorine ensures compatibility with PTFE, while the epoxy group enables chemical bonding during polymerization. When used as comonomers, they can alter the microstructure of PTFE, reducing crystallinity and improving flexibility and toughness. This makes the polymer more suitable for applications like seals and gaskets, which require resilience under stress. Furthermore, perfluorinated epoxides enable the tailoring of specific functional properties, such as low surface energy, which is essential for applications requiring non-stick or low-friction surfaces.
Flame Retardant
Halogenated ethers play a significant role in enhancing the thermal stability and fire resistance of polymers. When applied to materials, they are effective in preventing items from catching fire because of the chemical's resistance to decomposition and effective flame suppression properties.
Most halogenated ethers contain bromine or chlorine. Brominated compounds are particularly effective because they release bromine radicals when exposed to heat. These radicals interrupt the combustion process by reacting with free radicals in the flame, thereby suppressing fire propagation. Chlorinated ethers can also function similarly by releasing chlorine radicals. Both types of halogens contribute to the flame-retardant properties, but brominated ethers are often favoured for their higher efficiency and lower required concentrations compared to their chlorinated counterparts.
Decabromodiphenyl ether (deca-BDE), a type of Polybrominated diphenyl ether (PBDEs), is a brominated flame retardant. It was widely used in polystyrene, acrylonitrile butadiene styrene (ABS), flexible polyurethane foam, textile coatings, wire/cable insulation, electrical connectors, and other interior parts. Decabromodiphenyl ether is one of many halogenated flame retardants that are now are heavily regulated or banned in many regions because of bioaccumulation and potential toxicity hazards. Most industries are now transitioning to alternative, less hazardous flame retardants. However, because of the widespread use of these chemicals in many products, it is anticipated that they will continue to persist in the environment.
Tetrabromobisphenol A bis(2,3-dibromopropyl) ether (TBBPA-DBPE) is another type of brominated flame retardant. It is widely used in electronic casings and circuit boards due to its high efficiency in reducing flammability. TBBPA-DBPE is also a flame retardant in plastics, paper, and textiles, and as plasticizer in adhesives and coatings.
Common halogenated ethers
Toxicology
Respiratory Depression
Halogenated ethers can cause respiratory depression by reducing the body's response to carbon dioxide and hypoxia, which affects breathing rates and depth. Some, like desflurane and isoflurane, are also known for causing airway irritation. This can cause coughing, breath-holding, or laryngospasm, particularly during inhalational induction of anesthesia. Sevoflurane has minimal airway irritation and is generally preferred for induction, particularly in children or those with sensitive airways.
Environmental Impact
Greenhouse Gas Emissions
Halogenated ethers are greenhouse gases and contribute to global warming. Compounds like desflurane and isoflurane have high global warming potentials (GWP), which measure their heat-trapping ability relative to carbon dioxide (CO₂). The GWP of a halogenated anesthetic can be up to 2,000 times that of CO₂. The use of these anesthetics in healthcare is a significant contributor to hospital-related greenhouse gas emissions. There is a growing focus on identifying lower-GWP alternatives and on enhancing recovery and recycling technologies for anesthetic gases.
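As a rough illustration of how GWP figures translate into CO₂-equivalent emissions, the Python sketch below simply multiplies the mass of agent released by its GWP. The GWP values used here are placeholders for illustration only, since published figures vary by source and time horizon:

```python
def co2_equivalent_kg(mass_kg, gwp):
    """CO2-equivalent emissions: mass of agent released times its GWP."""
    return mass_kg * gwp

# Illustrative placeholder values only; real GWPs depend on the
# time horizon and the assessment report consulted.
ILLUSTRATIVE_GWP = {"desflurane": 2000, "sevoflurane": 130}

for agent, gwp in ILLUSTRATIVE_GWP.items():
    print(agent, co2_equivalent_kg(1.0, gwp), "kg CO2e per kg released")
```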
Persistence and Bioaccumulation
Halogenated ethers can persist in the atmosphere for years due to their stability. These compounds do not readily degrade and thus remain in circulation long after their release, adding to the atmospheric burden of greenhouse gases. They are generally not bioaccumulative, owing to their high volatility and low tendency to dissolve in water or adhere to biological tissues, but their persistence raises concerns about long-term environmental effects. This is especially concerning in areas surrounding healthcare facilities, where they may be routinely released.
See also
Anesthesia
Ether
Halogen
Halogenation
Hydrocarbon
References
General anesthetics
Ethers
Organohalides
GABAA receptor positive allosteric modulators
NMDA receptor antagonists | Halogenated ether | Chemistry | 1,687 |
20,037,433 | https://en.wikipedia.org/wiki/Temperature-programmed%20reduction | Temperature-programmed reduction is a technique for the characterization of solid materials and is often used in the field of heterogeneous catalysis to find the most efficient reduction conditions, an oxidized catalyst precursor is submitted to a programmed temperature rise while a reducing gas mixture is flowed over it. It was developed by John Ward Jenkins whilst developing heterogeneous catalysts for Shell Oil company, but was never patented.
Process description
A simple container (U-tube) is filled with a solid or catalyst. This sample vessel is positioned in a furnace with temperature control equipment. A thermocouple is placed in the solid for temperature measurement. The air originally present in the container is flushed out with an inert gas (nitrogen, argon). Flow controllers are used to add hydrogen (for example, 10% hydrogen in nitrogen). The composition of the gaseous mixture is measured at the exit of the sample container with appropriate detectors (thermal conductivity detector, mass spectrometer). The sample in the oven is then heated according to a predefined temperature program. Heating rates are usually between 1 K/min and 20 K/min. If a reduction takes place at a certain temperature, hydrogen is consumed, which is recorded by the detector. In practice, measuring the production of water is a more accurate way of quantifying the reduction: the hydrogen concentration at the inlet can vary, so a decrease in it may be imprecise, whereas the starting concentration of water is zero, so any increase can be measured more accurately.
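As a sketch of how such a detector trace might be quantified, the Python example below integrates a baseline-corrected signal over time to estimate total consumption. The baseline handling and the calibration factor here are assumptions for illustration, not a standard from the literature:

```python
import numpy as np

def h2_uptake_mol(time_s, signal, calibration_mol_per_unit_s):
    """Integrate a baseline-corrected TPR detector trace to estimate
    total hydrogen consumed (or water produced) during reduction."""
    baseline = np.median(signal[:10])        # crude baseline from the start of the run
    corrected = np.clip(signal - baseline, 0, None)
    area = np.trapz(corrected, time_s)       # detector-units * seconds
    return area * calibration_mol_per_unit_s

# Example with synthetic data: a single Gaussian-shaped reduction peak
t = np.linspace(0, 3600, 361)                # 1 h run, 10 s sampling
peak = np.exp(-((t - 1800) / 200) ** 2)      # consumption peak at mid-run
print(h2_uptake_mol(t, peak + 0.05, 1e-6), "mol H2 (illustrative calibration)")
```

The peak temperature at which the corrected signal maximizes is what identifies the reduction step; the integrated area gives the extent of reduction.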
See also
Thermal desorption spectroscopy
References
External links
Temperature-programmed reduction and oxidation experiments with V2O5 catalysts
High-Pressure Temperature-Programmed Reduction of Sulfided Catalysts
Lecture slides on Temperature Programmed Reduction and Oxidation
Hydrogen technologies
Analytical chemistry
Materials science
Surface science
Catalysis | Temperature-programmed reduction | Physics,Chemistry,Materials_science,Engineering | 372 |
73,061,584 | https://en.wikipedia.org/wiki/Ulrike%20Meier%20Yang | Ulrike Meier Yang (born 1959) is a German-American applied mathematician and computer scientist specializing in numerical algorithms for scientific computing. She directs the Mathematical Algorithms & Computing group in the Center for Applied Scientific Computing at the Lawrence Livermore National Laboratory, and is one of the developers of the Hypre library of parallel methods for solving linear systems.
Education and career
Meier Yang did her undergraduate studies in mathematics at Ruhr University Bochum in Germany, and worked in the Central Institute of Applied Mathematics of the Forschungszentrum Jülich in Germany from 1983 to 1985 and at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign from 1985 to 1995. She completed her doctorate through the University of Illinois in 1995 with the dissertation A Family of Preconditioned Iterative Solvers for Sparse Linear Systems, supervised by Kyle Gallivan.
She joined the Lawrence Livermore National Laboratory research staff in 1998.
On January 1, 2023, Yang took office as a member of the SIAM Board of Trustees.
Recognition
She is a SIAM Fellow in the 2024 class of fellows, elected for "pioneering work on parallel algebraic multigrid and software, and broad impact on high-performance computing".
References
External links
1959 births
Living people
German mathematicians
German women mathematicians
German computer scientists
German women computer scientists
American mathematicians
American women mathematicians
American computer scientists
American women computer scientists
Applied mathematicians
Ruhr University Bochum alumni
University of Illinois Urbana-Champaign alumni
Lawrence Livermore National Laboratory staff
Fellows of the Society for Industrial and Applied Mathematics | Ulrike Meier Yang | Mathematics | 315 |
1,272,800 | https://en.wikipedia.org/wiki/Bencao%20Gangmu | The Bencao gangmu, known in English as the Compendium of Materia Medica or Great Pharmacopoeia, is an encyclopedic gathering of medicine, natural history, and Chinese herbology compiled and edited by Li Shizhen and published in the late 16th century, during the Ming dynasty. Its first draft was completed in 1578 and printed in Nanjing in 1596. The Compendium lists the materia medica of traditional Chinese medicine known at the time, including plants, animals, and minerals that were believed to have medicinal properties.
Li compiled his entries not only from hundreds of earlier works in the bencao medical tradition, but from literary and historical texts. He reasoned that a poem might have better value than a medical work and that a tale of the strange could illustrate a drug's effects. The Ming dynasty emperors did not pay too much attention to his work, and it was ignored.
Li's work contained errors and mistakes due to his limited scientific knowledge at the time. For example, Li claimed that all otters were male and that quicksilver (mercury) was not toxic.
Name
The title, translated as "Materia Medica, Arranged according to Drug Descriptions and Technical Aspects", uses two Chinese compound words. Bencao (Pen-ts'ao; "roots and herbs; based on herbs, pharmacopeia, materia medica") combines ben (本 'origin, basis') and cao (草 'grass, plant, herb'). Gangmu (Kang-mu; "detailed outline; table of contents") combines gang (綱 'main rope, hawser; main threads, essential principles') and mu (目 'eye, look; category, division').
The characters 綱 and 目 were later used as 'class' and 'order', respectively, in biological classification.
History
Li Shizhen travelled widely for field study, combed through more than 800 works of literature, and compiled material from the copious historical bencao literature. He modelled his work on a Song dynasty compilation, especially its use of non-medical texts. He worked for more than three decades, with the help of his son, Li Jianyuan, who drew the illustrations. He finished a draft of the text in 1578, the printer began to carve the blocks in 1593, but it was not published until 1596, three years after Li died. Li Jianyuan presented a copy to the Ming dynasty emperor, who saw it but did not pay much attention. Further editions were then published in 1603, 1606, 1640, and then in many editions, with increasing numbers of illustrations, down to the 21st century.
Contents
The text consists of 1,892 entries, each entry with its own name called a gang. The mu in the title refers to the synonyms of each name.
The Compendium has 53 volumes in total:
The opening table of contents lists entries, including 1,160 hand drawn diagrams and illustrations.
Volumes 1 to 4 – an index and a comprehensive list of herbs to treat the most common sicknesses.
Volume 5 to 53 – the main text, contains 1,892 distinct herbs, of which 374 were added by Li Shizhen. There are 11,096 side prescriptions to treat common illness (8,160 of which are compiled in the text).
The text is written in almost 2 million Chinese characters, classified into 16 divisions and 60 orders. For every herb there are entries on their names, a detailed description of their appearance and odor, nature, medical function, side effects, recipes, etc.
Errors
The text contains information that was proven to be wrong due to Li's limited scientific and technical knowledge. For example, it is claimed that quicksilver (mercury) and lead were not toxic. Li also claimed that otters are always male and that the Moupin langur is tall, has backwards feet and can be caught when it draws its upper lip over its eyes.
Evaluation
The British historian Joseph Needham writes about the Compendium in his Science and Civilisation in China.
The text provided classification of how traditional medicine was compiled and formatted, as well as biology classification of both plants and animals.
The text corrected some mistakes in the knowledge of herbs and diseases at the time. Several new herbs and more details from experiments were also included. It also has notes and records on general medical data and medical history.
The text includes information on pharmaceutics, biology, chemistry, geography, mineralogy, geology, history, and even mining and astronomy.
Translations
See also
Chinese herbology
List of traditional Chinese medicines
Medical cannibalism
Mellified man
Pharmacognosy
Traditional Chinese medicine
Traditional Chinese medicines derived from the human body
Yaoxing lun
References
Bibliography
(Review, Edward B. Jelks)
.
External links
Chinese source text at zh.wikisource.org
Pen ts'ao kang mu (The Great Herbal), page from 1672 edition, National Library of Medicine
Ming dynasty literature
Chinese medical texts
Pharmacological classification systems
1578 books
Memory of the World Register
Pharmacopoeias
Memory of the World Register in China | Bencao Gangmu | Chemistry | 1,060 |
12,953,358 | https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Computer-Aided%20Design%20of%20Integrated%20Circuits%20and%20Systems | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (sometimes abbreviated IEEE TCAD or IEEE Transactions on CAD) is a monthly peer-reviewed scientific journal covering the design, analysis, and use of computer-aided design of integrated circuits and systems. It is published by the IEEE Circuits and Systems Society and the IEEE Council on Electronic Design Automation (Institute of Electrical and Electronics Engineers). The journal was established in 1982 and the editor-in-chief is Rajesh K. Gupta (University of California at San Diego). According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.9.
Past editors-in-chief
Rajesh K. Gupta (2018-2022)
Vijaykrishnan Narayanan (2014-2018)
Sachin Sapatnekar (2010-2014)
See also
Electronic design automation
References
External links
Transactions on Computer-Aided Design
Monthly journals
English-language journals
Academic journals established in 1982
1982 establishments in the United States
Electrical and electronic engineering journals | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | Engineering | 202 |
28,829,670 | https://en.wikipedia.org/wiki/I-Jet%20Media | i-Jet Media is a Russian distribution network and publisher of social games on web portals and social networks. It was founded in 2005 and published its first game, Maffia News, in 2007. The company has foreign offices in Silicon Valley, United States and Beijing, China.
History
In 2005, i-Jet Media was founded as a developer of browser-based games. Its first game, Maffia News, was developed over the next two years. After launch, the game attracted more than 500,000 monthly users on Rambler. In 2007, Maffia News was published on Rambler.ru, and Time to Enforce, Real Wars and Steel Giants were launched. Alexey Kostarev, CEO and co-founder of i-Jet Media, opened an office in Silicon Valley, United States. After that, the company began preparing to launch social games in Russia.
In 2009, the company launched a cooperative project with Chinese developer Elex: i-Jet Media published Elex's games in Russian social networks — VKontakte, first of all.
In April 2009, i-Jet Media published Happy Harvest on the Russian Internet; the game attracted about 10,000,000 unique active users and earned US$20 million in less than one year. Happy Harvest was awarded the Google Trend prize as the best game of 2009. In 2009, i-Jet Media also opened its office in Beijing.
In 2010, i-Jet Media games were the first ever published on the Odnoklassniki.ru network. The company started exporting them to European social networks in Estonia, Finland, Germany, and Poland. In May 2010, the company published Farm Frenzy in cooperation with Alawar Entertainment, an international casual game publisher and distributor. In summer 2010, i-Jet Media started to publish games on the Russian segment of Facebook.
i-Jet Media also publishes mobile versions of games on Russian social networks. In July 2010, i-Jet Media and the mobile social network Spaces.ru established a game section, which attracted more than 25,000 visitors during its first day online.
In September 2010, i-Jet Media and Playdom announced that i-Jet Media would publish social games by Playdom in European social networks. They also agreed to combat piracy on the Russian social games market.
As of 2011, i-Jet Media works with 40 game developers and has published 80 social games on 30 social networks, and has several dozen social games with an audience of 70 million users.
i-Jet Connect
On August 15, 2011, during the Social Games Summit at Game Developers Conference Europe, i-Jet Media revealed its new technological platform, i-Jet Connect; the platform is intended to consolidate social game development solutions and features the ability to quickly publish titles to both global and local social networks, manage traffic, and incorporate brand advertisements. i-Jet Media has been conducting a closed beta test of i-Jet Connect since August 15, which was expected to finish in November 2011. Meanwhile, social game developers could register for the beta on the company's web site.
References
Further reading
Q&A: Russian Publisher Talks Eastern European Expansion For Social Games
Russian Social Game Publisher i-Jet Selling 25% Of Company
i-Jet Media Looks for Western Social Game Developers to Distribute in Europe, Russia
Deal Radar 2010: i-Jet Media
Beyond Facebook: Global Social Game Opportunities
Russian social network games a booming business
From Germany to Russia: A Model for Facebook’s Worldwide Growth
Mashable Awards: Announcing The Full List of Finalists
LiveJournal (Finally) Gets Its Game On
Russia’s I-Jet Media launches new social game platform (exclusive)
External links
(archived link, 2011-01-23)
Social software
Video game publishers
Video game companies established in 2005
Internet properties established in 2005
Privately held companies of Russia
Entertainment companies of Russia
Video game companies of Russia | I-Jet Media | Technology | 795 |
30,709,676 | https://en.wikipedia.org/wiki/NanoGagliato | NanoGagliato is an invitational gathering of scientists, physicians, business leaders, artists, and researchers to discuss the most current challenges and opportunities in the fields of nanomedicine and the nanosciences, from a multisciplinary perspective. This series of events takes place each year, at the end of July, in the town of Gagliato, Calabria, Italy.
Format
During two days of intense scientific exchanges, the participants in NanoGagliato address the challenges of translating research to the clinic by deploying technological advances born in the field of nanotechnology. On the last two days of the event, the group goes on excursions to renowned localities in Calabria and neighboring regions.
Public session
The culminating event of NanoGagliato is a town-hall style meeting, traditionally attended by hundreds of people of all ages from Gagliato and neighboring towns. The event is organized in concert with local citizens' associations and public institutions, and is held in Gagliato's town square. Highlights of the attending scientists' research are presented. Time is reserved for an open Q&A session, where members of the public are encouraged to ask about the impact of the presented research, the promise of new treatments for disease, and any ethical concerns.
Sessions
Founding session
The first NanoGagliato was convened in 2008 by Mauro Ferrari, Ph.D., with the help of his wife, Paola, and hosted at his summer residence in Gagliato. Attendees were asked to provide their own transportation. Several countries were represented, including Japan, the United Kingdom, Portugal, the United States, and France. Hospitality was kindly provided by local private residents.
Establishment of L'Accademia di Gagliato delle Nanoscienze
At the end of the first NanoGagliato, the participants agreed to establish a non-profit association, named L'Accademia di Gagliato delle Nanoscienze, and its children's chapter, La Piccola Accademia. These associations are now the organizers of the NanoGagliato events, and are supported by donations from individuals and corporations.
Successive sessions
During the 2010 session, four scholarships were awarded to four young Italian researchers in the field of biomedical engineering, in honor of Prof. Salvatore Venuta, the late Magnifico Rettore of the Magna Græcia University of Catanzaro. As part of the award, the finalists were invited to join the NanoGagliato events alongside the scientists.
Children's activities
La Piccola Accademia di Gagliato is the children's chapter of L'Accademia di Gagliato. The first series of educational activities for schoolchildren was launched in the summer of 2010. Inspired by the NanoDays developed by the NISE (Nanoscale Informal Science Education Network), La Piccola Accademia organized a lively and very successful program including games, presentations, trading cards, and a Q&A session with the scientists. Approximately fifty children from Gagliato and nearby towns attended the first session.
Recognition of Gagliato
In recognition of the unique role that the town of Gagliato has come to play as an international magnet for global leaders in nanotechnology, and as host of the NanoGagliato events, Gagliato has received the official appellation of “Paese delle NanoScienze”, town of the Nanosciences, conferred by the City Council.
References
External links
NanoGagliato - Official website
Corriere della Sera - Il paese dove si immagina il futuro - September 2010 (Italian)
Biofutur - NanoGagliato : la technoscience autrement - October 2010 (French)
Piccola Accademia di Gagliato - Official website
Science conferences
Nanotechnology institutions | NanoGagliato | Materials_science | 777 |
37,855,738 | https://en.wikipedia.org/wiki/Finishing%20oil | A finishing oil is a vegetable oil used for wood finishing.
Oil finishes are a historical treatment for wood, used primarily as a means of making it weather- or moisture-resistant. Finishing oils are easily applied by wiping with a cloth. They are also simply made, by extraction from plant sources with relatively simple processing. Historically, both of these were considerable advantages over varnishes, which depended upon exotic imported plant resins, complex preparation and careful application with expensive brushes.
The two most important finishing oils, to this day, are linseed oil and tung oil.
Linseed oil is extracted from the seeds of the flax plant, already extensively grown for the production of linen. The raw oil may be used, but it cures poorly and leaves a sticky surface. Normally boiled linseed oil is used. This has been prepared beforehand by boiling with lead, in the form of lead oxide, or with manganese salts. Modern boiled oils use a lead-free metallic drier added cold, such as cobalt resinate. Old linseed oil finishes yellow with age, owing to oxidation with the air. Linseed oil was also widely used for the production of oilcloth, a waterproof covering and rainwear material, formed by coating linen or cotton fabrics with the boiled oil.
Tung oil is pressed from the nuts of the tung tree. Raw tung cures better than raw linseed and so it is often used in this form. As tung oil yellows with age less than linseed, it is favoured for high quality and furniture work.
Most modern finishing oils use a blend of oil and a thinning agent such as white spirit. Raw oils tend to be applied too thickly, leading to a thick layer that cannot cure effectively and so remains sticky. A thinned oil is easier and more reliable to apply. Such commercial mixtures also contain metallic driers to improve their performance.
There are also mixtures sold as finishing oils. These are classed as 'long oils', which are predominantly oil with some varnish added, or 'short oils', which are predominantly varnish with some oil. Danish oil is a popular long-oil finishing oil. Spar varnish is a short-oil varnish, used for added flexibility and elasticity.
References
Wood finishing materials
Varnishes
Vegetable oils | Finishing oil | Chemistry | 464 |
14,160,015 | https://en.wikipedia.org/wiki/Malgrange%E2%80%93Ehrenpreis%20theorem | In mathematics, the Malgrange–Ehrenpreis theorem states that every non-zero linear differential operator with constant coefficients has a Green's function. It was first proved independently by and
.
This means that the differential equation
\[ P\!\left(\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_n}\right) u(x) = \delta(x), \]
where \(P\) is a polynomial in several variables and \(\delta\) is the Dirac delta function, has a distributional solution \(u\). It can be used to show that
\[ P\!\left(\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_n}\right) u(x) = f(x) \]
has a solution for any compactly supported distribution \(f\). The solution is not unique in general.
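As a concrete one-variable illustration of both existence and non-uniqueness (a standard textbook example, not part of the original statement):

```latex
% For P(\partial) = d/dx on the real line, the Heaviside step function
% H(x) (equal to 0 for x < 0 and 1 for x > 0) is a fundamental solution,
% since in the sense of distributions
\[
  \frac{d}{dx} H(x) = \delta(x).
\]
% Non-uniqueness: for any constant c, H(x) + c is also a solution,
% because constants are annihilated by d/dx:
\[
  \frac{d}{dx} \bigl( H(x) + c \bigr) = \delta(x).
\]
```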
The analogue for differential operators whose coefficients are polynomials (rather than constants) is false: see Lewy's example.
Proofs
The original proofs of Malgrange and Ehrenpreis were non-constructive as they used the Hahn–Banach theorem. Since then several constructive proofs have been found.
There is a very short proof using the Fourier transform and the Bernstein–Sato polynomial, as follows. By taking Fourier transforms the Malgrange–Ehrenpreis theorem is equivalent to the fact that every non-zero polynomial \(P\) has a distributional inverse. By replacing \(P\) by the product with its complex conjugate, one can also assume that \(P\) is non-negative. For non-negative polynomials \(P\) the existence of a distributional inverse follows from the existence of the Bernstein–Sato polynomial, which implies that \(P^s\) can be analytically continued as a meromorphic distribution-valued function of the complex variable \(s\); the constant term of the Laurent expansion of \(P^s\) at \(s = -1\) is then a distributional inverse of \(P\).
Other proofs, often giving better bounds on the growth of a solution, have since been published, and detailed discussions of the regularity properties of the fundamental solutions are available in the literature.
A short constructive proof was presented by Wagner (2009): the distribution
\[
  E = \frac{1}{\overline{P_m(2\eta)}} \sum_{j=0}^{m} a_j \, e^{\lambda_j \eta x} \, \mathcal{F}^{-1}_{\xi}\!\left( \frac{\overline{P(i\xi + \lambda_j \eta)}}{P(i\xi + \lambda_j \eta)} \right)
\]
is a fundamental solution of \(P(\partial)\), i.e., \(P(\partial)E = \delta\), if \(P_m\) is the principal part of \(P\), \(\eta \in \mathbb{R}^n\) with \(P_m(\eta) \neq 0\), the real numbers \(\lambda_0, \ldots, \lambda_m\) are pairwise different, and
\[
  a_j = \prod_{k=0,\, k \neq j}^{m} (\lambda_j - \lambda_k)^{-1}.
\]
References
Differential equations
Theorems in analysis
Schwartz distributions | Malgrange–Ehrenpreis theorem | Mathematics | 374 |
1,633,875 | https://en.wikipedia.org/wiki/Feedwater%20heater | A feedwater heater is a power plant component used to pre-heat water delivered to a steam generating boiler. Preheating the feedwater reduces the irreversibilities involved in steam generation and therefore improves the thermodynamic efficiency of the system. This reduces plant operating costs and also helps to avoid thermal shock to the boiler metal when the feedwater is introduced back into the steam cycle.
In a steam power plant (usually modeled as a modified Rankine cycle), feedwater heaters allow the feedwater to be brought up to the saturation temperature very gradually. This minimizes the inevitable irreversibilities associated with heat transfer to the working fluid (water). See the article on the second law of thermodynamics for a further discussion of such irreversibilities.
Cycle discussion and explanation
The energy used to heat the feedwater is usually derived from steam extracted between the stages of the steam turbine. Therefore, the steam that would be used to perform expansion work in the turbine (and therefore generate power) is not utilized for that purpose. The percentage of the total cycle steam mass flow used for the feedwater heater is termed the extraction fraction and must be carefully optimized for maximum power plant thermal efficiency since increasing this fraction causes a decrease in turbine power output.
Feedwater heaters can also be "open" or "closed" heat exchangers. An open heat exchanger is one in which extracted steam is allowed to mix with the feedwater. This kind of heater will normally require a feed pump at both the feed inlet and outlet since the pressure in the heater is between the boiler pressure and the condenser pressure. A deaerator is a special case of the open feedwater heater which is specifically designed to remove non-condensable gases from the feedwater.
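For the open heater just described, the required extraction fraction follows from a steady-flow energy balance. Below is a minimal sketch of that balance for an ideal (adiabatic, well-mixed) open feedwater heater; the enthalpy values are illustrative assumptions, not data from any particular plant.

```python
def extraction_fraction(h_extract, h_feed_in, h_feed_out):
    """Mass/energy balance for an ideal open feedwater heater.

    Mixing y kg of extraction steam with (1 - y) kg of feedwater:
        y * h_extract + (1 - y) * h_feed_in = h_feed_out
    Solving for the extraction fraction y gives:
    """
    return (h_feed_out - h_feed_in) / (h_extract - h_feed_in)

# Illustrative enthalpies in kJ/kg (assumed values):
#   h_extract  - steam bled from the turbine
#   h_feed_in  - feedwater leaving the condensate pump
#   h_feed_out - saturated liquid leaving the heater
y = extraction_fraction(h_extract=2800.0, h_feed_in=192.0, h_feed_out=763.0)
print(f"extraction fraction y = {y:.3f}")  # about 0.22 of the cycle mass flow
```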
Closed feedwater heaters are typically shell-and-tube heat exchangers where the feedwater passes through the tubes and is heated by turbine extraction steam. These do not require separate pumps before and after the heater to boost the feedwater to the pressure of the extracted steam, as with an open heater. However, the extracted steam (which is most likely almost fully condensed after heating the feedwater) must then be throttled to the condenser pressure, an isenthalpic process that results in some entropy gain with a slight penalty on overall cycle efficiency.
Many power plants incorporate a number of feedwater heaters and may use both open and closed components. Feedwater heaters are used in both fossil- and nuclear-fueled power plants.
Economizer
An economizer serves a similar purpose to a feedwater heater, but is technically different as it does not use cycle steam for heating. In fossil-fuel plants, the economizer uses the lowest-temperature flue gas from the furnace to heat the water before it enters the boiler proper. This allows for the heat transfer between the furnace and the feedwater to occur across a smaller average temperature gradient (for the steam generator as a whole). System efficiency is therefore further increased when viewed with respect to actual energy content of the fuel.
Most nuclear power plants do not have an economizer. However, the Combustion Engineering System 80+ nuclear plant design and its evolutionary successors, (e.g. Korea Electric Power Corporation's APR-1400) incorporate an integral feedwater economizer. This economizer preheats the steam generator feedwater at the steam generator inlet using the lowest-temperature primary coolant.
Testing
A widely used code giving the procedures, direction, and guidance for determining the thermo-hydraulic performance of a closed feedwater heater is the ASME PTC 12.1 Feedwater Heater Standard.
See also
Fossil fuel power plant
Thermal power plant
ASME Codes
The American Society of Mechanical Engineers (ASME), publishes the following Code:
PTC 4.4 Gas Turbine Heat Recovery Steam Generators
References
External links
Power plant diagram
High pressure feedwater heaters
Mechanical engineering
Chemical process engineering
ru:Экономайзер (энергетика) | Feedwater heater | Physics,Chemistry,Engineering | 847 |
4,205,544 | https://en.wikipedia.org/wiki/World%20Wide%20Molecular%20Matrix | The World Wide Molecular Matrix (WWMM) was a proposed electronic repository for unpublished chemical data. First introduced in 2002 by Peter Murray-Rust and his colleagues in the chemistry department at the University of Cambridge in the United Kingdom, WWMM provided a free, easily searchable database for information about thousands of complicated molecules, data that would otherwise remain inaccessible to scientists.
Murray-Rust, a chemical informatics specialist, has estimated that 80% of the results produced by chemists around the world is never published in scientific journals. Most of this data is not ground-breaking, yet it could conceivably be of use to scientists doing related projects—if they could access it. The WWMM was proposed as a solution to this problem. It would house the results of experiments on over 100,000 molecules in physical chemistry, organic chemistry, biochemistry and medicinal chemistry.
In other scientific fields, the need for a similar depository to house inaccessible information could be more acute. In a presentation at the "CERN Workshop on Innovations in Scholarly Communications (OAI4)", Murray-Rust said that chemistry actually leads other fields in published data. He estimated that the majority of the data in some scientific fields never reaches publication.
Although scientific in nature, the WWMM was part of the broader open archives and open source movements, pushes to make more and more information freely available to any user via the Internet or World Wide Web. In his CERN presentation, Murray-Rust stated that the WWMM was a "response to the expense of [scientific] journals", and he asked the rhetorical question, "Can we win the war to make data open, or will it be absorbed into the publishing and pseudo-publishing world?" Murray-Rust and his colleagues are also responsible for the development of the Chemical Mark-up Language (CML), a variant of XML intended for chemists.
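Since CML documents are ordinary XML, a molecule record can be generated with standard XML tooling. Below is a minimal sketch using Python's standard library; the element and attribute names (molecule, atomArray, atom, bondArray) follow common CML usage, but the exact schema details here are illustrative assumptions rather than a validated CML document.

```python
# Minimal sketch of a CML-style record for water; element and attribute
# names follow common CML usage but are illustrative assumptions here.
import xml.etree.ElementTree as ET

mol = ET.Element("molecule", id="water")
atoms = ET.SubElement(mol, "atomArray")
for atom_id, element in [("a1", "O"), ("a2", "H"), ("a3", "H")]:
    ET.SubElement(atoms, "atom", id=atom_id, elementType=element)
bonds = ET.SubElement(mol, "bondArray")
for refs in ("a1 a2", "a1 a3"):  # each O-H bond references its two atoms
    ET.SubElement(bonds, "bond", atomRefs2=refs, order="1")

print(ET.tostring(mol, encoding="unicode"))
```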
See also
The open archives initiative (OAI)
The science of Informatics
Chemical Mark-up language (CML)
References
External links
The home page of Dr. Peter Murray-Rust at the University of Cambridge
The Cambridge Center for molecular informatics
An outline of the WWMM
CERN Workshop on Innovations in Scholarly Communication (OAI4)
Data management | World Wide Molecular Matrix | Technology | 458 |
53,261,402 | https://en.wikipedia.org/wiki/NGC%201222 | NGC 1222 is an early-type lenticular galaxy located in the constellation of Eridanus. The galaxy was discovered on 5 December 1883 by the French astronomer Édouard Stephan. John Louis Emil Dreyer, the compiler of the New General Catalogue, described it as a "pretty faint, small, round nebula" and noted the presence of a "very faint star" superposed on the galaxy.
NGC 1222's morphological type of S0− would suggest that it should have a mostly smooth profile and a very dull appearance. However, the galaxy was imaged by the Hubble Space Telescope in 2016, and the image showed that there were several bright blue star forming regions, as well as dark reddish areas of interstellar dust. NGC 1222 is currently interacting with and swallowing two dwarf galaxies that are supplying the gas and dust needed to become a starburst galaxy.
One supernova has been observed in NGC 1222: SN 2024any (type Ia, mag. 17.59).
See also
NGC 1275, another starburst galaxy
References
External links
1222
011774
Lenticular galaxies
Starburst galaxies
Peculiar galaxies
Eridanus (constellation)
Discoveries by Édouard Stephan
Markarian galaxies | NGC 1222 | Astronomy | 246 |
2,376,842 | https://en.wikipedia.org/wiki/BT%20Monocerotis | BT Monocerotis (Nova Monocerotis 1939) was a nova, which lit up in the constellation Monoceros in 1939. It was discovered on a spectral plate by Fred L. Whipple on December 23, 1939. BT Monocerotis is believed to have reached mag 4.5, which would have made it visible to the naked eye, but that value is an extrapolation; the nova was not observed at peak brightness Its brightness decreased after the outbreak by 3 magnitudes in 182 days, making it a "slow nova". The light curve for the eruption had a long plateau period.
Photographic plates taken for 30 years prior to the eruption show that BT Monocerotis remained visible during that period. Prior to 1933, BT Monocerotis had an average magnitude of 15.52 with a variation of 1.2 magnitudes. It retained the same magnitude until the eruption, showing a variation of 0.9 magnitudes. Thus it did not show a pre-eruption rise in brightness.
This is an interacting binary star system consisting of a white dwarf primary of 1.04±0.06 solar masses and a main sequence star of 0.87±0.06 solar masses with a stellar classification of G8V. The orbit has a period of 0.33381379 days and an inclination of 88.2° to the line of sight from the Earth, resulting in an eclipsing binary. The nova eruption is believed to have been driven by mass transferred from the secondary star to the white dwarf. It remains uncertain whether the white dwarf has an accretion disk formed by this material. Matter outflowing from the system has a line-of-sight velocity of 450 km/s, but may be moving at up to 3,200 km/s if the flow is strictly bipolar.
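For scale, the quoted masses and period imply a very tight orbit. The sketch below applies Kepler's third law in solar units (a in AU, P in years, masses in solar masses); treating the published figures as exact is an assumption made for illustration only.

```python
# Orbital separation from Kepler's third law: a**3 = (M1 + M2) * P**2,
# with a in AU, P in years, and masses in solar masses.
M1, M2 = 1.04, 0.87          # white dwarf and companion, from the text
P = 0.33381379 / 365.25      # orbital period converted from days to years

a_au = ((M1 + M2) * P**2) ** (1.0 / 3.0)
a_solar_radii = a_au * 215.0  # 1 AU is about 215 solar radii

print(f"a = {a_au:.4f} AU = {a_solar_radii:.1f} solar radii")  # about 0.0117 AU, 2.5 R_sun
```

A separation of roughly 2.5 solar radii is consistent with the close, interacting configuration described above.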
References
External links
https://web.archive.org/web/20050905163215/http://www.tsm.toyama.toyama.jp/curators/aroom/var/nova/1930.htm
Novae
Monoceros
1939 in science
Monocerotis, BT | BT Monocerotis | Astronomy | 436 |
11,382,726 | https://en.wikipedia.org/wiki/Immunofixation | Immunofixation permits the detection and typing of monoclonal antibodies or immunoglobulins in serum or urine. It is of great importance for the diagnosis and monitoring of certain blood related diseases such as myeloma.
Principle
The method detects by precipitation: when a soluble antigen (Ag) is brought in contact with the corresponding antibody, precipitation occurs, which may be visible with the naked eye or microscope.
Immunofixation first separates antibodies in a mixture as a function of their specific electrophoretic mobility. For the purpose of identification, antisera are used that are specific for the targeted antibodies.
Specifically, immunofixation allows the detection of monoclonal antibodies representative of diseases such as myeloma or Waldenström macroglobulinemia.
Technique
The technique consists of depositing a sample of serum (or urine which has been previously concentrated) on a gel. After application of an electric current that separates the proteins according to their size, antibodies specific for each type of immunoglobulin are laid upon the gel. More or less narrow bands thus appear on the gel, corresponding to the different immunoglobulins.
Immunofixation, like immunoelectrophoresis, takes place in two steps:
The first step is identical for both techniques. It consists of depositing the immunoglobulins contained in the serum or urine on a gel and then separating them according to their electrophoretic mobility by making them migrate under the effect of an electric field. This migration depends on the mass and charge of the antigen. Once the immunoglobulins are separated, the next step can begin.
The second step differs between the techniques. Immunofixation requires electrophoresis of the serum proteins in replicate lanes. Then, specific anti-immunoglobulin antisera are used to treat each replicate. For this, the antisera are not placed in a channel, as in immunoelectrophoresis, but are added individually to each migration lane. The presence of a monoclonal immunoglobulin results in the appearance of a narrow band after staining of the precipitated complexes. For example, in the case of an IgG lambda, a narrow band will appear both in the lane treated with anti-G antiserum and in the lane treated with anti-lambda antiserum.
Merits
Immunofixation tends to replace immunoelectrophoresis because:
it is faster (results within three hours);
it is somewhat more sensitive. Immunofixation may reveal an immunoglobulin missed by protein electrophoresis, especially at low concentrations (less than 1 gram/litre);
it can be partially automated and can be used in more laboratories;
it is more easily read and interpreted.
Demerits
Immunofixation is, however, only sensitive to immunoglobulins and is more expensive than protein electrophoresis.
See also
Immunoelectrophoresis
References
Sources
:fr:Immunofixation
External links
- "Immunofixation - serum"
- "Immunofixation - urine"
Glycoproteins
Immunologic tests | Immunofixation | Chemistry,Biology | 680 |
15,307,810 | https://en.wikipedia.org/wiki/CIE%201960%20color%20space | The CIE 1960 color space ("CIE 1960 UCS", variously expanded Uniform Color Space, Uniform Color Scale, Uniform Chromaticity Scale, Uniform Chromaticity Space) is another name for the chromaticity space devised by David MacAdam.
The CIE 1960 UCS does not define a luminance or lightness component, but the Y tristimulus value of the XYZ color space or a lightness index similar to W* of the CIE 1964 color space are sometimes used.
Today, the CIE 1960 UCS is mostly used to calculate correlated color temperature, where the isothermal lines are perpendicular to the Planckian locus. As a uniform chromaticity space, it has been superseded by the CIE 1976 UCS.
Background
Judd determined that a more uniform color space could be found by a simple projective transformation of the CIEXYZ tristimulus values:
(Note: What we have called "G" and "B" here are not the G and B of the CIE 1931 color space and in fact are "colors" that do not exist at all.)
Judd was the first to employ this type of transformation, and many others were to follow. Converting this RGB space to chromaticities one finds
MacAdam simplified Judd's UCS for computational purposes:
\[ u = \frac{4x}{12y - 2x + 3}, \qquad v = \frac{6y}{12y - 2x + 3}. \]
The Colorimetry committee of the CIE considered MacAdam's proposal at its 14th Session in Brussels for use in situations where more perceptual uniformity was desired than the (x, y) chromaticity space could offer, and officially adopted it as the standard UCS the next year.
Relation to CIE XYZ
U, V, and W can be found from X, Y, and Z using:
\[ U = \tfrac{2}{3}X, \qquad V = Y, \qquad W = \tfrac{1}{2}(-X + 3Y + Z). \]
Going the other way:
\[ X = \tfrac{3}{2}U, \qquad Y = V, \qquad Z = \tfrac{3}{2}U - 3V + 2W. \]
We then find the chromaticity variables as:
\[ u = \frac{U}{U + V + W} = \frac{4X}{X + 15Y + 3Z}, \qquad v = \frac{V}{U + V + W} = \frac{6Y}{X + 15Y + 3Z}. \]
We can also convert from u and v to x and y:
\[ x = \frac{3u}{2u - 8v + 4}, \qquad y = \frac{2v}{2u - 8v + 4}. \]
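The forward and inverse chromaticity conversions above can be checked numerically. A minimal sketch follows; the function names are ours, chosen for illustration, not part of any standard library.

```python
# CIE 1960 UCS conversions, using the formulas quoted above.
def xyz_to_uv(X, Y, Z):
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 6.0 * Y / d

def uv_to_xy(u, v):
    d = 2.0 * u - 8.0 * v + 4.0
    return 3.0 * u / d, 2.0 * v / d

# Round trip through an equal-energy stimulus (X = Y = Z = 1):
u, v = xyz_to_uv(1.0, 1.0, 1.0)
print(u, v)            # 0.2105..., 0.3157...
print(uv_to_xy(u, v))  # back to x = y = 1/3
```

The round trip returns the equal-energy white point (1/3, 1/3), as expected.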
Relation to CIE 1976 UCS
The CIE 1976 UCS coordinates are obtained from the 1960 coordinates by
\[ u' = u, \qquad v' = \tfrac{3}{2} v, \]
that is, u is left unchanged and v is stretched by a factor of 3/2.
References
External links
Free Windows utility to generate chromaticity diagrams. Delphi source included.
Color space | CIE 1960 color space | Mathematics | 413 |
733,904 | https://en.wikipedia.org/wiki/Chaos%20Communication%20Congress | The Chaos Communication Congress is an annual hacker conference organized by the Chaos Computer Club. The congress features a variety of lectures and workshops on technical and political issues related to security, cryptography, privacy and online freedom of speech. It has taken place regularly at the end of the year since 1984, with the current date and duration (27–30 December) established in 2005. It is considered one of the largest events of its kind, alongside DEF CON in Las Vegas.
History
The congress is held in Germany. It started in 1984 in Hamburg, moved to Berlin in 1998, and back to Hamburg in 2012, having exceeded the capacity of the Berlin venue. Since then it has attracted an increasing number of people, with attendance growing further in 2012, 2015, and 2017. From 2017 to 2019 it took place at the Trade Fair Grounds in Leipzig, since the Hamburg venue was closed for renovation in 2017 and the existing space was not enough for the growing congress. The congress moved back to Hamburg in 2023, after the renovation of the CCH was finished.
A large range of speakers are featured. The event is organized by volunteers called Chaos Angels. The non-member entry fee for four days was €100 in 2016 and was raised to €120 in 2018 to include a public transport ticket for the Leipzig area.
An important part of the congress are the assemblies, semi-open spaces with clusters of tables and internet connections for groups and individuals to collaborate and socialize in projects, workshops and hands-on talks. These assembly spaces, introduced at the 2012 meeting, combine the hack center project space and distributed group spaces of former years.
From 1997 to 2004 the congress also hosted the annual German Lockpicking Championships. 2005 was the first year the Congress lasted four days instead of three and lacked the German Lockpicking Championships.
2020 was the first year where the Congress did not take place at a physical location due to the COVID-19 pandemic, giving way to the first Remote Chaos Experience (rC3).
The Chaos Computer Club announced it would return to the newly renovated Congress Center Hamburg for the 37th edition of the Chaos Communication Congress. The announcement confirmed the usual dates of 27-30 December, notably omitting the year in which it would be held. On 18 October 2022, the club confirmed that the congress would indeed not be held in 2022. On 6 October 2023, the CCC announced that 37C3 would take place on the usual dates in 2023.
Congresses from 1984 to today
See also
Chaos Communication Camp
SIGINT
DEF CON
Notes
References
External links
Overview of Chaos Communication Congresses
Blog for the Chaos Communication Congress (and other events)
Cryptography
Freedom of speech
Free-software events
Hacker conventions
Privacy
Recurring events established in 1984
Security
1984 establishments in West Germany
Culture in Berlin
20th century in Hamburg
21st century in Hamburg
Information technology in Germany | Chaos Communication Congress | Mathematics,Engineering | 582 |
10,549,030 | https://en.wikipedia.org/wiki/Mechanical%20arm | A mechanical arm is a machine that usually mimics the action of a human arm. Mechanical arms are composed of multiple beams connected by hinges powered by actuators. One end of the arm is attached to a firm base while the other has a tool. They can be controlled by humans either directly or over a distance. A computer-controlled mechanical arm is called a robotic arm. However, a robotic arm is just one of many types of different mechanical arms.
Mechanical arms can be as simple as tweezers or as complex as prosthetic arms. In other words, if a mechanism can grab an object, hold an object, and transfer an object just like a human arm, it can be classified as a mechanical arm.
Recent advancements promise future improvements in the medical field, with prosthetics and with the mechanical arm in general. When mechanical engineers build complex mechanical arms, the goal is for the arm to perform tasks that ordinary human arms cannot complete.
History
Robotic Arms
Researchers have classified robotic arms by their industrial applications, medical applications, technology, and so on. The robotic arm was first introduced in the late 1930s by William Pollard and Harold A. Roseland, who developed a paint "sprayer" with about five degrees of freedom and an electric control system. Pollard's design was called the “first position controlling apparatus.” Pollard never built his arm himself, but it served as a basis for later inventors.
In 1961 the Unimate was invented, later evolving into the PUMA arm. In 1963 the Rancho arm was designed, with many others to follow. Although Joseph Engelberger marketed the Unimate, it was George Devol who invented the robotic arm, with a focus on using the Unimate for tasks harmful to humans. In 1959, a 2700-pound Unimate prototype was installed at the General Motors die-casting plant in Trenton, New Jersey. The Unimate 1900 series became the very first production robotic arm for die-casting, and within a very short period at least 450 such robotic arms had been produced and put to use. It remains one of the most significant contributions of the last one hundred years. As years went by, technology evolved, enabling better robotic arms. Companies were not the only ones inventing robotic arms; colleges did as well. In 1969, Victor Scheinman of Stanford University invented the Stanford arm, an electrically powered arm that could move through six axes. Marvin Minsky of MIT built a robotic arm for the Office of Naval Research, possibly for underwater exploration; this electric-hydraulic, high-dexterity arm had twelve single-degree-of-freedom joints. Robots were initially created to perform tasks that humans found boring, harmful, or tedious.
Prosthetics
Before the Modern Era
The history of prosthetic limbs owes much to some great inventors. The world's earliest functioning prosthetic body parts are two toes from Ancient Egypt. Because of their functionality, these toes are an example of a true prosthetic device; such toes carry at least forty percent of the body's weight. Most prosthetic limbs are produced after intensive study of the human form using modern equipment. Prosthetic limbs were also used in war: in the early 16th century, a German knight who served with the Holy Roman Emperor Charles V was injured in battle. Even though prosthetic limbs were expensive, this particular limb was manufactured by an armor specialist. Prosthetics allowed soldiers to continue their careers: the fingers could grasp a shield, hold a horse's reins, and even hold a quill when drafting an important document.
Modern Era
As time passed, limb design started to focus on people's specialties as well. For example, a pianist would need a different type of mechanical arm than others. Their limbs would be widespread and their middle and ring fingers would be smaller than normal. In addition, an arm design of padded tips on the thumb and little finger would allow a pianist to span a series of notes while playing their instrument.
Technology for prosthetic limbs kept evolving after World War I. After the war, laborers returned to work using prosthetic legs or arms, the latter valued for their ability to grip objects. This is one of the designs that has remained largely unchanged over the past century. People with such prosthetics could do everyday things like driving a car, eating food, and much more.
Arms for Automotive Manufacturing
Without the mechanical arm, the production of cars would be extremely difficult. Mechanical arms were first applied to this problem in 1962, when one was used in a General Motors factory. Using this mechanical arm, also known as an industrial robot, engineers were able to achieve difficult welding tasks. In addition, the removal of die-castings was another important step in improving the abilities of a mechanical arm: with such technology, engineers could easily remove unneeded metal from underneath mold cavities. Stemming from these uses, welding became an increasingly popular application for mechanical arms.
In 1979, the company Nachi refined the first motor-driven robot to perform spot welding. Spot welding is a very important process in car manufacturing, used to join separate metal surfaces. Soon enough, mechanical arms spread to additional car companies.
As constant improvements were being made, the National University of Singapore (NUS) made even further advancements by inventing a mechanical arm that can lift up to 80 times its own weight and extend to five times its original length. These advancements were first introduced in 2012, and car companies can benefit greatly from this new scientific knowledge.
Surgical Arms
Surgical arms were first used in 1985, when a neurosurgical biopsy was performed. In 1990, the FDA allowed endoscopic surgical procedures to be performed by the AESOP system, developed by Computer Motion. A fully robotic surgery system followed in 2000, when the da Vinci Surgical System became the first robotic surgery system approved by the FDA.
Types
Prosthetic arms
Prosthetics may not seem like mechanical arms, but they are. A prosthetic arm uses hinges and a wire harness to allow a person with limb loss to perform everyday functions. Researchers have created arms modeled on the structure of a human arm; even though one such arm looks like a skeletal metal arm, it moves like a normal arm and hand. This arm was made by Johns Hopkins University in 2015. It has 26 joints (far more than older designs) and is capable of lifting up to 45 pounds. The arm has 100 sensors that connect to the human mind, allowing the person wearing it to move it as if it were just another part of his or her body. People who have used this new prosthetic report being able to feel texture, making prosthetics a major part of the mechanical arm category.
Rover arms
In space, NASA has used mechanical arms for new planetary discoveries, including sending rovers to other planets to collect samples. With a rover, NASA can keep the vehicle on its designated planet and explore at will. Mechanical arms are also attached to spacecraft acting as satellite stations in Earth orbit, where they help grab debris that might damage other satellites and keep astronauts safe when repairs to a ship or satellite are needed. Space is not the only place rovers with mechanical arms are used: SWAT teams and other special forces use such rovers to enter a building or unsafe area to defuse a bomb, place a charge, or repair vehicles.
Everyday Mechanical Arms
Every day, a person might use some type of mechanical arm. Many mechanical arms are used for very ordinary things, such as grabbing an out-of-reach object with a pincer arm: a simple system of three joints converts a squeeze-and-release motion into the closing of the pincer around a desired object. Even objects that seem very simple, like tweezers, can be classified as mechanical arms. This simple tool is used millions of times daily, thanks to an engineer's simple but effective design.
Modifications and Advancements
Muscle Tissue for Mechanical Arms
The National University of Singapore has started making artificial muscle tissue to be placed in mechanical arms to help people pick up heavy loads. This artificial tissue can lift up to 500 times its own weight; the more of the tissue engineers place in a mechanical arm, the greater the arm's lifting strength. A typical adult weighs around 160 to 180 pounds, so a person of that weight could then lift an object weighing around 80,000 pounds. This could make construction sites much safer, allowing workers to simply carry construction supplies instead of using a crane that can collapse in harsh weather. Utility vehicles for construction may someday be a thing of the past.
Sensor Mechanical Arms
New mechanical arms used as prosthetics are starting to gain sensors that, with the help of a chip attached to the user's spinal cord, allow the person to move the arm. Since sensors can be programmed with high sensitivity to anything they touch, people with prosthetic arms will also be able to feel the objects they are touching, down to the slightest vibration. This can be both a danger and a benefit: too much pressure can cause the person severe pain, but regaining the sense of touch also gives greater awareness of incoming danger.
Lifelike Mechanical Arms
Lifelike mechanical arms and ordinary human arms can be so similar that it may be hard to distinguish between the two. The reason is a spray coating that makes a prosthetic arm look real. This futuristic idea is becoming a reality: scientists are even starting to create sleeve-type artificial skins to keep a prosthetic arm looking like a natural one, so that people with prosthetics need not feel self-conscious about their robotic arm.
See also
Robotic arm
Excavator
Articulated robot
Mechanical Engineering
Mars Curiosity Rover - Robotic Arm
Leonardo da Vinci
References
Mechanisms (engineering) | Mechanical arm | Engineering | 2,152 |
1,565,386 | https://en.wikipedia.org/wiki/HD%20108147 | HD 108147, also known as Tupã, is a 7th magnitude star in the constellation of Crux in direct line with and very near to the bright star Acrux or Alpha Crucis. It is either a yellow-white or yellow dwarf (the line is arbitrary and the colour difference is only from classification, not real), slightly brighter and more massive than the Sun. The spectral type is F8 V or G0 V. The star is also younger than the Sun. Due to its distance, about 126 light years, it is too dim to be visible with unaided eye; with binoculars it is an easy target. However, due to its southerly location it is not visible in the northern hemisphere except for the tropics.
An extrasolar planet was detected orbiting it in 2000 by the Geneva Extrasolar Planet Search Team. This exoplanet is "a gas giant smaller than Jupiter that screams around its primary [star] in 11 days at only 0.1 AU." This is much closer than the orbit of Mercury in the Solar System.
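As a quick sanity check on the quoted orbit, the sketch below applies Kepler's third law in solar units; treating the planet's mass as negligible and the quoted figures as exact is an assumption made for illustration only.

```python
# Kepler's third law in solar units: M = a**3 / P**2,
# with a in AU, P in years, M in solar masses.
a_au = 0.1               # orbital separation quoted in the text
p_years = 11.0 / 365.25  # 11-day period converted to years

m_star = a_au**3 / p_years**2
print(f"implied stellar mass = {m_star:.2f} solar masses")  # about 1.10
```

The result, roughly 1.1 solar masses, is consistent with the description of the star as slightly more massive than the Sun.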
In December 2019, the International Astronomical Union announced that the star would bear the name Tupã, after the god of the Guarani peoples of Paraguay. The name was the result of a contest run in Paraguay by the Centro Paraguayo de Informaciones Astronómicas, as part of the IAU100 NameExoWorlds 2019 global contest.
It should not be confused with HD 107148, which also has an extrasolar planet discovered in 2006 in the Virgo constellation.
See also
List of extrasolar planets
References
External links
F-type main-sequence stars
G-type main-sequence stars
108147
060644
Crux
Planetary systems with one confirmed planet
Durchmusterung objects | HD 108147 | Astronomy | 363 |
12,297,789 | https://en.wikipedia.org/wiki/Wildlife%20of%20Mauritius | The wildlife of Mauritius consists of its flora and fauna. Mauritius is located in the Indian Ocean to the east of Madagascar. Due to its isolation, it has a relatively low diversity of wildlife; however, a high proportion of these are endemic species occurring nowhere else in the world. Many of these are now threatened with extinction because of human activities including habitat destruction and the introduction of non-native species. Some have already become extinct, most famously the dodo which disappeared in the 17th century.
At the 16th U.S.-Africa Business Summit, held May 6–9, 2024, Mauritius was held up as a model for African ecosystem conservation at a presentation by the Saint Brandon Conservation Trust in Dallas, Texas, at the international Corporate Council on Africa meetings that included six heads of state and government, 80 U.S. government officials, 16 African delegations and over 1,000 U.S. & African CEOs, investors and entrepreneurs.
Fauna
Mammals
Prehistorically, due to its isolated Indian Ocean location to the east of Madagascar, Mauritius had no endemic terrestrial mammals. The only mammals that could find their way to the island were bats and marine mammals.
The vast majority of mammalian species on the island have been introduced, either inadvertently or intentionally, by humans. These include the crab-eating macaque, rats, mice, the Asian house shrew, the small Indian mongoose, the tailless tenrec, the Javan rusa deer, wild boar, and Indian hares, as well as feral dogs and cats and farm livestock such as domestic ruminants and goats.
These introduced mammals have had a varied impact on the island's pristine fauna. Given that they were free from natural predators, they rapidly grew to large numbers and were soon preying on and competing with the local fauna.
Bats
There were once three native species of fruit bats on the island, two of which were endemic to Mauritius. Only the Mauritian flying fox remains on the island. The Rodrigues flying fox is now only found on the nearby island of Rodrigues, and the small Mauritian flying fox has gone extinct due to human related factors. Two insectivorous microbats are also present, the Mauritian tomb bat (Taphozous mauritianus) and the Natal free-tailed bat (Mormopterus acetabulosus).
On 7 November 2015, the government introduced a law authorising the culling of around 18,000 Mauritian fruit bats, despite protests, and despite the species' formal, legal protection and being ranked as a vulnerable species by the International Union for Conservation of Nature (IUCN). According to the IUCN, blaming the fruit bats for the "high" levels of damage caused to commercial fruit plantations is not substantiated, based on observations and research results. By July 2018, the IUCN again ranked the fruit bat, only this time as an endangered species, following the previous years' (2015–2017) government-sanctioned killings. Despite this elevated concern status, and still being afforded legal protection, October 2018 saw a reinstatement of the cull; this most recent cull called for all but 20% of the fruit bat population to be killed, leaving approximately 13,000 (of the estimated 65,000) fruit bats.
Birds
Over 100 bird species have been recorded in Mauritius. There are seven or eight surviving endemic species on the main island depending on taxonomy. The Mauritius grey white-eye is the most common of these, being widespread across the island including in man-made habitats. The others are less common and are mainly restricted to the Black River Gorges National Park in the south-west of the island. The Mauritius kestrel, Mauritius parakeet and pink pigeon all came close to extinction but are now increasing due to intensive conservation efforts.
Rodrigues has two further endemic species, the Rodrigues warbler and Rodrigues fody. Many small islands are named after birds, although some have seen their seabird colonies reduced or driven extinct by threats such as logging, poachers, or introduced species. The only two places the red-footed booby can be found in Mauritius are Rodrigues and St Brandon.
St Brandon islands are home to vast numbers of seabirds (Feare, 1984; Gardiner, 1907; Strauss in litt., 9.7.84). Staub and Gueho (1968) found a total of 26 species including the red-footed booby. Blue-faced boobies (Sula dactylatra) are found on Serpent Island and Ile du Nord. Large populations of sooty terns (Sterna fuscata) and white terns (Gygis alba) occur on Albatros, Ile Raphael and Siren islands. In 2010, a survey of seabirds of St Brandon was undertaken. "We estimated that 1 084 191 seabirds comprising seven breeding species and excluding non-breeders were present at the archipelago. ... Analyses of 30 different islets that make up the atoll showed that the seabird species mostly partitioned their use of islets based on islet size, with four species preferring larger islets and two species preferring smaller islets."
St Brandon has been proposed for a Marine Protected Area by the World Bank, has been identified as an Important Bird Area in Africa by BirdLife International, as a Marine Important Bird Area under the Nairobi Convention, and a Key Biodiversity Area by the CEPF. In 2011, the Ministry of Environment & Sustainable Development issued the "Mauritius Environment Outlook Report" which stated that "There is an urgent need to allocate more resources for a closer monitoring of the environmental assets of the islands." It further recommended that St Brandon be declared a marine protected area. In the President's Report of the Mauritian Wildlife Foundation dated March 2016, St Brandon is declared an official MWF project in order to promote the conservation of the atoll.
A wide variety of birds have been introduced into Mauritius. These include some of the most common and conspicuous birds of the islands, such as the common myna, red fody, red-whiskered bulbul and zebra dove. The common myna is becoming a pest due to its well-documented habit of displacing smaller bird species from their habitat and destroying their young. The mynas were introduced for commercial reasons, primarily to help control the locusts which eat sugar cane leafage. Instead, they prey on small indigenous lizards, which are easier to catch because of the basking their metabolism requires; the lizards have become the myna's primary source of food. Because of this, an imbalance is being created: insects that the lizards would normally prey on go uneaten, since the myna cannot crawl under rocks or forage in dense grass and vegetation.
Reptiles
A number of endemic reptiles are found in Mauritius, particularly on Round Island, that were once found in the main island. These include the Mauritius ornate day gecko, Bojer's skink, keel-scaled boa and Mauritius lowland forest day gecko.
Exotic reptiles include the giant Madagascar day gecko, four-clawed gecko, spotted house gecko, common house gecko, oriental garden lizard, green iguana, panther chameleon, Indian wolf snake and the brahminy blind snake.
Five giant tortoises of the genus Cylindraspis (the domed Mauritius giant tortoise, domed Rodrigues giant tortoise, saddle-backed Rodrigues giant tortoise, saddle-backed Mauritius giant tortoise, and the Réunion giant tortoise) formerly inhabited the islands of Mauritius, Rodrigues, and Réunion but are now extinct. As the largest terrestrial herbivores, they performed an important role in the natural Mauritian ecosystem and in the regeneration of forests. For this reason, the Aldabra giant tortoise from Aldabra and the radiated tortoise from the neighboring island of Madagascar have been introduced to several conservation areas of Mauritius, such as the Pamplemousses gardens and various patches of remaining indigenous forest.
The critically endangered hawksbill turtle (17% of the archipelago) and the endangered green turtle (75% of the archipelago) visit St. Brandon, with a focus on L'Île Coco, which is critically important for the visiting hawksbill turtle. The leatherback turtle is only very rarely seen. The Cargados Carajos shoals are of national as well as international importance, being the very last important turtle nesting area in Mauritius.
Freshwater fauna
In the 1950s, guppies, locally known as millions, abounded in Mauritian rivers. These little fish, often found in brackish water, now appear to be outnumbered by swordtails, introduced in the 1960s. Bigger fish like the carp, koi and gourami have also dwindled since the introduction of tilapia in the 1950s. A once-popular freshwater fish was the damecéré (known as carpe de Maillard in French), introduced by Monsieur Céré, an administrator of Pamplemousses garden during the French period. These silver-tinted fish were common in ponds and lakes in the 1950s but are now rarely seen. They were often offered for sale at the Port-Louis Central Market and by street vendors.
Recently the berri rouge (a hybrid of the blue and Nile tilapia) has been introduced with a view to supplementing the diet of the local population with protein. These fish are related to the tilapia but are somewhat rosy coloured; they are mostly bred on aquaculture farms. Two types of catfish (wels and walking catfish) are also newcomers, and were probably dumped into local waters by aquarists. These fish are proving to be a nuisance and are disturbing the ecosystem of Mauritian rivers.
All the above fish have been introduced. Indigenous fish are few; among them are two species of goby, Awaous commersoni and Awaous pallidus, locally known as bichiques. They are extremely voracious fish and have been observed to swallow fish almost their own size. The adults are found mostly near estuaries, while the younger fish prefer the lower course of rivers. Seldom active, they lie in wait to pounce on unsuspecting prey. Gobies go to lay their eggs in the sea, and the larvae swim upstream around December, when they are caught and eaten as a delicacy by the local population. Their numbers, however, seem to have considerably dwindled. (There is another theory that gobies do not go to the sea, but that their eggs are swept into the ocean by water currents; the larvae swim upstream in great numbers during the new moon.)
Another indigenous fish is the mudskipper, locally known as the cabot, which is very rare.
A fish that can live in both sea and fresh water is the milkfish. Known locally as loubine, it is found in fairly large numbers near estuaries at particular times of the year. The young fish are often caught and eaten fried. However, this practice should be discouraged because these fish can grow very fast into adults weighing over 25 kg. This is perhaps the fish that the Dutch saw when they first landed in Mauritius in 1598; as reported by historians: "they saw many fish in the streams around the coast, and some large birds which dived after the fish and ate them."
The mullet also lives in shoals near estuaries but goes up rivers in search of food. It is sometimes caught by fishermen on river banks using bread as bait. However, it is a notoriously difficult fish to catch.
An easier game for the freshwater fisherman is perhaps the natal moony, locally known as line, which can also be fished along rivers, notably the Grand River North West.
Another indigenous dweller of Mauritian rivers and lakes is the eel. It is not often seen, preferring to stay in crevices or hide under rocks. Eels spend most of their time in fresh water but return to the sea, where they came from, to reproduce. Mauritian eels, like those from Madagascar, Réunion, the Seychelles and East Africa, have their breeding grounds in the Nazareth Trough, an ocean trench situated between longitudes 60-65°E and latitudes 10-20°S. Eels can wriggle across land, which perhaps explains why they are found in some isolated ponds of Mauritius. There are three varieties of eels on the island: two of them are also found in Madagascar, Réunion and Africa, while the third is present in the Seychelles. Most probably, the commonest eel is the marbled eel. Eels can grow quite big if they cannot find a way back to the sea; this perhaps explains why some very big eels have been caught in Mauritius, notably at La Ferme reservoir. In Rodrigues an eel more than 2 metres long was caught in a spring in the heart of a forest at Cascade-Pigeon; it is believed that the eel was 100 years old. There is a theory that eels play an important role in ecosystems by preventing springs from drying up. All three Mauritian species take on a silvery colour when they go back to the sea.
Shrimps are common along the banks of most rivers. There are about six varieties of shrimp, and some of them are endemic. One type is the camaron. This shrimp has a transparent body speckled with tiny reddish-brown or black spots; the female, smaller than the male, has two pincers of equal length but thinner build. Another type is the crevette chevaquine, which prefers to live near estuaries. Four varieties are endemic: the chevrette sonz, Caridina mauritii, the betangue and the petit chevrette.
Freshwater crabs are often found in waterways close to the sea. During the reproductive period, the adults gather on some riverbanks near the coast. The eggs are swept into the sea by water currents, and on hatching the young are carried into the river or coastal pond by the tides. The crabs feed mostly on algae and other vegetable matter.
Soft shell terrapins with long necks have been noticed in some rivers. They are of Chinese origin and were apparently introduced in a river of the Moka District about a century ago; these reptiles are considered to be aggressive and are fast invading other rivers of the island.
Marine life
Fish
The marine fish of Mauritius include holocentrids (Myripristis berndti, Neoniphon sammara, Sargocentron spiniferum and Sargocentron diadema), mullet (Mugil cephalus and Crenimugil crenilabis), rabbitfish (Siganus sutor and Siganus argenteus), groupers (Cephalopholis sonnerati, Cephalopholis argus, Epinephelus fasciatus, Epinephelus hexagonatus, Epinephelus lanceolatus, Epinephelus merra, Epinephelus morio, Epinephelus tukula and Variola louti), seabream (Rhabdosargus sarba), jacks (Caranx ignobilis, Elagatis bipinnulata and Trachinotus baillonii), goatfish (Mulloidichthys vanicolensis, Parupeneus barberinus and Parupeneus cyclostomus), butterflyfish (Chaetodon trifasciatus, Chaetodon kleinii, Chaetodon auriga, Hemitaurichthys zoster and Forcipiger flavissimus), Moorish idol (Zanclus cornutus), angelfish (Pomacanthus semicirculatus), cardinalfish (Ostorhinchus apogonoides and Cheilodipterus macrodon), emperors (Monotaxis grandoculis, Gnathodentex aureolineatus, Lethrinus mahsena, Lethrinus nebulosus and Lethrinus harak), hawkfish (Cirrhitichthys oxycephalus, Cirrhitops mascarenensis and Paracirrhites forsteri), damsels (Abudefduf sparoides, Abudefduf margariteus, Abudefduf sordidus, Dascyllus abudafur, Pomacentrus pikei, Pomacentrus caeruleus, Stegastes limbatus, Stegastes lividus and Stegastes pelicieri), clownfish (Amphiprion chrysogaster, Amphiprion clarkii and Amphiprion allardi), tangs (Acanthurus nigrofuscus, Acanthurus triostegus, Ctenochaetus striatus, Paracanthurus, Zebrasoma gemmatum and Naso unicornis), snappers (Etelis carbunculus, Etelis coruscans and Lutjanus kasmira), jobfish (Aprion and Pristipomoides filamentosus), parrotfish (Chlorurus cyanescens, Scarus scaber and Scarus ghobban), mahi mahi (Coryphaena hippurus), scombrids (Thunnus albacares, Katsuwonus pelamis and Acanthocybium solandri), barracudas (Sphyraena barracuda and Sphyraena acutipinnis), natal moony (Monodactylus argenteus), boxfish (Ostracion meleagris and Ostracion trachys), pufferfish (Arothron nigropunctatus, Arothron hispidus and Canthigaster valentini), porcupinefish (Diodon hystrix, Diodon liturosus and Diodon holocanthus), triggerfish (Balistoides conspicillum, Balistapus, Pseudobalistes fuscus, Odonus niger, Rhinecanthus aculeatus and Sufflamen chrysopterum), blennies (Alticus monochrus), gobies (Nemateleotris magnifica, Istigobius decoratus and Valenciennea strigata), catfish (Plotosus lineatus), anthias (Pseudanthias squamipinnis and Pseudanthias evansi), wrasses (Coris aygula, Bodianus anthioides, Bodianus macrourus, Cheilinus trilobatus, Cheilinus chlorourus, Halichoeres hortulanus, Macropharyngodon bipartitus and Labroides dimidiatus), tilefish (Malacanthus latovittatus), fusiliers (Caesio caerulaurea and Caesio teres), eels (Gymnothorax griseus and Myrichthys maculosus), scorpionfish (Pterois antennata, Rhinopias eschmeyeri, Scorpaenopsis cirrosa and Synanceia verrucosa), anglerfish (Antennarius commerson and Antennarius maculatus), seahorses (Hippocampus histrix), cornetfish (Fistularia commersonii), trumpetfish (Aulostomus chinensis), needlefish (Tylosurus crocodilus), marlins (Istiompax indica, Makaira mazara, Kajikia audax and Istiophorus platypterus), swordfish (Xiphias gladius), rays (Aetobatus narinari and Mobula alfredi), sharks (Carcharhinus amblyrhynchos, Carcharhinus leucas, Carcharhinus limbatus, Carcharhinus melanopterus, Galeocerdo cuvier, Rhincodon typus, Sphyrna lewini and Sphyrna mokarran), remoras (Echeneis naucrates and Remora remora) and many more.
Other marine life
Crustaceans include the shore crab (Percnon guinotae), natal lightfoot crab (Grapsus tenuicrustatus), ghost crab (Ocypode pallidula and Ocypode ceratophthalmus), hermit crabs (Dardanus guttatus and Calcinus elegans), spiny lobsters (Panulirus penicillatus, Panulirus longipes and Panulirus versicolor), mantis shrimp (Odontodactylus scyllarus) and shrimp (Stenopus hispidus, Anyclocaris brevicarpalis, Lysmata amboinensis, Urocaridella antonbruunii and Rhynchocinetes durbanensis).
Cephalopods include the squid (Sepioteuthis lessoniana) and the octopus (Octopus cyanea and Octopus sp.).
Echinoderms include the brittle star (Ophiolepis superba), starfish (Fromia milleporella, Fromia monilis, Nardoa variolata, Culcita schmideliana and Acanthaster planci), urchins (Echinodiscus auritus, Colobocentrotus atratus, Echinometra mathaei, Diadema and Echinothrix diadema) and sea cucumbers (Holothuria leucospilota and Actinopyga echinites).
Marine gastropods include cowries (Cypraea, Mauritia histrio, Monetaria caputserpentis and Monetaria annulus), cones (Conus), ranellids (Charonia tritonis, Monoplex aquatilis and Monoplex pilearis) and conchs (Gibberulus gibberulus, Turbinella pyrum, Lambis lambis, Lambis truncata, Strombus sinuatus, Strombus plicatus and Harpago arthritica).
Bivalves include the black-lip pearl oyster (Pinctada margaritifera), prickly pen shell (Pinna muricata), tiger lucine (Codakia tigerina) and giant clams (Tridacna squamosa, Tridacna squamosina, Tridacna gigas, Tridacna rosewateri and Tridacna maxima).
Cnidarians include the jellyfish (Chironex fleckeri and Thysanostoma loriferum), siphonophores (Physalia physalis and Porpita porpita), anemones (Heteractis magnifica), coral (Acropora, Pocillopora damicornis, Pocillopora eydouxi, Porites lutea, Platygyra daedalea, Galaxea fascicularis and Pavona cactus) and gorgons (Paramuricea and Subergorgia mollis).
Butterflies
About 39 butterfly species are known from Mauritius and Rodrigues. Seven of these are endemic.
Non-marine molluscs
Flora
Indigenous flora
Over 700 native species of flowering plant are found in Mauritius and nearly half of these (246) are endemic. Rainforest formerly covered most of the island with palm savannah in drier regions and areas of heathland in the mountains. Most of this natural vegetation has been destroyed and what remains is threatened by the spread of introduced plants.
Native trees include eleven surviving species of Mauritius ebony (Diospyros tesselaria, Diospyros egrettarum, Diospyros revaughanii, Diospyros melanida, Diospyros leucomelas and several others), takamaka (Calophyllum tacamahaca), manglier (Sideroxylon cinereum, Sideroxylon puberulum, Sideroxylon grandiflorum and Sideroxylon boutonianum), ox tree (Polyscias maraisiana), bois blanc (Polyscias rodriguesiana), bois de natte (Labourdonnaisia calophylloides, Labourdonnaisia glauca and Labourdonnaisia revoluta), makak (Mimusops balata and Mimusops petiolaris), bois puant (Foetidia mauritiana), bois d'olive (Cassine orientalis), bois de judas (Cossinia pinnata), laffouche (Ficus densifolia, Ficus reflexa, Ficus rubra and more), bois de clou (Eugenia lucida and Eugenia kanakana), arbre ferney (Eugenia bojeri), bois papaye (Polyscias gracilis), mapou tree (Cyphostemma mappia), bois de rat (Tarenna), baume (Psiadia arguta and Psiadia rodriguesiana), hop bush (Dodonaea viscosa), bois binjouin (Terminalia bentzoe), bois de pipe (Hilsenbergia petiolaris) and a range of other indigenous and endemic tree species.
The palm species that are indigenous to the island of Mauritius are Acanthophoenix rubra (possibly other species), Dictyosperma album (var. album & conjugatum), Hyophorbe lagenicaulis, Hyophorbe vaughanii, Hyophorbe verschaffeltii, Latania loddigesii, Corypha umbraculifera and Tectiphiala ferox.
Indigenous stipes include the cordyline (Cordyline mauritiana), bois de chandelle (Dracaena marginata) and chandelle (Dracaena concinna).
Mauritius is also home to the rarest palm in the world, Hyophorbe amaricaulis, with only one specimen. It is found in the SSR Botanical Garden of Curepipe.
Mauritius is the home of a large number of endemic species of Pandanus (screwpine or vacoas), namely: Pandanus carmichaelii, Pandanus barkleyi, Pandanus conglomeratus, Pandanus drupaceus, Pandanus eydouxia, Pandanus glaucocephalus, Pandanus iceryi, Pandanus incertus, Pandanus macrostigma, Pandanus microcarpus, Pandanus obsoletus, Pandanus palustris, Pandanus prostratus, Pandanus pseudomontanus, Pandanus pyramidalis, Pandanus rigidifolius, Pandanus sphaeroides, Pandanus spathulatus, Pandanus vandermeeschii and Pandanus wiehei. The common vacoas sac (Pandanus utilis) of Madagascar has also been introduced and propagated in Mauritius, and it has now naturalised.
The national flower of Mauritius is boucle d'oreille (Trochetia boutoniana), which is now restricted to a single mountain.
Other Trochetia species are endemic to Mauritius. They are Trochetia parviflora, Trochetia uniflora, Trochetia triflora and Trochetia blackburniana.
Endemic hibiscus species include the mandrinette (Hibiscus fragilis), mandrinette blanc (Hibiscus genevei), hibiscus des mascareignes (Hibiscus boryanus) and the mandrinette de rodrigues (Hibiscus liliflorus).
Endemic flowers include the dombeya (Dombeya acutangula and Dombeya rodriguesiana), bois tambour (Tambourissa cocottensis, Tambourissa amplifolia, Tambourissa peltata, Tambourissa pedicellata, Tambourissa quadrifa and Tambourissa tau), the Mauritius bloody bell flower (Nesocodon mauritianus), barleria (Barleria observatrix), bois banane (Gaertnera psychotrioides, Gaertnera hirtiflora and Gaertnera longifolia), bois corail (Chassalia coriacea and Chassalia boryana), lys du pays (Crinum mauritianum), orchidee (Oeoniella, Oeonia, etc.) and many more.
Introduced and invasive plants
Introduced plants that have become invasive include "Chinese" (actually Brazilian) guava (Psidium cattleianum), traveller's trees (Ravenala madagascariensis) and Lantana camara.
For the purpose of landscaping and gardening in Mauritius, exotics have traditionally been used, and many of these have spread into the surrounding vegetation. Bougainvillea (Bougainvillea glabra and Bougainvillea spectabilis) and frangipani (Plumeria obtusa and Plumeria rubra) are still among the most commonly planted ornamental species. Another species is the royal poinciana, which is also common.
However, for urban and roadside landscaping Mauritius is beginning to turn to its many varied and unique endemic plant species. Many endemic species, such as bottle palms, the mapou tree and the ox tree, are now being used as ornamentals both in public landscaping and in private gardens across Mauritius. The African baobab is rare but still planted in gardens.
Conservation
Conservation work in Mauritius is carried out by the Forestry Service, National Parks and Conservation Service (NPCS) and by non-governmental organizations such as the Mauritian Wildlife Foundation (MWF), the Durrell Wildlife Conservation Trust (DWCT) and the Saint Brandon Conservation Trust. Efforts to preserve native flora and fauna have included captive breeding, habitat restoration and the eradication of introduced species.
Protection involves three national parks, nature reserves, a range of other protected areas, and botanical gardens for education and public outreach. Black River Gorges National Park covers of land and another is protected by nature reserves such as Round Island and Île aux Aigrettes.
Flora and fauna of St. Brandon
Protected areas
National parks
Black River Gorges National Park (also part of a greater Biosphere Reserve which includes the Gerald Durrell Endemic Wildlife Sanctuary)
Bras d'Eau National Park
Islets National Park
Mainland nature reserves
Macchabée-Bel Ombre Nature Reserve, the largest reserve (3,611 ha), formed from six constituent reserves in 1980.
Corps de Garde Nature Reserve
Le Pouce Nature Reserve
Perrier Nature Reserve
Bois Sec Nature Reserve
Gouly Pere Nature Reserve
Cabinet Nature Reserve
Les Mares Nature Reserve
Grande Montagne Nature Reserve, Rodrigues (20 ha)
Anse Quitor Nature Reserve, Rodrigues (34 ha)
Offshore islets nature reserves
Ile aux Aigrettes Nature Reserve
Ile Plate (Flat Island) Nature Reserve
Ile Ronde (Round Island) Nature Reserve
Ilot Gabriel Nature Reserve
Coin de Mire (Gunner's Quoin) Nature Reserve
Ilot Marianne Nature Reserve
Ile aux Serpents Nature Reserve
Ile aux Cocos Nature Reserve, Rodrigues (14 ha)
Ile aux Sables Nature Reserve, Rodrigues (8 ha)
Marine parks
Blue Bay Marine Park
Botanical gardens
Pamplemousses Botanical Garden
Monvert Nature Park
Vallée d'Osterlog Botanical Garden
Curepipe Botanic Gardens
Other protected areas
Ebony Forest Chamarel
François Leguat Giant Tortoise and Cave Reserve, Rodrigues
Vallee de Ferney Conservation Trust
Rivulet Terre Rouge Estuary Bird Sanctuary
La Vallee des Couleurs Nature Park
St Brandon
See also
Albatross Island, St. Brandon
Avocaré Island
Cargados Carajos
Carl G. Jones
Conservation status
Constitution of Mauritius
Emphyteutic lease
France Staub
Geography of Mauritius
Gerald Durrell
History of Mauritius
Hope Spots: marine areas rich in biodiversity
Île Raphael
Île Verronge
Important marine mammal area
Islets of Mauritius
L'Île Coco
L'île du Gouvernement
L'île du Sud
Law of the sea
List of mammals of Mauritius
List of marine fishes of Mauritius
List of national parks of Mauritius
Marine park
Marine spatial planning
Mascarene Islands
Mauritian Wildlife Foundation
Mauritius
Outer Islands of Mauritius
Permanent grant
Raphael Fishing Company
Special Protection Area
St. Brandon
The Saint Brandon Conservation Trust
References
Further reading
Ellis, Royston; Richards, Alexandra & Schuurman, Derek (2002) Mauritius, Rodrigues, Réunion: the Bradt Travel Guide, 5th edition, Bradt Travel Guides Ltd, UK.
Mauritian Wildlife Foundation Accessed 13 November 2007.
Sinclair, Ian & Langrand, Olivier (1998) Birds of the Indian Ocean Islands, Struik, Cape Town.
Poissons de l'ile Maurice, EOI, Claude Michel (2004).
Notre Faune, Claude Michel.
Atlas des poissons et crustacés d'eau douce de la Reunion, P.Keith et al. (1999).
Birds of the Mascarenes and St. Brandon, France Staub (1976).
External links
Marine Protected Areas by Project Regeneration
Marine Protection Atlas - an online tool from the Marine Conservation Institute that provides information on the world's protected areas and global MPA campaigns. Information comes from a variety of sources, including the World Database on Protected Areas (WDPA) and many regional and national databases.
Marine protected areas - viewable via Protected Planet, an online interactive search engine hosted by the United Nations Environment Programme's World Conservation Monitoring Center (UNEP-WCMC).
Biota of Mauritius
Mauritius | Wildlife of Mauritius | Biology | 7,032 |
763,256 | https://en.wikipedia.org/wiki/Pier%20Luigi%20Nervi | Pier Luigi Nervi (21 June 1891 – 9 January 1979) was an Italian engineer and architect. He studied at the University of Bologna, graduating in 1913. Nervi taught as a professor of engineering at Rome University from 1946 to 1961 and is known worldwide as a structural engineer and architect, especially for his innovative use of reinforced concrete in numerous notable thin shell structures.
Biography
Nervi was born in Sondrio and attended the Civil Engineering School of Bologna from which he graduated in 1913; his formal education was quite similar to that experienced today by Italian civil engineering students.
After graduating he joined the Society for Concrete Construction and, during World War I from 1915 to 1918, he served in the Corps of Engineering of the Italian Army.
From 1961 to 1962 he was the Norton professor at Harvard University.
Civil engineering works
Nervi began practicing civil engineering after 1923. His projects in the 1930s included several airplane hangars that were important for his development as an engineer. A set of hangars in Orvieto (1935) was built entirely out of reinforced concrete, and a second set in Orbetello and Torre del Lago (1939) improved the design by using a lighter roof, precast ribs, and a modular construction method.
During the 1940s he developed ideas for reinforced concrete which helped in the rebuilding of many buildings and factories throughout Western Europe, and even designed and created a boat hull that was made of reinforced concrete as a promotion for the Italian government.
Nervi also stressed that intuition should be used as much as mathematics in design, especially with thin shell structures. He borrowed from both Roman and Renaissance architecture while applying ribbing and vaulting to improve strength and eliminate columns. He combined simple geometry and prefabrication to innovate design solutions.
Engineer and architect
Nervi was educated and practised in Italy as an ingegnere edile (translated as "building engineer"). At the time (and to a lesser degree also today), a building engineer might also be considered an architect. After 1932, his aesthetically pleasing designs were used for major projects. This reflected the boom in European construction projects of the period that used concrete and steel, during which architectural considerations took a step back in favour of the potential of engineering. Nervi successfully made reinforced concrete the main structural material of the day. He expounded his ideas on building in four books (see below) and many learned papers.
Archeological excavations suggested that he may bear some responsibility for the Flaminio stadium's foundations passing through ancient Roman tombs. His work was also part of the architecture event in the art competition at the 1936 Summer Olympics.
International projects
Most of his built structures are in his native Italy, but he also worked on projects abroad. Nervi's first project in the United States was the George Washington Bridge Bus Station; he designed its roof, which consists of triangular pieces that were cast in place. This building is still used today by over 700 buses and their passengers.
Noted works
Stadio Artemio Franchi, Florence (1931)
Ugolino Golf House, Impruneta, Italy (1934) (collaborating with Gherardo Bosio)
Torino Esposizioni, Turin, Italy (1949).
UNESCO headquarters, Paris (1950) (collaborating with Marcel Breuer and Bernard Zehrfuss)
The Pirelli Tower, Milan (1950) (collaborating with Gio Ponti)
Palazzo dello sport EUR (now PalaLottomatica), Rome (1956)
Palazzetto dello sport, Rome (1958)
Stadio Flaminio, Rome (1957)
Palazzo del Lavoro, Turin (1961)
Palazzetto dello sport, Turin (1961)
Australia Square tower, Sydney (1961–1967). Architect: Harry Seidler & Associates
Sacro Cuore (Bell Tower), Firenze (1962)
Paper Mill, Mantua, Italy (1962)
George Washington Bridge Bus Station, New York City (1963)
Tour de la Bourse, Montreal (1964) (collaborating with Luigi Moretti)
Leverone Field House at Dartmouth College
Sede Centrale della Banca del Monte di Parma, Parma (1968, in collaboration with Gio Ponti, Antonio Fornaroli and others)
Edmund Barton Building (also published as Trade Group Offices), Canberra (1970), Australia. Architect: Harry Seidler & Associates
MLC Centre, Sydney (1973) Architect: Harry Seidler & Associates
Thompson Arena at Dartmouth College (1973 - 1974)
Cathedral of Saint Mary of the Assumption, San Francisco, California (1967) (collaborating with Pietro Belluschi)
Paul VI Audience Hall, Vatican City (1971)
Chrysler Hall, & Norfolk Scope Arena in Norfolk, Virginia (1971)
Australian Embassy, Paris (1973). Consulting engineer; architect: Harry Seidler & Associates
Good Hope Centre, Cape Town (1976), by Studio Nervi: an exhibition hall and conference centre. The exhibition hall comprises an arch with tie-beam on each of the four vertical facades and two diagonal arches supporting two intersecting barrel-like roofs, which in turn were constructed from pre-cast concrete triangular coffers with in-situ concrete beams on the edges.
Awards
Pier Luigi Nervi was awarded Gold Medals by the Institution of Structural Engineers in the UK, the American Institute of Architects (AIA Gold Medal 1964) and the RIBA.
In 1957, he received the Frank P. Brown Medal of The Franklin Institute and the Wilhelm Exner Medal.
Publications
Scienza o arte del costruire? Bussola, Rome, 1945.
Costruire correttamente, Hoepli, Milan, 1954.
Structures, Dodge, New York, 1958.
Aesthetics and Technology in Building (The Charles Eliot Norton Lectures, 1961-62). Cambridge, Massachusetts, Harvard University Press, 1965.
See also
Thin-shell structure
References
External links
Ing. Nervi Pier Luigi. Fascismo - Architettura - Arte / Arte fascista web site
Pierluigi Nervi e l'arte di costruire, Fausto Giovannardi, Borgo San Lorenzo (Florence) Italy 2008
NerViLab at Sapienza University, Rome
Pier Luigi Nervi Project
http://www.silvanaeditoriale.it/catalogo/prodotto.asp?id=3015, catalogue to the international travelling exhibition "Pier Luigi Nervi Architecture as Challenge, edited by Cristiana Chiorino and Carlo Olmo, Milan, 2010
1891 births
1979 deaths
People from Sondrio
IStructE Gold Medal winners
Chartered designers
20th-century Italian architects
Italian civil engineers
Structural engineers
Modernist architects from Italy
Concrete shell structures
University of Bologna alumni
Harvard University faculty
Recipients of the Royal Gold Medal
Honorary members of the Royal Academy
20th-century Italian engineers
Olympic competitors in art competitions
Italian military personnel of World War I
Recipients of the AIA Gold Medal
Honorary Fellows of the American Institute of Architects | Pier Luigi Nervi | Engineering | 1,425 |
15,214,974 | https://en.wikipedia.org/wiki/HBAP1 | Hemoglobin, alpha pseudogene 1, also known as HBAP1, is a human gene.
References
Further reading
Pseudogenes | HBAP1 | Chemistry | 32 |
14,656,113 | https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282004%29 | This page summarizes projects that brought more than of new liquid fuel capacity to market with the first production of fuel beginning in 2004. This is part of the Wikipedia summary of Oil Megaprojects—see that page for further details. 2004 saw 24 projects come on stream with an aggregate capacity of when full production was reached (which may not have been in 2004).
Quick Links to Other Years
Detailed Project Table for 2004
See also the 2004 world oil market chronology
References
Oil megaprojects
Oil fields
Proposed energy projects
2004 in technology | Oil megaprojects (2004) | Engineering | 123 |
177,696 | https://en.wikipedia.org/wiki/Energy%20economics | Energy economics is a broad scientific subject area which includes topics related to the supply and use of energy in societies. Considering the cost of energy services and the associated value gives economic meaning to the efficiency at which energy can be produced. Energy services can be defined as functions that generate and provide energy to the “desired end services or states”. The efficiency of energy services is dependent on the engineered technology used to produce and supply energy. The goal is to minimise the energy input required (e.g. kWh, MJ; see Units of Energy) to produce the energy service, such as lighting (lumens), heating (temperature) and fuel (natural gas). The main sectors considered in energy economics are transportation and building, although it is relevant to a broad scale of human activities, including households and businesses at a microeconomic level and resource management and environmental impacts at a macroeconomic level.
History
Energy related issues have been actively present in economic literature since the 1973 oil crisis, but have their roots much further back in the history. As early as 1865, W.S. Jevons expressed his concern about the eventual depletion of coal resources in his book The Coal Question. One of the best known early attempts to work on the economics of exhaustible resources (incl. fossil fuel) was made by H. Hotelling, who derived a price path for non-renewable resources, known as Hotelling's rule.
Development of energy economics theory over the last two centuries can be attributed to three main economic subjects – the rebound effect, the energy efficiency gap and more recently, 'green nudges'.
The Rebound Effect (1860s to 1930s)
While energy efficiency is improved with new technology, expected energy savings are less than proportional to the efficiency gains due to behavioural responses. There are three behavioural sub-theories to be considered: the direct rebound effect, which anticipates increased use of the energy service that was improved; the indirect rebound effect, which considers an increased income effect created by the savings, allowing for increased energy consumption; and the economy-wide effect, which results from the change in energy prices brought about by the newly developed technology improvements.
The Energy Efficiency Gap (1980s to 1990s)
Suboptimal investment in the improvement of energy efficiency resulting from market failures/barriers prevents the optimal use of energy. From an economic standpoint, a rational decision-maker with perfect information will optimally choose between the trade-off of initial investment and energy costs. However, due to uncertainties such as environmental externalities, the optimal potential energy efficiency is not always achieved, thus creating an energy efficiency gap.
Green Nudges (1990s to Current)
While the energy efficiency gap considers economic investments, it does not consider behavioural anomalies in energy consumers. Growing concerns surrounding climate change and other environmental impacts have led to what economists would describe as irrational behaviours being exhibited by energy consumers. A contribution to this has been government interventions, coined "green nudges" by Thaler and Sunstein (2008), such as feedback on energy bills. Now that it is realised that people do not behave rationally, research into energy economics is more focused on behaviours and on influencing decision-making to close the energy efficiency gap.
Economic factors
Due to diversity of issues and methods applied and shared with a number of academic disciplines, energy economics does not present itself as a self-contained academic discipline, but it is an applied subdiscipline of economics. From the list of main topics of economics, some relate strongly to energy economics:
Computable general equilibrium
Econometrics
Environmental economics
Finance
Industrial organization
Input–output model
Microeconomics
Macroeconomics
Operations research
Resource economics
Energy economics also draws heavily on results of energy engineering, geology, political sciences, ecology etc. Recent focus of energy economics includes the following issues:
Climate change and climate policy
Demand response
Elasticity of supply and demand in energy market
Energy and economic growth
Energy derivatives
Energy elasticity
Energy forecasting
Energy markets and electricity markets - liberalisation, (de- or re-) regulation
energy infrastructure
Environmental policy
Sustainability
Some institutions of higher education (universities) recognise energy economics as a viable career opportunity, offering it as part of their curricula. The University of Cambridge, the Massachusetts Institute of Technology and the Vrije Universiteit Amsterdam are the top three research universities, and Resources for the Future is the top research institute. There are numerous other research departments, companies, and professionals offering energy economics studies and consultations.
International Association for Energy Economics
International Association for Energy Economics (IAEE) is an international non-profit society of professionals interested in energy economics. IAEE was founded in 1977, during the period of the energy crisis. IAEE is incorporated under United States laws and has headquarters in Cleveland.
The IAEE operates through a 17-member Council of elected and appointed members. Council and officer members serve in a voluntary position.
IAEE has over 4,500 members worldwide (in over 100 countries). There are more than 25 national chapters, in countries where membership exceeds 25 individual members. Some of the regularly active national chapters of the IAEE are: USAEE - United States; GEE - Germany; BIEE - Great Britain; AEE - France; AIEE - Italy.
Publications
The International Association for Energy Economics publishes three publications throughout the year:
The Energy Journal, a quarterly academic publication
the Economics of Energy & Environmental Policy, a semi-annual publication
the Energy Forum
Conferences
The IAEE conferences address critical issues of vital concern and importance to governments and industries and provide a forum where policy issues are presented, considered and discussed at both formal sessions and informal social functions.
IAEE typically holds five Conferences each year. The main annual conference for IAEE is the IAEE International Conference which is organized at diverse locations around the world. From the year 1996 on these conferences have taken place (or will take place) in the following cities:
2021 - Online Conference
2020 - No Conference
2019 - Montreal, Canada
2018 - Groningen, The Netherlands.
2017 - Singapore.
2016 - Bergen, Norway.
2015 - Antalya, Turkey.
2014 - New York City, United States.
2013 - Daegu, South Korea.
2012 - Perth, Australia (35th).
2011 - Stockholm, Sweden.
2010 - Rio, Brazil.
2009 - San Francisco, United States.
2008 - Istanbul, Turkey.
2007 - Wellington, New Zealand.
2006 - Potsdam, Germany.
2005 - Taipei, China (Taipei).
2003 - Prague, Czech Republic.
2002 - Aberdeen, Scotland.
2001 - Houston, Texas.
2000 - Sydney, Australia.
1999 - Rome, Italy.
1998 - Quebec, Canada.
1997 - New Delhi, India.
1996 - Budapest, Hungary.
Other annual IAEE conferences are the North American Conference and the European Conference.
IAEE Awards
The Association's Immediate Past President annually chairs the Awards committee that selects the award recipients.
Outstanding Contributions to the Profession
Outstanding Contributions to the IAEE
The Energy Journal Campbell Watkins Best Paper Award
Economics of Energy & Environmental Policy Best Paper Award
Journalism Award
Sources, links and portals
Leading journals of energy economics include:
Energy Economics
The Energy Journal
Resource and Energy Economics
There are several other journals that regularly publish papers in energy economics:
Energy – The International Journal
Energy Policy
International Journal of Global Energy Issues
Journal of Energy Markets
Utilities Policy
Much progress in energy economics has been made through the conferences of the International Association for Energy Economics, the model comparison exercises of the (Stanford) Energy Modeling Forum and the meetings of the International Energy Workshop.
IDEAS/RePEc has a collection of recent working papers.
Leading energy economists
The top 20 leading energy economists as of December 2016 are:
Martin L. Weitzman
Lutz Kilian
Robert S. Pindyck
David M. Newbery
Kenneth J. Arrow
Richard S.J. Tol
Severin Borenstein
Richard G. Newell
Frederick (Rick) van der Ploeg
Michael Greenstone
Richard Schmalensee
James Hamilton
Robert Norman Stavins
Ilhan Ozturk
Paul Joskow
Ramazan Sari
Jeffrey A. Frankel
David Ian Stern
Kenneth S. Rogoff
Rafal Weron
Michael Gerald Pollitt
Ugur Soytas
See also
Energy economists (category)
Cost of electricity by source
Ecological economics
Embodied energy
Energy accounting
Energy & Environment
Energy balance
Energy policy
Energy subsidy
EROEI
Industrial ecology
International Energy Agency
List of energy storage projects
List of energy topics
Social metabolism
Sustainable energy
Thermoeconomics
References
Further reading
How to Measure the True Cost of Fossil Fuels March 30, 2013 Scientific American
Bhattacharyya, S. (2011). Energy Economics: Concepts, Issues, Markets, and Governance. London: Springer-Verlag limited.
Herberg, Mikkal (2014). Energy Security and the Asia-Pacific: Course Reader. United States: The National Bureau of Asian Research.
Zweifel, P., Praktiknjo, A., Erdmann, G. (2017). Energy Economics - Theory and Applications . Berlin, Heidelberg: Springer-Verlag.
External links
United States Association for Energy Economics
UIA - International Association for Energy Economics (IAEE)
The Distinguished Lecturer Series
IAEE Newsletter
Environmental social science
Economics
Resource economics | Energy economics | Physics,Environmental_science | 1,853 |
531,858 | https://en.wikipedia.org/wiki/Vanuatu%20rain%20forests | The Vanuatu rain forests are tropical and subtropical moist broadleaf forests ecoregion which includes the islands of Vanuatu, as well as the Santa Cruz Islands group of the neighboring Solomon Islands. It is part of the Australasian realm, which includes neighboring New Caledonia and the Solomon Islands, as well as Australia, New Guinea, and New Zealand.
Geography
The islands were created by the subduction of the northward-moving Australian Plate beneath the Pacific Plate. The surface geology of Vanuatu consists mostly of Pliocene-Pleistocene volcanic rocks and uplifted coral limestone. The Santa Cruz Islands have areas of both uplifted limestone and volcanic ash over limestone. The oldest rocks in Vanuatu are 38 million years old. The Santa Cruz islands are younger, with the oldest rocks less than 5 million years old.
Most of the islands are low-lying. The largest island is Espiritu Santo (3,955.5 km2). The highest peak is Mount Tabwemasana on Espiritu Santo (1,879 m).
Nendö, also known as Santa Cruz, is the largest of the Santa Cruz Islands with an area of 519 km2. Vanikoro is 190 km2, and Utupua is 69 km2. The highest peak in the Santa Cruz islands (924 m) is on Vanikoro. Nendö reaches over 500 meters elevation, and Utupua 350 m. They are made mostly of basaltic volcanic rocks of Pliocene origin, less than 5 million years old. The southeastern lowlands of Nendö are composed of uplifted Pleistocene reef limestone.
Smaller islands in the group include Tinakula (8 km2 and 800 meters elevation), a conical active stratovolcano 30 km north of Nendö, and the Reef Islands northeast of Nendö, composed of uplifted Pleistocene reef limestone and rising only 5 meters above sea level. The Duff Islands are a small chain with four main islands about 130 km northeast of Nendö, with a combined area of 14 km2 and which rise up to 300 m elevation. Tikopia (5 km2), Anuta (1 km2), and Fataka (5 km2) are small isolated islands east and southeast of Vanikoro.
Climate
The ecoregion has a tropical wet climate. The windward southeastern sides of the islands receive more rainfall. The leeward northwestern slopes of islands have a distinct dry season between April and October. Tropical cyclones occur regularly.
Flora
The natural plant communities on the islands include lowland rain forest, montane rain forest, seasonal forest and scrub, coastal strand, mangroves, vegetation on recent volcanic rocks, and secondary vegetation.
Lowland rain forest occurs on the southeastern, or windward, sides of Vanuatu's islands. There are several lowland rain forest types. Complex forest scrub densely covered with lianas is the most widespread forest type on the larger northern islands. Other types include high- and medium-stature forests, alluvial and floodplain forests, and mixed-species forests without conifers. Agathis-Calophyllum lowland forest is found on the southern islands of Erromango and Aneityum.
Lowland rainforest is the predominant plant community on the Santa Cruz Islands, and has some differences from the lowland rain forests on the islands further south. Typical species include Campnosperma brevipetiolatum, Calophyllum vitiense, Gmelina salomonensis, Maranthes corymbosa, Falcataria falcata, Pterocarpus indicus, and Endospermum medullosum. There is no well-developed montane forest, but the trees Metrosideros vitiensis, Syzygium branderhorstii, Syzygium buettnerianum, Syzygium effusum, Syzygium myriadenum, and Syzygium onesimum, and the conifers Agathis macrophylla and Retrophyllum vitiense, which are typically montane species elsewhere, grow in the islands' lowland forests. Agathis macrophylla grows only on Nendö and Vanikoro, and is found on ridges and slopes as an emergent tree in mixed forests or in monotypic stands.
In Vanuatu, montane rain forests extend from as low as 500 meters elevation up to patches of stunted cloud forest on the islands' highest peaks. They include the conifers Agathis macrophylla and Dacrycarpus imbricatus, together with broadleaf evergreen trees Metrosideros vitiensis, Syzygium spp., Pterophylla spp., Geissois spp., Quintinia spp., and Ascarina spp. The tree ferns Cyathea and Dicksonia are common, and the endemic palm Clinostigma harlandii is found on the islands Ambrym, Aneityum, and Erromango. Podocarpus vanuatuensis is a Vanuatu endemic, native to Aneityum and Erromango. Agathis silbae is endemic to the Cumberland Peninsula and Mount Tabwemasana, also known as Santo Peak, on the west coast of Espiritu Santo, from 450 to 760 meters elevation. It is a large emergent tree in lower montane rainforest on the wetter western and northwestern slopes of Espiritu Santo's central mountain range, with an average annual rainfall of about 4,500 mm. Associated trees include Calophyllum neo-ebudicum, Cryptocarya turbinata, Didymocheton sp., Myristica sp., and Podocarpus sp. Metrosideros tabwemasanaensis is a tree endemic to the montane forests of Mount Tabwemasana.
Seasonal forest, scrub, and grassland grow on the leeward sides of the islands. Semideciduous Kleinhovia hospita-Castanospermum australe forests are a transition between rain forest and dry forest, and include some rain forest species. Forest of gaiac (Acacia spirorbis) is found in drier areas, with a canopy up to 15 meters high. Thickets and savannas of the introduced tree Leucaena leucocephala and grasslands are also found on the leeward sides of the islands.
Littoral forests include Casuarina equisetifolia, Pandanus spp., Barringtonia asiatica, Terminalia catappa, Hernandia spp., and Thespesia populnea.
Coastal mangrove forests are found on some islands, and contain species of Rhizophora, Avicennia, Sonneratia, Xylocarpus, and Ceriops.
Fauna
Bats are the only native mammals in the ecoregion. There are twelve species – four megabats and eight microbats – five of which are endemic. The four megabats – Vanuatu flying fox (Pteropus anetianus), Temotu flying fox (Pteropus nitendiensis), Vanikoro flying fox (Pteropus tuberculatus), and Banks flying fox (Pteropus fundatus) – are endemic. Native microchiroptera include the Fijian blossom bat (Notopteris macdonaldi), Fijian mastiff bat (Chaerephon bregullae), Pacific sheath-tailed bat (Emballonura semicaudata), large-footed bat (Myotis adversus), little bent-wing bat (Miniopterus australis), great bent-winged bat (Miniopterus tristis), Temminck's trident bat (Aselliscus tricuspidatus), and fawn leaf-nosed bat (Hipposideros cervinus). The endemic Nendo tube-nosed fruit bat (Nyctimene sanctacrucis) is presumed extinct.
The Pacific boa (Candoia bibroni), also known as Bibron's bevel-headed boa, the Solomon Islands boa, or the Pacific ground boa (among several other names), is native to the islands and the surrounding region. It is unique among boid snakes for its "bevel"- or "spade"-shaped snout, used for digging; perhaps the closest comparable species is the Kenyan sand boa, which spends much of its time burrowing, lying in wait to ambush prey passing above.
There are 79 native bird species in Vanuatu. Fifteen species are endemic – Vanuatu scrubfowl (Megapodius layardi), Santa Cruz ground-dove (Gallicolumba sanctaecrucis), Tanna ground-dove (Gallicolumba ferruginea), Tanna fruit-dove (Ptilinopus tannensis), Baker's imperial pigeon (Ducula bakeri), palm lorikeet (Charmosyna palmarum), chestnut-bellied kingfisher (Todirhamphus farquhari), Vanikoro monarch (Mayrornis schistaceus), buff-bellied monarch (Neolalage banksiana), black-throated shrikebill (Clytorhynchus nigrogularis), Vanikoro flycatcher (Myiagra vanikorensis), Santa Cruz white-eye (Zosterops santaecrucis), yellow-fronted white-eye (Zosterops flavifrons), Sanford's white-eye (Woodfordia lacertosa), New Hebrides honeyeater (Phylidonyris notabilis), royal parrotfinch (Erythrura regia), Polynesian starling (Aplonis tabuensis), rusty-winged starling (Aplonis zelandica), and mountain starling (Aplonis santovestris). The ecoregion corresponds to the Vanuatu and Temotu endemic bird area.
Conservation
4.3%, or approximately 515 km2, of the ecoregion is in protected areas. Protected areas include:
Ambrym Megapode Reserve
Central Efate (Teouma) Forest Conservation Area
Erromango Kauri Forest Conservation Area
Lasenuwi Forest Conservation Area
Loru Protected Area
Pankumo Protected Area
Vatthe Forest Conservation Area
Western Peninsular Forest Conservation Area
Wiawi Conservation Area
External links
Vanuatu and Temotu endemic bird area BirdLife International.
References
Australasian ecoregions
Ecoregions of the Solomon Islands
Environment of Vanuatu
Geography of the Solomon Islands
Geography of Vanuatu
Tropical and subtropical moist broadleaf forests
Ecoregions of Vanuatu
Endemic Bird Areas | Vanuatu rain forests | Biology | 2,195 |
2,227,485 | https://en.wikipedia.org/wiki/Recursive%20data%20type | In computer programming languages, a recursive data type (also known as a recursively-defined, inductively-defined or inductive data type) is a data type for values that may contain other values of the same type. Data of recursive types are usually viewed as directed graphs.
An important application of recursion in computer science is in defining dynamic data structures such as Lists and Trees. Recursive data structures can dynamically grow to an arbitrarily large size in response to runtime requirements; in contrast, a static array's size requirements must be set at compile time.
Sometimes the term "inductive data type" is used for algebraic data types which are not necessarily recursive.
Example
An example is the list type, in Haskell:
data List a = Nil | Cons a (List a)
This indicates that a list of a's is either an empty list or a cons cell containing an 'a' (the "head" of the list) and another list (the "tail").
Another example is a similar singly linked type in Java:
class List<E> {
E value;
List<E> next;
}
This indicates that a non-empty list of type E contains a data member of type E, and a reference to another List object for the rest of the list (or a null reference to indicate that this is the end of the list).
Mutually recursive data types
Data types can also be defined by mutual recursion. The most important basic example of this is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically:
f: [t[1], ..., t[k]]
t: v f
A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types.
This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest:
t: v [t[1], ..., t[k]]
A tree t consists of a pair of a value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which require disentangling to prove results about.
In Standard ML, the tree and forest data types can be mutually recursively defined as follows, allowing empty trees:
datatype 'a tree = Empty | Node of 'a * 'a forest
and 'a forest = Nil | Cons of 'a tree * 'a forest
In Haskell, the tree and forest data types can be defined similarly:
data Tree a = Empty
| Node (a, Forest a)
data Forest a = Nil
| Cons (Tree a) (Forest a)
Theory
In type theory, a recursive type has the general form μα.T where the type variable α may appear in the type T and stands for the entire type itself.
For example, the natural numbers (see Peano arithmetic) may be defined by the Haskell datatype:
data Nat = Zero | Succ Nat
In type theory, we would say: Nat = μα. 1 + α, where the two arms of the sum type represent the Zero and Succ data constructors. Zero takes no arguments (thus represented by the unit type 1) and Succ takes another Nat (thus another element of μα. 1 + α).
There are two forms of recursive types: the so-called isorecursive types, and equirecursive types. The two forms differ in how terms of a recursive type are introduced and eliminated.
Isorecursive types
With isorecursive types, the recursive type μα.T and its expansion (or unrolling) T[μα.T/α] (where the notation X[Y/Z] indicates that all instances of Z are replaced with Y in X) are distinct (and disjoint) types with special term constructs, usually called roll and unroll, that form an isomorphism between them. To be precise: roll : T[μα.T/α] → μα.T and unroll : μα.T → T[μα.T/α], and these two are inverse functions.
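As a rough illustration only, this roll/unroll pair can be simulated in TypeScript (the language of a later example in this article) by wrapping one unrolling of the type in a class; the names ListF, roll and unroll are illustrative assumptions, not a standard API:

type ListF<A, X> = { tag: "nil" } | { tag: "cons"; head: A; tail: X }; // one unrolling: T with α replaced by X
class List<A> { constructor(public out: ListF<A, List<A>>) {} } // plays the role of μα.T
function roll<A>(x: ListF<A, List<A>>): List<A> { return new List(x); } // T[μα.T/α] → μα.T
function unroll<A>(xs: List<A>): ListF<A, List<A>> { return xs.out; } // μα.T → T[μα.T/α]
const nil = roll<number>({ tag: "nil" });
const one = roll<number>({ tag: "cons", head: 1, tail: nil }); // the list [1]

The class wrapper supplies the isomorphism explicitly at the term level, so the type checker never has to compare infinite expansions.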
Equirecursive types
Under equirecursive rules, a recursive type and its unrolling are equal – that is, those two type expressions are understood to denote the same type. In fact, most theories of equirecursive types go further and essentially specify that any two type expressions with the same "infinite expansion" are equivalent. As a result of these rules, equirecursive types contribute significantly more complexity to a type system than isorecursive types do. Algorithmic problems such as type checking and type inference are more difficult for equirecursive types as well. Since direct comparison does not make sense for equirecursive types, they can be converted into a canonical form in O(n log n) time, and canonical forms can easily be compared.
Isorecursive types capture the form of self-referential (or mutually referential) type definitions seen in nominal object-oriented programming languages, and also arise in type-theoretic semantics of objects and classes. In functional programming languages, isorecursive types (in the guise of datatypes) are common too.
Recursive type synonyms
In TypeScript, recursion is allowed in type aliases. Thus, the following example is allowed.
type Tree = number | Tree[];
let tree: Tree = [1, [2, 3]];
However, recursion is not allowed in type synonyms in Miranda, OCaml (unless -rectypes flag is used or it's a record or variant), or Haskell; so, for example the following Haskell types are illegal:
type Bad = (Int, Bad)
type Evil = Bool -> Evil
Instead, they must be wrapped inside an algebraic data type (even if it has only one constructor):
data Good = Pair Int Good
data Fine = Fun (Bool -> Fine)
This is because type synonyms, like typedefs in C, are replaced with their definition at compile time. (Type synonyms are not "real" types; they are just "aliases" for convenience of the programmer.) But if this is attempted with a recursive type, it will loop infinitely because no matter how many times the alias is substituted, it still refers to itself, e.g. "Bad" will grow indefinitely: Bad → (Int, Bad) → (Int, (Int, Bad)) → ... .
Another way to see it is that a level of indirection (the algebraic data type) is required to allow the isorecursive type system to figure out when to roll and unroll.
See also
Recursive definition
Algebraic data type
Inductive type
Node (computer science)
References
Sources
Data types
Type theory | Recursive data type | Mathematics | 1,488 |
1,812,188 | https://en.wikipedia.org/wiki/Convective%20overshoot | Convective overshoot is a phenomenon of convection carrying material beyond an unstable region of the atmosphere into a stratified, stable region. Overshoot is caused by the momentum of the convecting material, which carries the material beyond the unstable region.
Deep, moist convection in Earth's atmosphere
One example is thermal columns extending above the top of the equilibrium level (EL) in thunderstorms: unstable air rising from (or near) the surface normally stops rising at the EL (near the tropopause) and spreads out as an anvil cloud; but in the event of a strong updraft, unstable air is carried past the EL as an overshooting top or dome. A parcel of air will stop ascending at the maximum parcel level (MPL). This overshoot is responsible for most of the turbulence experienced in the cruise phase of commercial air flights.
Stellar convection
Convective overshoot also occurs at the boundaries of convective zones in stars. An example of this is at the base of the convection zone in the solar interior. The heat of the Sun's thermonuclear fusion is carried outward by radiation in the deep interior radiation zone and by convective circulation in the outer convection zone, but cool sinking material from the surface penetrates further into the radiative zone than theory would suggest. This affects the heat transfer rate and the temperature of the solar interior, which can be indirectly measured by helioseismology. The layer between the Sun's convective and radiative zones is called the tachocline.
Overshooting can have more pronounced effects on the evolution of stars that have a convective core, such as intermediate- and high-mass stars. Convective material that overshoots beyond the core mixes with the surrounding material, causing some of the surrounding material to mix into the core. As a result, the core mass at the end of the main sequence can be larger than would otherwise be expected. This leads to big differences in behaviour on the subgiant and giant branches for intermediate mass stars, and to radical changes in the evolution of massive supergiant stars.
References
Severe weather and convection
Cloud and fog physics
Solar phenomena | Convective overshoot | Physics | 451 |
43,595,892 | https://en.wikipedia.org/wiki/POlarization%20Emission%20of%20Millimeter%20Activity%20at%20the%20Sun | The POlarization Emission of Millimeter Activity at the Sun (POEMAS) is a solar patrol system composed of two radio telescopes with superheterodyne circular polarization receivers at 45 and 90 GHz. Since their half power beam width is around 1.4°, they observe the full sun. The acquisition system allows it to gather 100 values per second at both frequencies and polarizations, with a sensitivity of around 20 solar flux units (SFU) (1 SFU ≡ 10⁴ Jy). The telescopes saw first light in November 2011 and performed well for two years, during which they observed many flares. Since November 2013 the system has been stopped for repairs. The main interest of POEMAS is the observation of solar flares in a frequency range where there are very few detectors, filling the gap between microwave observations by the Radio Solar Telescope Network (1 to 15.4 GHz) and submillimeter observations by the Solar Submillimeter Telescope (212 and 405 GHz). Moreover, POEMAS is the only current telescope capable of carrying out circular-polarization solar flare observations at 90 GHz (although, in principle, ALMA band 3 may also observe at 90 GHz with circular polarization).
Science
POEMAS was designed to fill the gap between microwave and submillimeter (less than 1 mm) wavelength observations of solar activity. There are a number of different instruments around the world that monitor the sun at microwaves. The Nobeyama Radio Heliograph (NoRH) (Nobeyama, Japan) makes daily maps at 17 GHz (1.7 cm) with circular left and right polarizations and at 34 GHz (8.8 mm) in total intensity. The Nobeyama Radio Polarimeters (NoRP) (Nobeyama, Japan) are a set of patrol telescopes with receivers from 1 GHz (λ≈30 cm) to 80 GHz (λ≈3.7 mm) at selected frequencies and circular polarization detection (except at 80 GHz), with full sun disk spatial resolution. The Radio Solar Telescope Network (RSTN) is a worldwide network of telescopes with receivers at selected bands from a few hundred MHz (λ≈75 cm) to 15.4 GHz (λ≈2 cm). At the other end of the band, the Solar Submillimeter Telescope (SST), installed at Complejo Astronomico El Leoncito in San Juan Province, Argentina, observes the sun at 212 GHz (λ≈1.4 mm) and 405 GHz (λ≈0.7 mm). Since there is no observational time overlap between Japan and Argentina, the NoRH and NoRP cannot be used together with SST, and only data from some RSTN observatories have times in common with the SST.
The relevance of observing at 45 and 90 GHz comes from the need to determine the upturn frequency in the so-called THz events: if the main radiation mechanism is synchrotron radiation from accelerated electrons emitting at chromospheric or coronal heights, a spectrum with a long, descending logarithmic tail towards millimeter wavelengths is expected. In some cases this classical (textbook) shape is broken and an upturn or spectral reversion is observed. Since the reversion or upturn frequency has been estimated at around 50 to 100 GHz for the observed cases, the importance of POEMAS is justified.
See also
Sun
Chromosphere
Corona
Solar Flares
Radio astronomy
List of radio telescopes
Synchrotron radiation
References
External links
Nobeyama Radio Heliograph Official Site
Nobeyama Radio Polarimeters Official Site
Solar Submillimeter Telescope Official Site
Astronomical instruments
Radio telescopes
Submillimetre telescopes
Astronomical observatories in Argentina | POlarization Emission of Millimeter Activity at the Sun | Astronomy | 753 |
1,724,996 | https://en.wikipedia.org/wiki/Undo | Undo is an interaction technique which is implemented in many computer programs. It erases the last change made to the document, reverting it to an older state. In some more advanced programs, such as graphics processing software, undo will negate the last command applied to the file being edited. With the possibility of undo, users can explore and work without fear of making mistakes, because mistakes can easily be undone.
The expectations for undo are easy to understand: it should have predictable functionality and include all "undoable" commands. Usually undo is available until the user has undone all executed operations, but some actions are not stored in the undo list and thus cannot be undone. For example, saving a file is not undoable, though it is queued in the list to show that it was executed. Other actions which are usually not stored, and thus not undoable, are scrolling and selection.
The opposite of to undo is to redo. The redo command reverses the undo or advances the buffer to a more recent state.
The common components of undo functionality are the commands executed by the user, the history buffer(s) which store the completed actions, the undo/redo manager for controlling the history buffer, and the user interface for interacting with the user.
In most graphical applications for the majority of the mainstream operating systems (such as Microsoft Windows, Linux and BSDs), the keyboard shortcut for the undo command is Ctrl+Z or Alt+Backspace, and the shortcut for redo is Ctrl+Y or Ctrl+Shift+Z. In most macOS applications, the shortcut for the undo command is Command-Z, and the shortcut for redo is Command-Shift-Z. On all platforms, the undo/redo functions can also be accessed via the Edit menu.
History
The ability to undo an operation on a computer was independently invented multiple times, in response to how people used computers.
The File Retrieval and Editing System, developed starting in 1968 at Brown University, is reported to be the first computer-based system to have had an "undo" feature.
Warren Teitelman developed a Programmer's Assistant as part of BBN-LISP with an Undo function, by 1971.
The Xerox PARC Bravo text editor had an Undo command in 1974.
A 1976 research report by Lance A. Miller and John C. Thomas of IBM, Behavioral Issues in the Use of Interactive Systems, noted that "it would be quite useful to permit users to 'take back' at least the immediately preceding command (by issuing some special 'undo' command)." The programmers at the Xerox PARC research center assigned the keyboard shortcut Ctrl-Z to the undo command, which became a crucial feature of text editors and word processors in the personal computer era. In 1980, Larry Tesler of Xerox PARC began working at Apple Computer. There, he and Bill Atkinson advocated for the presence of an undo command as a standard fixture on the Apple Lisa. Atkinson was able to convince the individual developers of the Lisa's application software to include a single level of undo and redo, but was unsuccessful in lobbying for multiple levels. When Apple introduced the Lisa's successor, the Macintosh, it stipulated that all standard applications should include an “Undo” as the first command in the “Edit” menu, which has remained the standard on macOS and Windows to this day.
Multi-level undo commands were introduced in the 1980s, allowing the users to take back a series of actions, not just the most recent one. EMACS and other timeshared screen editors had it before personal computer software. CygnusEd was the first Amiga text editor with an unlimited undo/redo feature. AtariWriter, a word-processing application introduced in 1982, featured undo. NewWord, another word-processing program released by NewStar in 1984, had an unerase command. IBM's VisiWord also had an undelete command.
Undo and redo models
Undo models can be categorized as linear or non-linear. The non-linear undo model can be sub-classified into the script model, the US&R model, the triadic model, and selective undo.
Some common properties of models are:
stable execution property: A state is represented as an ordered list of commands. This means that a command "is always undone in the state that was reached after the original execution."
weakened stable execution: This means that if undo is executed, all commands which depend on the undone command are undone together with it.
stable result property: This property has a similar meaning to the stable execution property, except that the ordered list records the commands together with the results of their execution, instead of only the commands.
commutative: This means that the state reached after undoing and redoing two different commands is the same when they are executed in the converse order.
minimalistic undo property: It describes that "undo operation of command C undoes only command C and all commands younger than C which are dependent on C."
Linear undo
Linear undo is implemented with a stack (a last in, first out (LIFO) data structure) that stores a history of all executed commands. When a new command is executed, it is pushed onto the top of the stack. Therefore, only the most recently executed command can be undone and removed from the history. Undo can be repeated as long as the history is not empty.
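A minimal sketch of this stack-based scheme, written in TypeScript with a command pattern; the Command interface and UndoManager class are hypothetical names for illustration, not a standard library API:

interface Command { execute(): void; undo(): void; }

class UndoManager {
  private history: Command[] = []; // LIFO stack of executed commands
  run(cmd: Command): void { cmd.execute(); this.history.push(cmd); }
  undo(): void {
    const cmd = this.history.pop(); // only the most recent command is reachable
    if (cmd) cmd.undo();
  }
}

Each undoable operation is packaged as an object that knows how to reverse itself, so the manager needs no knowledge of the document being edited.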
Restricted linear model
The restricted linear model is an augmentation of the linear undo model. Plain linear undo does not keep the stable execution property if a new command is done while the history list still includes undone commands, so the restricted linear model clears these from the history list before a new command is added. Other restrictions are possible too: for example, the size of the history list can be limited so that, when a defined size is reached, the oldest command is deleted from the list.
Non-linear undo
The main difference between linear and non-linear undo is the possibility for the user to undo the executed commands in an arbitrary order: rather than undoing only the most recent command, the user can choose a command from the list. There are several subclasses which implement the non-linear model.
Script model
The script model handles user actions as editing a script of commands. The history list of executed commands is interpreted "as a script, the effect of an undo is defined to be the same as if the undone action had never occurred in the first place." As a result of undo, the state has to be as if the undone command had never been executed. A disadvantage of this model is that the user has to know the connection between the undone command and the current state to avoid side effects, one of which can be, for example, duplication. Another problem is that if "subsequent commands are redone in a different state that they were originally executed in direct manipulation interfaces, this reinterpretation of the original user action is not always obvious or well defined".
US&R model
The special feature of this model is that, besides undoing, it has the option of skipping a command: redoing a command can be skipped. The command which is skipped is marked as skipped but not deleted. When new commands are executed, the history list is retained, so the order of the executed commands remains reproducible. The order can be described through a history tree, which is a directed graph, "because it is possible to continue redoing commands from another branch creating a link in the graph". Even though the set of commands is simple and easy to understand, the complex structure with skipping and linking branches is hard to comprehend and to remember when the user wants to undo more than one step.
Triadic model
Besides undo and redo, this non-linear undo model offers a rotate operation. It uses the same data structure as the above-mentioned models, a history list, together with a separate redo list that holds the undone commands. The rotate operation moves the last command of the redo list to its front. On the one hand, this means that the next command to be redone can be selected by rotating it to the front; on the other hand, rotation can be used "to select the place in the redo list where the next undo operation will put the command". The redo list is therefore effectively unordered. "To undo an isolated command, the user has to undo a number of steps, rotate the redo list, and then redo a number of steps"; for redo, the list has to be rotated until the wanted command is at the front.
Selective undo
Jakubec et al. note that selective undo is a feature a model can offer, but that there is no clear definition of it. The authors therefore selected functions which a model should provide if it supports selective undo. It should be possible to "undo any executed action in the history buffer. Actions independent of the action being undone should be left untouched". Likewise, redo has to be possible for any undone command. The third requirement is that "no command can be automatically discarded from history buffer without direct user’s request." Selective undo implies that undo and redo are executable outside of any context, which raises three main issues. The first is that undone commands can end up outside their original context, which can create dead references that have to be handled. The second is that modified commands can be undone, so it has to be decided which state is presented after the undo. The third is the problem of discarding commands: selective undo keeps no pointer into the lists, which means that no command should be discarded from the stack.
Direct selective undo
Direct selective undo is an extension of restricted linear undo with a history tree. The selective-undo operation creates a copy of the selected command, executes it, and adds it to the history list. In addition, the two non-linear operations selective undo and selective redo are both defined, which makes the model more symmetric.
Multiuser application
When multiple users can edit the same document simultaneously, a multi-user undo is needed. Global multi-user undo reverts the latest action made to the document, regardless of who performed the edit. Local multi-user undo only reverts actions done by the local user, which requires a non-linear undo implementation.
Where undo can be used to backtrack through multiple edits, the redo command goes forward through the action history. Making a new edit usually clears the redo list. If a branching redo model is used, the new edit branches the action history.
The number of previous actions that can be undone varies by program, version, and hardware or software capabilities. For example, the default undo/redo stack size in Adobe Photoshop is 20 but can be changed by the user. As another example, earlier versions of Microsoft Paint only allowed up to three edits to be undone; the version introduced in Windows 7 increased this limit to 50.
Simplistic, single-edit undo features sometimes do away with "redo" by treating the undo command itself as an action that can be undone. This is known as the flip undo model, because the user can flip between two program states using the undo command. This was the standard model prior to the widespread adoption of multiple-level undo in the early 1990s.
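A flip undo model can be sketched by remembering exactly one earlier state and letting undo swap it with the current one, so that undo is its own inverse. This is an illustrative toy, assuming the whole program state fits in a single value:

```python
class FlipUndo:
    """Single-level undo: invoking undo twice returns to the newer state."""

    def __init__(self, state):
        self.state = state
        self._previous = state

    def edit(self, new_state):
        self._previous = self.state   # remember exactly one earlier state
        self.state = new_state

    def undo(self):
        # Swapping makes undo its own inverse: repeated calls flip
        # between the two most recent states.
        self.state, self._previous = self._previous, self.state
```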
Undo implementation
Undo can be implemented through different patterns. The most common are the command pattern and the memento pattern.
Command pattern
The command pattern is a software design pattern that encapsulates the information of an operation in a command object; every action is stored as an object. An abstract command class declares an execute operation, so every command object provides one. For undo, each command also has to provide an unexecute operation which reverses the effect of its execute operation, and the executed commands are stored in a history list. Undo and redo are implemented by running through the list backwards and forwards, calling the unexecute or execute operation of each command.
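A minimal sketch of this structure in Python (the InsertText command and the list-based document are made-up examples, not taken from any concrete application):

```python
from abc import ABC, abstractmethod

class Command(ABC):
    @abstractmethod
    def execute(self): ...

    @abstractmethod
    def unexecute(self): ...              # reverses the effect of execute

class InsertText(Command):
    def __init__(self, document, text):
        self.document, self.text = document, text

    def execute(self):
        self.document.append(self.text)   # apply the edit

    def unexecute(self):
        self.document.pop()               # reverse the edit

class History:
    def __init__(self):
        self.commands = []   # history list of executed commands
        self.cursor = 0      # boundary between done and undone commands

    def do(self, command):
        del self.commands[self.cursor:]   # discard the undone tail
        command.execute()
        self.commands.append(command)
        self.cursor += 1

    def undo(self):
        if self.cursor > 0:               # walk backwards through the list
            self.cursor -= 1
            self.commands[self.cursor].unexecute()

    def redo(self):
        if self.cursor < len(self.commands):  # walk forwards again
            self.commands[self.cursor].execute()
            self.cursor += 1
```

Keeping a cursor into the list rather than deleting entries on undo is what lets redo simply re-execute the command at the cursor.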
For single undo, only the most recently executed command is stored. For multi-level undo, by contrast, the whole history list of commands is saved, and the number of undo levels is determined by the maximum length of that list.
Memento pattern
With the memento pattern, the internal state of an object is stored. The object in which the state is saved is called the memento, and it is created by its originator: the originator returns a memento initialized with information about its current state, so that when undo is executed this earlier state can be restored. The full contents of the memento are visible only to the originator.
In the memento pattern, the undo mechanism is called the caretaker. It is responsible for the safekeeping of the mementos but never changes their contents. For undo, the caretaker requests a memento from the originator before an operation and hands it back to the originator when the undo is performed.
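The three roles might be sketched as follows; the Editor originator with a single text field is a made-up example:

```python
class Memento:
    """Snapshot of the originator's state; opaque to everyone else."""
    def __init__(self, state):
        self._state = state

class Editor:
    """Originator: creates mementos and restores itself from them."""
    def __init__(self):
        self.text = ""

    def save(self):
        return Memento(self.text)

    def restore(self, memento):
        self.text = memento._state   # only the originator reads the memento

class Caretaker:
    """Keeps mementos safe but never inspects or changes their contents."""
    def __init__(self, editor):
        self._editor = editor
        self._mementos = []

    def backup(self):
        self._mementos.append(self._editor.save())  # call before each operation

    def undo(self):
        if self._mementos:
            self._editor.restore(self._mementos.pop())
```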
Most of an undo mechanism can be implemented without any dependency on specific applications or command classes. This includes "the management of history list, the history scroller, menu entries for undo and redo and update of the menu entries depending on the name of the next available command."
Every command class has a do method which is called when the command is executed, and an undo method which implements the reverse operation of the do method. There are several different strategies for implementing the reverse:
full checkpoint: The complete application state is saved after each command is executed. This is the easiest strategy to implement, but it is inefficient and therefore rarely used.
complete rerun: Only the initial state is saved; every state in the history list can be reached by "starting with the initial state and redoing all commands from the beginning of the history."
partial checkpoint: This is the most widely used strategy. Only the part of the application state changed by a command is saved, and undo restores that part of the state to its previous value.
inverse function: An inverse function needs no saved state information: "For example, moving can be reversed by moving the object back by relative amount." For selective undo, however, this strategy does not retain enough state information (a minimal sketch of the inverse-function strategy follows this list).
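A sketch of the inverse-function strategy, assuming a hypothetical graphical object with x and y coordinates; no state is saved, because the reverse displacement is computed from the command's own parameters:

```python
from types import SimpleNamespace

class MoveCommand:
    def __init__(self, obj, dx, dy):
        self.obj, self.dx, self.dy = obj, dx, dy

    def do(self):
        self.obj.x += self.dx
        self.obj.y += self.dy

    def undo(self):
        self.obj.x -= self.dx   # the inverse: move back by the same amount
        self.obj.y -= self.dy

shape = SimpleNamespace(x=0, y=0)
cmd = MoveCommand(shape, 5, -3)
cmd.do()
cmd.undo()
print(shape.x, shape.y)   # 0 0
```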
See also
Reversible computing
Rollback (data management)
Undeletion
Version control (native file format)
References
External links
The Age of Undoing - Article about the linguistic history of Undo at The New York Times Magazine.
Cascading undo control - a paper focused on what is cascading undo and how it might be presented to users
Text editor features
Reversible computing
Computer-related introductions in 1968
Combinatorial explosion
In mathematics, a combinatorial explosion is the rapid growth of the complexity of a problem due to the way its combinatorics depends on input, constraints and bounds. Combinatorial explosion is sometimes used to justify the intractability of certain problems. Examples of such problems include certain mathematical functions, the analysis of some puzzles and games, and some pathological examples which can be modelled as the Ackermann function.
Examples
Latin squares
A Latin square of order n is an n × n array with entries from a set of n elements, with the property that each element of the set occurs exactly once in each row and each column of the array. An example of a Latin square of order three is given by:
{| class="wikitable" style="margin-left:auto;margin-right:auto;text-align:center;width:6em;height:6em;table-layout:fixed;"
|-
| 1|| 2 || 3
|-
| 2 || 3 || 1
|-
| 3 || 1 || 2
|}
A common example of a Latin square would be a completed Sudoku puzzle. A Latin square is a combinatorial object (as opposed to an algebraic object) since only the arrangement of entries matters and not what the entries actually are. The number of Latin squares as a function of the order (independent of the set from which the entries are drawn) provides an example of combinatorial explosion, as illustrated by the following table; the defining property can also be checked mechanically, as in the script after the table.
{| class="wikitable" style="margin-left:auto;margin-right:auto;text-align:center;"
|-
! Order !! Number of Latin squares
|-
| 1 || 1
|-
| 2 || 2
|-
| 3 || 12
|-
| 4 || 576
|-
| 5 || 161,280
|}
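A small Python sketch of the check:

```python
def is_latin_square(square):
    """True if each symbol occurs exactly once in every row and column."""
    n = len(square)
    symbols = set(square[0])
    if len(symbols) != n:
        return False
    rows_ok = all(len(row) == n and set(row) == symbols for row in square)
    cols_ok = all(set(col) == symbols for col in zip(*square))
    return rows_ok and cols_ok

print(is_latin_square([[1, 2, 3], [2, 3, 1], [3, 1, 2]]))  # True
```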
Sudoku
A combinatorial explosion can also occur in some puzzles played on a grid, such as Sudoku. A Sudoku is a type of Latin square with the additional property that each element occurs exactly once within sub-sections of size √n × √n (called boxes). Combinatorial explosion occurs as n increases, creating limits to the properties of Sudokus that can be constructed, analyzed, and solved. For the standard 9 × 9 grid, for example, there are 6,670,903,752,021,072,936,960 valid Sudoku solution grids.
Games
One example of combinatorial complexity leading to a solvability limit in games is solving chess (a game with 64 squares and 32 pieces). Chess is not a solved game. In 2005, all chess endings with six pieces or fewer were solved, showing the result of each position if played perfectly. It took ten more years to add one more piece, completing the 7-piece tablebase. Adding yet another piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity.
Furthermore, the prospect of solving larger chess-like games becomes more difficult as the board size is increased, such as in large chess variants and infinite chess.
Computing
Combinatorial explosion can occur in computing environments in a way analogous to communications and multi-dimensional space. Imagine a simple system with only one variable, a boolean called A. The system has two possible states: A = true or A = false. Adding another boolean variable B gives the system four possible states: A = true and B = true, A = true and B = false, A = false and B = true, and A = false and B = false. A system with n booleans has 2^n possible states, while a system of n variables each with Z allowed values (rather than just the two boolean values true and false) will have Z^n possible states.
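The growth is easy to observe directly; in this Python sketch, itertools.product enumerates the Z^n combinations, which quickly becomes infeasible:

```python
from itertools import product

Z = (True, False)                 # allowed values per variable
for n in (1, 2, 3, 10, 20):
    print(n, "variables ->", len(Z) ** n, "states")   # Z^n states

# Explicit enumeration is only practical for small n:
states = list(product(Z, repeat=3))   # the 8 states of a 3-boolean system
```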
The possible states can be thought of as the leaf nodes of a tree of height n, where each node has Z children. This rapid increase of leaf nodes can be useful in areas like searching, since many results can be accessed without having to descend very far. It can also be a hindrance when manipulating such structures.
A class hierarchy in an object-oriented language can be thought of as a tree, with different types of object inheriting from their parents. If different classes need to be combined, such as in a comparison (like A < B) then the number of possible combinations which may occur explodes. If each type of comparison needs to be programmed then this soon becomes intractable for even small numbers of classes. Multiple inheritance can solve this, by allowing subclasses to have multiple parents, and thus a few parent classes can be considered rather than every child, without disrupting any existing hierarchy.
An example is a taxonomy where different vegetables inherit from their ancestor species. Attempting to compare the tastiness of each vegetable with the others becomes intractable, since the hierarchy only contains information about genetics and makes no mention of tastiness. However, instead of having to write comparisons for carrot/carrot, carrot/potato, carrot/sprout, potato/potato, potato/sprout, and sprout/sprout, they can all multiply inherit from a separate class of tasty while keeping their current ancestor-based hierarchy; then all of the above can be implemented with only a tasty/tasty comparison, as in the sketch below.
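A sketch of this mixin approach in Python, with made-up Vegetable and Tasty classes; the single comparison in Tasty covers every pair of species:

```python
class Vegetable:
    """Existing ancestry-based hierarchy; knows nothing about taste."""

class Tasty:
    """Mixin: one tasty/tasty comparison replaces one per pair of classes."""
    tastiness = 0

    def __lt__(self, other):
        return self.tastiness < other.tastiness

class Carrot(Vegetable, Tasty):
    tastiness = 7

class Sprout(Vegetable, Tasty):
    tastiness = 2

print(Sprout() < Carrot())   # True, via the single shared comparison
```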
Arithmetic
Suppose we take the factorial of n:
n! = 1 × 2 × 3 × ⋯ × (n − 1) × n
Then 1! = 1, 2! = 2, 3! = 6, and 4! = 24. However, we quickly get to extremely large numbers, even for relatively small n. For example, 100! ≈ 9.33 × 10^157, a number so large that it cannot be displayed on most calculators, and vastly larger than the estimated number of fundamental particles in the observable universe.
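Python's arbitrary-precision integers make the growth easy to demonstrate:

```python
import math

for n in (1, 2, 3, 4, 10, 100):
    print(n, math.factorial(n))

print(len(str(math.factorial(100))))   # 158 decimal digits in 100!
```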
Communication
In administration and computing, a combinatorial explosion is the rapidly accelerating increase in communication lines as organizations are added in a process. (This growth is often casually described as "exponential" but is actually polynomial, specifically quadratic.)
If two organizations need to communicate about a particular topic, it may be easiest to communicate directly in an ad hoc manner—only one channel of communication is required. However, if a third organization is added, three separate channels are required. Adding a fourth organization requires six channels; five, ten; six, fifteen; etc.
In general, it will take n(n − 1)/2 communication lines for n organizations, which is just the number of 2-combinations of n elements (see also Binomial coefficient).
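In Python, math.comb computes the same count directly:

```python
import math

for n in range(2, 7):
    print(n, "organizations ->", math.comb(n, 2), "lines")
# 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15
```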
The alternative approach is to realize when this communication will not be a one-off requirement, and produce a generic or intermediate way of passing information. The drawback is that this requires more work for the first pair, since each must convert its internal approach to the common one, rather than the superficially easier approach of just understanding the other.
See also
Birthday problem
Exponential growth
Metcalfe's law
Curse of dimensionality
Information explosion
Intractability (complexity)
Second half of the chessboard
References
Combinatorics
Combinatorial game theory
Game theory
Combinatorial Chemistry & High Throughput Screening is a peer-reviewed scientific journal that covers combinatorial chemistry. It was established in 1998 and is published by Bentham Science Publishers. The editor-in-chief is Gerald H. Lushington (LiS Consulting, Lawrence, KS, USA). The journal has five sections: Combinatorial/Medicinal Chemistry, Chemo/Bio Informatics, High Throughput Screening, Pharmacognosy, and Laboratory Automation.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2014 impact factor of 1.222, ranking it 40th out of 70 journals in the category "Chemistry, Applied".
References
External links
Biochemistry journals
Bentham Science Publishers academic journals
Academic journals established in 1998
English-language journals
Combinatorial chemistry