| id (int64) | url (string) | text (string) | source (string, nullable) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
8,756,788 | https://en.wikipedia.org/wiki/One-pass%20algorithm | In computing, a one-pass algorithm or single-pass algorithm is a streaming algorithm which reads its input exactly once. It does so by processing items in order, without unbounded buffering; it reads a block into an input buffer, processes it, and moves the result into an output buffer for each step in the process. A one-pass algorithm generally requires O(n) (see 'big O' notation) time and less than O(n) storage (typically O(1)), where n is the size of the input. An example of a one-pass algorithm is the Sondik partially observable Markov decision process.
Example problems solvable by one-pass algorithms
Given any list as an input:
Count the number of elements.
Given a list of numbers:
Find the k largest or smallest elements, k given in advance.
Find the sum, mean, variance and standard deviation of the elements of the list (a one-pass sketch is given after this list). See also Algorithms for calculating variance.
Given a list of symbols from an alphabet of k symbols, given in advance.
Count the number of times each symbol appears in the input.
Find the most or least frequent elements.
Sort the list according to some order on the symbols (possible since the number of symbols is limited and known in advance).
Find the maximum gap between two appearances of a given symbol.
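To make the running-statistics item above concrete (as referenced in that item), here is a minimal, illustrative Python sketch of a one-pass computation of count, sum, mean, variance and standard deviation using Welford's method. The function name, the example data, and the choice of population variance are illustrative assumptions, not part of the article.

```python
import math

def one_pass_stats(stream):
    """Count, sum, mean, variance and standard deviation in a single pass,
    using O(1) extra storage (Welford's method)."""
    n = 0
    total = 0.0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the current mean
    for x in stream:
        n += 1
        total += x
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)  # second factor uses the updated mean
    if n == 0:
        return 0, 0.0, float("nan"), float("nan"), float("nan")
    variance = m2 / n  # population variance
    return n, total, mean, variance, math.sqrt(variance)

# The iterator is consumed exactly once, element by element.
print(one_pass_stats(iter([2, 4, 4, 4, 5, 5, 7, 9])))
# -> approximately (8, 40.0, 5.0, 4.0, 2.0)
```

The input is read exactly once and only a constant number of accumulators is kept, which is what makes this a one-pass streaming algorithm.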
Example problems not solvable by one-pass algorithms
Given any list as an input:
Find the nth element from the end (or report that the list has fewer than n elements).
Find the middle element of the list. However, this is solvable with two passes: Pass 1 counts the elements and pass 2 picks out the middle one.
Given a list of numbers:
Find the median.
Find the modes (This is not the same as finding the most frequent symbol from a limited alphabet).
Sort the list.
Count the number of items greater than or less than the mean. However, this can be done in constant memory with two passes: Pass 1 finds the average and pass 2 does the counting (a sketch follows below).
The two-pass algorithms above are still streaming algorithms but not one-pass algorithms.
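As a contrast with the one-pass sketch above, the following is a minimal Python version of the two-pass "count items above/below the mean" procedure: it reads the input twice but still uses only constant memory, so it is a streaming algorithm without being one-pass. The function name and the assumption that the input can be re-read (e.g., a file or list rather than a one-shot generator) are illustrative, not from the article.

```python
def count_around_mean(read_input):
    """Two passes, O(1) memory. `read_input` must return a fresh iterator
    over the same data each time it is called (e.g., lambda: open(path))."""
    # Pass 1: compute the mean.
    n = 0
    total = 0.0
    for x in read_input():
        n += 1
        total += x
    mean = total / n
    # Pass 2: count elements strictly above and strictly below the mean.
    above = below = 0
    for x in read_input():
        if x > mean:
            above += 1
        elif x < mean:
            below += 1
    return mean, above, below

data = [1.0, 2.0, 3.0, 10.0]
print(count_around_mean(lambda: iter(data)))  # mean 4.0, 1 above, 3 below
```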
References
Streaming algorithms | One-pass algorithm | [
"Technology"
] | 435 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
8,757,546 | https://en.wikipedia.org/wiki/Between%20the%20Strokes%20of%20Night | Between the Strokes of Night (1985) is a science fiction novel by English-American writer Charles Sheffield. It first appeared in the March to June 1985 issues of Analog Science Fiction/Science Fact before being published by Baen Books in July 1985. The story is divided into two vastly separated periods: the near future of 2010, and the far future of 29,000 AD. Owing to the unique technological mechanisms of the novel, the same cast of characters appears in both parts, though it is not a time travel story.
Plot summary
The story begins in the year 2010, which was 25 years in the future at the time of the novel's writing. A UN-financed research lab is pursuing a strange goal: manipulating metabolism and brain function in order to eliminate the need for sleep. They are currently working on Kodiak bears and domestic cats, but hope to adapt their techniques to humans. The world situation is dire. Global warming is in full swing. Crop failures and production shortfalls are dragging down the standard of living, with no sign of relenting. Political tensions are very high.
Meanwhile, an eccentric billionaire industrialist has privately financed the construction of many massive orbital arcologies. Through asteroid mining, these space stations have collectively become the world's single richest entity. The UN cuts funding for the zero-sleep lab, and the industrialist hires its entire staff to work in his primary station.
In the middle of the scientists' rocket approach to the station, catastrophe strikes. China, whose population is suffering massive famine, launches a desperate nuclear attack against the West. The mutually assured destruction policy plays out, and the new station residents watch as the world is destroyed below them. The industrialist is so distraught by the end of Earth civilization that he suffers a fatal heart attack. His dying words to the chief scientist reveal that his real motive for hiring them was to research suspended-animation technology. His dream is to fit the arcologies with interstellar drives and create human colonies on extrasolar planets.
Part II begins nearly 30,000 years later. On a planet called Pentecost in the Eta Cassiopeiae system, a large human civilization of indeterminate technological level now exists. A standout feature of their culture is "Planetfest", a series of grueling endurance challenges. The top 25 finalists are given large prizes such as high government positions or land holdings. This civilization is aware of its Earth origins only in a legendary sense. They have limited space-travel capacity, and citizens who go to work in space come back with rumors about beings called Immortals, who apparently live forever, can travel light years in days, and have some kind of shadowy influence on the planetary government.
The story follows a Planetfest contestant, Peron, who has just learned that he finished in third place. This year the winners are all taken to space, where further competition will send the top 10 to meet and work with the mysterious Immortals. Peron makes fast friends with the other top finalists, and during their next cycle of challenges they begin to uncover suspicious elements of the Immortals, Planetfest, and their entire society. During one of the off-planet trials, Peron is critically injured, and another contestant (a ringer for the Immortals) makes a snap decision to bring him to the Immortals prematurely in order to save his life.
Peron awakens on a spaceship in a strange dream-like state and is introduced to the ship's Immortal crew, some of whom are scientists from the first part of the book. They consider Peron a nuisance for circumventing the normal process of being indoctrinated into Immortal society from a distance before meeting them. He is given very little information, but witnesses the Immortals teleport throughout the ship and make objects appear in their hands at will. His compatriots are all being held in suspended animation. Peron breaks away from the Immortals' monitoring and discovers the secret to their power. He gains control of the ship, awakens his friends, and holds the ship hostage until the Immortals explain what is going on.
The previous 30 millennia of human history are then summarized quickly. After the nuclear holocaust the self-sufficient space arcologies (with a total population of less than 1 million) began to fragment, and some went off looking for new planets, as their industrialist founder had intended. The majority stayed in Earth orbit, continuing to use the resources available in the home system. The travellers developed very slowly, because they had to spend all their energy on survival in deep space. Those left behind continued scientific research and tried to re-colonize Earth, but the severe nuclear winter led into a 10,000-year ice age.
Their crowning scientific achievement was called Mode II Consciousness or S-Space. This was an accidental byproduct of their zero-sleep project, which revealed a way to slow human metabolism and consciousness such that a person remains fully aware but perceives time at 1/2000th the normal rate. This explains how the Immortals live "forever" and can travel between stars in "days": those durations are calculated from the subjective perspective of someone living in S-Space. Their ability to make objects appear in their hands instantly is simply the result of service robots placing the objects there at normal speed, which is too fast to notice from the perspective of S-Space.
After this discovery, the leading arcology decides to track down the traveling arcologies. Their trip takes place in S-Space so they never age, earning them the Immortal moniker. Meanwhile, the normal-space (N-space) travellers have endured hundreds of generations and repeated political upheavals. The Immortals discover that due to their twisted metabolisms they cannot breed. Using their vastly superior technology, they control the new planet-based colonies from behind the scenes and use the Planetfest games as a recruiting method to reinforce their numbers. Peron and company commandeer the ship and head back, while in S-Space, to their legendary point of origin, Earth.
En route they realize that centuries have passed on their homeworld and there is no point in ever returning. The ship also encounters shadowy deep-space life forms of ambiguous intelligence, which are only visible from S-Space. The Immortal crew dismisses this routine sighting as just another mystery of the galaxy. Peron arrives at Earth, finding it to be nothing more than a mostly frozen nature preserve. The group discusses their next move and resolves to uncover more secrets about the Immortals. While in orbit around Earth they detect that a large portion of the radio traffic on the Immortals' communication network seems to be coming from nowhere.
When they track down the location, they find the hidden Immortal headquarters isolated in deep space. Peron's gang manages to evade security and stow away aboard a supply ship bound for the headquarters. Upon arrival they are immediately captured by the superior security at HQ. Here they meet the other scientist characters from Part I and are congratulated for coming so far. They are invited to become equal partners in the quest to solve a new problem.
Apparently, the deep-space life forms they briefly saw earlier are miniature versions of giant entities situated in the gulfs of deep space between galaxies. These enormous beings are unquestionably intelligent, and the Immortal HQ is actually a research station entirely devoted to studying them. The beings communicate on extremely long wavelengths, which are so slow that even S-Space is woefully inadequate to process them. However, the Immortals have interpreted some signals, which seem to indicate that the deep-space beings predict the stars in the spiral arm will all mysteriously go dark within the next 40,000 years, an impossibly short time on the cosmological scale. Whether the deep-space beings are actively causing this artificial transformation is unknown.
To better understand the problem, the Immortals are devising a new T-Space, an even more radical slowing of human consciousness. Peron's group agrees to help, but insists on building a new facility that will be operated only in N-space, resisting the logic that S-Space is the superior mode of operation. After much debate, the Immortal scientists agree to the plan. The narrative ends here, but the last few pages are from the perspective of one of Peron's friends, who has volunteered as a guinea pig for T-Space. He relates the last 5 T-minutes of the universe, which correspond to over 1,000 years of normal time. He witnesses the final Big Crunch while somehow he and the deep-space beings remain unaffected by the singularity.
References
External links
1985 American novels
Novels by Charles Sheffield
Fiction set in 2010
Fiction set in the 7th millennium or beyond
Hard science fiction
Sleep in fiction
Dystopian novels
Fiction about asteroid mining
Novels about nuclear war and weapons
Space exploration novels
Fiction about time
Novels about extraterrestrial life
Fiction about superhuman abilities
Novels about the end of the universe
Climate change novels
Novels first published in serial form
Baen Books books
Works originally published in Analog Science Fiction and Fact | Between the Strokes of Night | [
"Physics",
"Biology"
] | 1,803 | [
"Behavior",
"Sleep in fiction",
"Physical quantities",
"Time",
"Fiction about time",
"Spacetime",
"Sleep"
] |
8,757,778 | https://en.wikipedia.org/wiki/2%20Ursae%20Minoris | 2 Ursae Minoris (2 UMi) is a single star a few degrees away from the northern celestial pole. Despite its Flamsteed designation, the star is actually located in the constellation Cepheus. This change occurred when the constellation boundaries were formally set in 1930 by Eugene Delporte. Therefore, the star is usually referred only by its catalog numbers such as HR 285 or HD 5848. It is visible to the naked eye as a faint, orange-hued star with an apparent visual magnitude of 4.244. This object is located 280 light years away and is moving further from the Earth with a heliocentric radial velocity of +8 km/s. It is a candidate member of the Hyades Supercluster.
This is an aging K-type star with a stellar classification of K2 II-III, showing a luminosity class with blended traits of a giant and a bright giant. It has 2.3 times the mass of the Sun and has expanded to 24 times the Sun's radius. The star is radiating around 215 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,513 K.
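As a quick, illustrative cross-check of the figures quoted above (not part of the article), the Stefan-Boltzmann scaling L/L_sun = (R/R_sun)^2 (T_eff/T_sun)^4 reproduces the stated luminosity from the stated radius and temperature; the solar effective temperature of roughly 5,772 K is an assumed reference value.

```python
# Hedged sketch: verify that 24 R_sun at 4,513 K gives roughly 215 L_sun.
T_SUN = 5772.0        # assumed solar effective temperature in kelvin
radius_ratio = 24.0   # R / R_sun, from the article
t_eff = 4513.0        # effective temperature in kelvin, from the article

luminosity_ratio = radius_ratio**2 * (t_eff / T_SUN)**4
print(f"L / L_sun ~ {luminosity_ratio:.0f}")  # prints roughly 215
```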
References
External links
Stars — 2 Ursae Minoris
K-type bright giants
K-type giants
Hyades Stream
Cepheus (constellation)
Durchmusterung objects
Ursae Minoris, 02
005372
005372
0285 | 2 Ursae Minoris | [
"Astronomy"
] | 297 | [
"Constellations",
"Cepheus (constellation)"
] |
8,757,878 | https://en.wikipedia.org/wiki/Medical%20Products%20Agency%20%28Sweden%29 | The Medical Products Agency (MPA; ) is the government agency in Sweden responsible for regulation and surveillance of the development, manufacturing and sale of medicinal drugs, medical devices and cosmetics.
Its task is also to ensure that both patients and healthcare professionals have access to safe and effective medicinal products and that these are used in a rational and cost-effective manner.
The Swedish Medical Products Agency is one of the leading regulatory authorities in the EU. During the last five years, the Swedish MPA has been among the top three agencies in Europe, as measured by the number of approval processes managed for central (i.e. European) approvals of medicines. The Swedish MPA also has strong representation in more than 110 working groups and committees within the scope of the Heads of Medicines Agencies (HMA) and the European Medicines Agency (EMA) for the regulation of medical products in Europe.
The Medical Products Agency is a government body under the aegis of the Swedish Ministry of Health and Social Affairs. Its operations are largely financed through fees. Approximately 750 people work at the agency; most are pharmacists and doctors.
General directors
1990–1999: Kjell Strandberg
1999–2008: Gunnar Alván
2008–2014: Christina Rångemark Åkerman
2014–2020: Catarina Andersson Forsman
2020–2021: Joakim Brandberg (acting)
2021– : Björn Eriksson
Criticism
In 2016, the Swedish National Audit Office published an audit report examining how the state (the government, the Medical Products Agency, the National Board of Health and Welfare and the Swedish Agency for Medical and Social Evaluation) handles the pharmaceutical industry's influence over state drug control and knowledge management. In its review, the National Audit Office sharply criticizes the Medical Products Agency for shortcomings on several points. However, the Medical Products Agency has pointed out that several of the conclusions were based on claims that lack objective support. The government also rejected large parts of the National Audit Office's criticism.
See also
European Medicines Agency
References
External links
The Swedish Medical Products Agency, MPA
The Innovation Office at the MPA
National agencies for drug regulation
Government agencies of Sweden
Medical and health organizations based in Sweden | Medical Products Agency (Sweden) | [
"Chemistry"
] | 434 | [
"National agencies for drug regulation",
"Drug safety"
] |
8,758,154 | https://en.wikipedia.org/wiki/Berberine | Berberine is a quaternary ammonium salt from the protoberberine group of benzylisoquinoline alkaloids, occurring naturally as a secondary metabolite in some plants including species of Berberis, from which its name is derived.
Due to their yellow pigmentation, raw Berberis materials were once commonly used to dye wool, leather, and wood. Under ultraviolet light, berberine shows a strong yellow fluorescence, making it useful in histology for staining heparin in mast cells. As a natural dye, berberine has a color index of 75160.
Research
Studies on the pharmacological effects of berberine, including its potential use as a medicine, are preliminary basic research: some studies are conducted on cell cultures or animal models, whereas clinical trials investigating the use of berberine in humans are limited. A 2023 review study stated that berberine may improve lipid concentrations. High-quality, large clinical studies are needed to properly evaluate the effectiveness and safety of berberine in various health conditions, because existing studies are insufficient to draw reliable conclusions.
Berberine supplements are widely available in the U.S. but have not been approved by the U.S. Food and Drug Administration (FDA) for any specific medical use. Researchers publicly warn that studies linking berberine to supposed health benefits are limited. Furthermore, the quality of berberine supplements can vary between brands. A study conducted in 2017 found that out of 15 different products sold in the U.S., only six contained at least 90% of the specified berberine amount.
Drug interactions
Berberine is known to inhibit the activity of CYP3A4, an important enzyme involved in drug metabolism and clearance of endogenous substances, including steroid hormones such as cortisol, progesterone and testosterone. Several studies have demonstrated that berberine can increase the concentrations of cyclosporine in renal transplant patients and midazolam in healthy adult volunteers, confirming its inhibitory effect on CYP3A4.
Biological sources
Berberis vulgaris (barberry)
Berberis aristata (tree turmeric)
Berberis thunbergii
Fibraurea tinctoria
Mahonia aquifolium (Oregon grape)
Hydrastis canadensis (goldenseal)
Xanthorhiza simplicissima (yellowroot)
Phellodendron amurense (Amur cork tree)
Coptis chinensis (Chinese goldthread)
Tinospora cordifolia
Argemone mexicana (prickly poppy)
Eschscholzia californica (California poppy)
Berberine is usually found in the roots, rhizomes, stems, and bark.
Biosynthesis
The alkaloid berberine has a tetracyclic skeleton derived from a benzyltetrahydroisoquinoline system with the incorporation of an extra carbon atom as a bridge. Formation of the berberine bridge is rationalized as an oxidative process in which the N-methyl group, supplied by S-adenosyl methionine (SAM), is oxidized to an iminium ion, and a cyclization to the aromatic ring occurs by virtue of the phenolic group.
Reticuline is the immediate precursor of protoberberine alkaloids in plants. Berberine is an alkaloid derived from tyrosine. L-DOPA and 4-hydroxypyruvic acid both come from L-tyrosine. Although two tyrosine molecules are used in the biosynthetic pathway, only the phenethylamine fragment of the tetrahydroisoquinoline ring system is formed via DOPA; the remaining carbon atoms come from tyrosine via 4-hydroxyphenylacetaldehyde.
References
Aromatase inhibitors
Benzodioxoles
Benzylisoquinoline alkaloids
CYP2D6 inhibitors
CYP3A4 inhibitors
DNA-binding substances
Hypolipidemic agents
M2 receptor agonists
Nitrogen heterocycles
Quaternary ammonium compounds
Traditional Chinese medicine | Berberine | [
"Biology"
] | 845 | [
"Genetics techniques",
"DNA-binding substances"
] |
8,758,178 | https://en.wikipedia.org/wiki/FlyBase | FlyBase is an online bioinformatics database and the primary repository of genetic and molecular data for the insect family Drosophilidae. For the most extensively studied species and model organism, Drosophila melanogaster, a wide range of data are presented in different formats.
Information in FlyBase originates from a variety of sources ranging from large-scale genome projects to the primary research literature. These data types include mutant phenotypes; molecular characterization of mutant alleles and other deviations; cytological maps; wild-type expression patterns; anatomical images; transgenic constructs and insertions; sequence-level gene models; and molecular classification of gene product functions. Query tools allow navigation of FlyBase through DNA or protein sequence, by gene or mutant name, or through terms from the several ontologies used to capture functional, phenotypic, and anatomical data. The database offers several different query tools in order to provide efficient access to the data available and facilitate the discovery of significant relationships within the database. Links between FlyBase and external databases, such as BDGP or modENCODE, provide opportunities for further exploration into other model organism databases and other resources of biological and molecular information. The FlyBase project is carried out by a consortium of Drosophila researchers and computer scientists at Harvard University and Indiana University in the United States, and University of Cambridge in the United Kingdom.
FlyBase is one of the organizations contributing to the Generic Model Organism Database (GMOD).
The FlyBase home page has requested a website access fee of US$150.00 per person per year, stating that "The NHGRI has reduced the funding of FlyBase by 50%".
Background
Drosophila melanogaster has been an experimental organism since the early 1900s, and has since been placed at the forefront of many areas of research. As this field of research spread and became global, researchers working on the same problems needed a way to communicate and monitor progress in the field. This niche was initially filled by community newsletters such as the Drosophila Information Service (DIS), which dates back to 1934, when the field was starting to spread from Thomas Hunt Morgan's lab. Material in these pages presented regular 'catalogs' of mutations, and bibliographies of the Drosophila literature. As computer infrastructure developed in the '80s and '90s, these newsletters gave way to and merged with internet mailing lists, and these eventually became online resources and data. In 1992, data on the genetics and genomics of D. melanogaster and related species were electronically available over the Internet through the FlyBase, BDGP (Berkeley Drosophila Genome Project) and EDGP (European Drosophila Genome Project) informatics groups. These groups recognized that most genome project and community data types overlapped. They decided it would be of value to present the scientific community with an integrated view of the data. In October 1992, the National Center for Human Genome Research of the NIH funded the FlyBase project with the objective of designing, building and releasing a database of genetic and molecular information concerning Drosophila melanogaster. FlyBase also receives support from the Medical Research Council, London. In 1998, the FlyBase consortium integrated the information into a single Drosophila genomics server. The FlyBase project was carried out by a consortium of Drosophila researchers and computer scientists at Harvard University, the University of Cambridge (UK), Indiana University and the University of New Mexico.
Contents
FlyBase contains a complete annotation of the Drosophila melanogaster genome that is updated several times per year. It also included a searchable bibliography of research on Drosophila genetics in the last century. Information on current researchers, and a partial pedigree of relationships between current researchers, was searchable, based on registration of the participating scientist. The site also provides a large database of images illustrating the full genome, and several movies detailing embryogenesis (ImageBrowser). The two major tributaries to the database are the large multispecies data sets deposited by the Drosophila 12 Genomes Consortium (Clark et al. 2007) and by Crosby et al. (2007).
Search Strategies—Gene reports for genes from all twelve sequenced Drosophila genomes are available in FlyBase. There are four main ways this data can be browsed: Precomputed Files, BLAST, Gbrowse, and Gene Report Pages. Gbrowse and precomputed files are for genome-wide analysis, bioinformatics, and comparative genomics. BLAST and gene report pages are for a specific gene, protein, or region across the species.
When looking for cytology, there are two main tools available. Use Cytosearch when looking for cytologically mapped genes or deficiencies that have not been molecularly mapped to the sequence. Use Gbrowse when looking for molecularly mapped sequences, insertions, or Affymetrix probes.
There are two main query tools in FlyBase. The first main query tool is called Jump to Gene (J2G). This is found in the top right of the blue navigation bar on every page of FlyBase. This tool is useful when you know exactly what you are looking for and want to go to the report page with that data. The second main query tool is called QuickSearch. This is located on the FlyBase homepage. This tool is most useful when you want to look up something quickly that you may only know a little about. Searching can be performed within D. melanogaster only or within all species. Data other than genes can be searched using the ‘data class’ menu.
Related research
The following provides two examples of research that is related to or uses FlyBase:
The first is a study of expressed genes from alate (meaning "having wings") Toxoptera citricida, more commonly known as the brown citrus aphid. The brown citrus aphid is considered the primary vector of citrus tristeza virus, a severe pathogen which causes losses to citrus industries worldwide. The winged form of this aphid can fly long distances with the wind, enabling it to spread the citrus tristeza virus in citrus growing regions. To better understand the biology of the brown citrus aphid and the emergence of genes expressed during wing development, researchers undertook a large-scale 5′ end sequencing project of cDNA clones from winged aphids. Similar large-scale expressed sequence tag (EST) sequencing projects from other insects have provided a vehicle for answering biological questions relating to development and physiology. Although there is a growing database in GenBank of ESTs from insects, most are from Drosophila melanogaster, with relatively few specifically derived from aphids. The researchers were able to provide a large data set of ESTs from the alate (winged) brown citrus aphid and have begun to analyze this valuable resource. They were able to do this with the help of information on Drosophila melanogaster in FlyBase. Putative sequence identity was determined using BLAST searches. Sequence matches with E-value exponents ≤ −10 were considered significant and were categorized according to the Gene Ontology (GO) classification system based on annotation of the 5 ‘best hit’ matches in BLASTX searches. All D. melanogaster matches were cataloged using FlyBase. Nearly all of these ‘best hit’ matches were characterized with respect to the functionally annotated genes in D. melanogaster using FlyBase. Genetic information is crucial to advancing the understanding of aphid biology, and will play a major role in the development of future non-chemical, gene-based control strategies against these insect pests.
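As an illustration of the kind of filtering described above, the sketch below selects significant BLASTX matches and keeps the five best hits per query. The file name, the 1e-10 threshold (one reading of the "exponent ≤ −10" criterion) and the use of BLAST's standard 12-column tabular output format are illustrative assumptions, not details taken from the aphid study or from FlyBase itself.

```python
from collections import defaultdict

# Assumed input: BLASTX results in the standard 12-column tabular format
# (query, subject, %identity, length, mismatches, gap opens, qstart, qend,
#  sstart, send, e-value, bit score).
E_VALUE_CUTOFF = 1e-10   # illustrative threshold (exponent <= -10)
BEST_HITS_PER_QUERY = 5  # mirrors the "5 best hit matches" described above

hits = defaultdict(list)
with open("aphid_vs_dmel.blastx.tab") as handle:  # hypothetical file name
    for line in handle:
        if not line.strip() or line.startswith("#"):
            continue
        fields = line.rstrip("\n").split("\t")
        query, subject = fields[0], fields[1]
        evalue, bitscore = float(fields[10]), float(fields[11])
        if evalue <= E_VALUE_CUTOFF:
            hits[query].append((bitscore, subject, evalue))

# Keep only the five highest-scoring D. melanogaster matches per EST;
# these identifiers could then be looked up against FlyBase gene reports.
for query, matches in hits.items():
    for bitscore, subject, evalue in sorted(matches, reverse=True)[:BEST_HITS_PER_QUERY]:
        print(query, subject, evalue, bitscore, sep="\t")
```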
Enhancing Drosophila Gene Ontology Annotation: What gene products do and where they do it are important questions for biologists. The Gene Ontology project was established 13 years ago in order to summarize this data consistently across different databases by using a common set of defined vocabulary terms. They also encode relationships between terms. The Gene Ontology Project is a major bioinformatics initiative with the aim of standardizing the representation of gene and gene product attributes across species and databases. The project also provides gene product annotation data from GO consortium members. FlyBase was one of the three founding members of the Gene Ontology Consortium. GO annotation comprises at least three components: a GO term that describes molecular function, biological role, or subcellular location; an "evidence code" that describes the type of analysis used to support the GO term; and an attribution to a specific reference. GO annotation is useful for both small-scale and large-scale analyses. It can provide a first indication of the nature of a gene product and, in conjunction with evidence codes, point directly to papers with pertinent experimental data. The current priorities for annotation are: homologs of human disease genes, genes that are highly conserved across species, genes involved in biochemical/signaling pathways, and topical genes shown to be of significant interest in recent publications. FlyBase has been contributing GO annotations to the project since it started in August 2006. GO annotations appear on the Gene Report page in FlyBase. GO data are searchable in FlyBase using both TermLink and QueryBuilder. The GO is dynamic and can change on a daily basis, for example the addition of new terms. To keep up, FlyBase loads a new version of the GO every one or two releases of FlyBase. The GO annotation set is submitted to the GOC at the same time as a new version of FlyBase is released.
See also
List of Drosophila databases
Model Organism Databases
WormBase
Xenbase
Notes and references
External links
Official Site
Drosophila melanogaster genetics
Insect developmental biology
Model organism databases | FlyBase | [
"Biology"
] | 2,008 | [
"Model organism databases",
"Model organisms"
] |
11,893,609 | https://en.wikipedia.org/wiki/Batyr | Batyr (May 24, 1970 – August 26, 1993) was an Asian elephant claimed to be able to use a large amount of meaningful human speech. Living in a zoo in Kazakhstan in the Soviet Union, Batyr was reported as having a vocabulary of more than 20 phrases.
A recording of Batyr saying "Batyr is good", his name and using words such as drink and give was played on Kazakh state radio and on the Soviet Central Television programme Vremya in 1980.
As in all cases of talking animals, these claims are subject to the observer-expectancy effect.
Biography
Born on May 24, 1970, at Almaty Zoo, Batyr lived his entire life in the Karaganda Zoo at Karaganda in Kazakhstan. He died in 1993. Batyr was the offspring of once-wild Indian elephants (a subspecies of the Asian elephant) and was the second child of his mother Palm (1959–1998) and father Dubas (1959–1978), presented to Kazakhstan's Almaty Zoo by the Indian Prime Minister Jawaharlal Nehru. The first baby elephant (Batyr's elder brother) was killed by his mother immediately after birth on May 15, 1968.
Abilities
Batyr, whose name is a Turkic word meaning 'dashing equestrian', 'man of courage' or 'athlete', was first alleged to speak just before New Year's Day in the winter of 1977 when he was seven years old. Zoo employees were the first to notice his "speech", but he soon delighted zoo-goers at large by appearing to ask his attendants for water and regularly praising or (infrequently) chastising himself. By 1979, his fame as the "speaking elephant" had spread in the wake of various mass-media stories about his abilities, many containing considerable fabrication and wild conjecture. Batyr's case was also included in several books on animal behaviour, and in the proceedings of several scientific conferences. These developments drew a spate of zoo visitors, and brought the offer of an exchange—Batyr for a rare bonobo—from the Czechoslovak Circus, an offer rejected by the zoo's employees.
A. N. Pogrebnoj-Aleksandroff, a young worker at the zoo who studied Batyr's abilities and wrote many publications about him, said of the elephant:
On the level of natural blares, [Batyr] said words (including human slang) by manipulating his trunk. By putting the trunk in his mouth, pressing a tip of the trunk to the bottom of the jaw and manipulating the tongue, [the elephant] said words. Besides, being in a corner of the cage (frequently at night) with the trunk softly hanging down, the elephant said words almost silently—a sound comparable with the sound of ultrasonic devices used against mosquitoes or the peep of mosquitoes, which human hearing hears well until approximately the age of 40. While pronouncing words, only the tip of the elephant's trunk is clamped inside [the mouth] and Batyr made subtle movements with a finger-shaped shoot on the trunk tip.
Various audiovisual recordings were made during Pogrebnoj-Aleksandroff's studies of Batyr and some of these have been transferred to Russia's Moscow State University for further study.
Death
Batyr died in 1993 when zookeepers accidentally gave him an overdose of sedatives. His death was reported worldwide.
Lexicon
It is claimed that Batyr had a vocabulary of about 20 words in the Russian and Kazakh languages. He reportedly imitated the sounds of other animals and uttered short phrases, including words of human slang. Batyr's lexicon list was compiled from audiovisual records, scientific research, and statistical data from eyewitnesses who heard the elephant themselves. Individual and disputable sounds were not considered. All other words reported by the media were treated as fiction, second-hand accounts, or interpretations of retellings (for example, if Batyr were heard to say "water" and the media reported it as "the elephant asked to drink").
Full list of words and phrases reported to have been spoken by Batyr:
Using trunk in mouth:
: 'Batyr', said abruptly;
: 'I'm', said very abruptly, in combination with his name, using long pronunciation; I'm-Batyr sounded almost together;
: 'Batyr', said thoughtfully-tenderly and lingeringly;
…: 'Batyr, Batyr, Batyr', joyfully running in a cage;
: , an affectionate version of the name Batyr;
: 'water', a request;
: 'good', as in good fellow;
: 'good Batyr';
: 'Oh-yo', sonorously;
: 'fool', seldom and abruptly;
: 'bad', rarely;
: 'bad Batyr', rarely;
: 'go';
: 'go to hell', obscene Russian phrase; said for the first and only time during a telecast shooting;
: Russian curse word for 'penis', seldom and abruptly;
: short form of 'grandmother'; short children's sound ;
: 'yes';
: 'give (me)';
: 'give, give, give';
: 'one, two, three', while dancing, turning and hopping.
Other sounds:
A human-like whistle;
Human speech allegedly uttered at infrasonic and ultrasonic frequencies;
A gnashing sound imitative of rubber or polyfoam (foam plastic) on glass;
The peep of rats or mice;
The bark of dogs;
The natural trumpeting of elephants.
Press
Reporter Richard Beeston in Moscow wrote the article "Soviet Zoo Has Talking Elephant".
Publication
Scientific
Scientific conference, Agricultural Institute, Tselinograd, in Kazakhstan, 1983–1989
The International Practical Science conference for the anniversary of Moscow Zoo, in Russia, 1984
The International Practical Science conference for the anniversary of the Almaty Zoo, in Kazakhstan, 1987
The International Practical Science conference in Tallinn, Estonia, 1989
The International zoological conference; Institute of Zoology — Academy of Science, Ukraine, 1989
The International Practical Science conference for the 125th anniversary of Leningrad Zoo, Saint Petersburg, Russia, 1990
In books
The True History or Who is Talking? An Elephant!, Dr. A. Pogrebnoj-Alexandroff, 1979–1993.
Reincarnation-Перевоплощение, Dr. A. Pogrebnoj-Alexandroff, 2001.
Speaking Animals, A. Dubrov, 2001.
Speaking Birds and Animals, O. Silaeva, V. Ilyichev, A. Dubrov, 2005.
Media
Student documentary film: Who speaks? The elephant… — VGIK — Moscow (USSR)
Audio recording of Batyr's voice by scientist and writer Dr. A. Pogrebnoj-Alexandroff (1979–1983)
See also
Alex (parrot)
Animal language
Koko (gorilla)
Kosik (elephant)
List of individual elephants
N'kisi
Talking animal
Talking birds
Washoe (chimpanzee)
References
1970 animal births
1993 animal deaths
Ethology
Talking animals
Zoos in Kazakhstan
Karaganda
Individual animals in Kazakhstan
1980 in the Soviet Union
Individual Asian elephants | Batyr | [
"Biology"
] | 1,503 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
11,893,918 | https://en.wikipedia.org/wiki/Powerchip | Powerchip Semiconductor Manufacturing Corporation (PSMC) manufactures and sells semiconductor products, in particular memory chips and other integrated circuits. As of 2023, the company was the 8th largest semiconductor foundry in the world with four 12 inch and two 8 inch wafer labs. The company offers foundry services as well as design, manufacturing and test services. It was formerly known as Powerchip Semiconductor Corp. and changed its name in June 2010. Powerchip Technology Corporation was founded in 1994 and is headquartered in Hsinchu, Taiwan.
Overview
In 2017, its net profit was NT$8.08 billion. The company plans to invest NT$278 billion (US$9.04 billion) to build two new 12-inch wafer plants in Hsinchu Science Park, with construction scheduled to start in 2020.
In March 2021, Powerchip broke ground on a new factory in Miaoli County that will manufacture chips with 45-nanometer and 50-nanometer technologies. The plant will employ an additional 3,000 workers.
Powerchip is a significant supplier to the automotive industry.
In 2024, the company partnered with India's Tata Group to build its first semiconductor fab outside Taiwan, a project worth NT$350 billion (US$11 billion).
Other Countries
China
In May 2015, PSMC and the Hefei City Construction and Investment Holding Group established Nexchip in China as a joint venture.
Japan
In 2023, together with SBI Holdings and Japan's Miyagi Prefecture, Powerchip Semiconductor Manufacturing officially confirmed that JSMC's first fabrication facility will be located at the Second Northern Sendai Central Industrial Park (Miyagi).
India
PSMC is cooperating with Tata Electronics to build a 12-inch wafer fabrication plant in Dholera, Gujarat.
See also
List of semiconductor fabrication plants
List of companies of Taiwan
References
Computer companies of Taiwan
Computer hardware companies
Semiconductor companies of Taiwan
Foundry semiconductor companies
Manufacturing companies based in Hsinchu
Electronics companies established in 1994
Companies listed on the Taiwan Stock Exchange
Private equity portfolio companies
Taiwanese brands
Taiwanese companies established in 1994 | Powerchip | [
"Technology"
] | 424 | [
"Computer hardware companies",
"Computers"
] |
11,894,762 | https://en.wikipedia.org/wiki/Polyad | In mathematics, polyad is a concept of category theory introduced by Jean Bénabou in generalising monads. A polyad in a bicategory D is a bicategory morphism Φ from a locally punctual bicategory C to D, . (A bicategory C is called locally punctual if all hom-categories C(X,Y) consist of one object and one morphism only.) Monads are polyads where C has only one object.
Notes
Bibliography
Category theory | Polyad | [
"Mathematics"
] | 113 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
11,894,889 | https://en.wikipedia.org/wiki/Cellular%20microarray | A cellular microarray (or cell microarray) is a laboratory tool that allows for the multiplex interrogation of living cells on the surface of a solid support. The support, sometimes called a "chip", is spotted with varying materials, such as antibodies, proteins, or lipids, which can interact with the cells, leading to their capture on specific spots. Combinations of different materials can be spotted in a given area, allowing not only cellular capture, when a specific interaction exists, but also the triggering of a cellular response, change in phenotype, or detection of a response from the cell, such as a specific secreted factor.
There are a large number of types of cellular microarrays:
Reverse transfection cell microarrays. David M. Sabatini's laboratory developed reverse-transfection cell microarrays at the Whitehead Institute, publishing their work in 2001.
PMHC Cellular Microarrays. This type of microarray was developed by Daniel Chen, Yoav Soen, Dan Kraft, Patrick Brown and Mark Davis at Stanford University Medical Center.
References
Chen DS, Davis MM (2006) Molecular and functional analysis using live cell microarrays. Curr Opin Chem Biol 10:28-34
Chen DS, Soen Y, Stuge TB, Lee PP, Weber JS, Brown PO, Davis MM (2005) Marked Differences in Human Melanoma Antigen-Specific T Cell Responsiveness after Vaccination Using a Functional Microarray. PLoS Med 2: 10: e265
Soen Y., Chen D. S., Kraft D. L., Davis M. M. and Brown P.O. (2003) Detection and characterization of cellular immune responses using peptide-MHC microarrays. PLoS Biol. 1: E65 (http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0000065)
Chen DS, Davis MM (2005) Cellular immunotherapy: Antigen recognition is just the beginning. Springer Semin Immunopathol 27:119–127
Chen DS, Soen Y, Davis MM, Brown PO (2004) Functional and molecular profiling of heterogeneous tumor samples using a novel cellular microarray. J Clin Oncol 22:9507 (https://web.archive.org/web/20041020122342/http://meeting.jco.org/cgi/content/abstract/22/14_suppl/9507)
Soen Y, Chen DS, Stuge TB, Weber JS, Lee PP, et al. (2004) A novel cellular microarray identifies functional deficiences in tumor-specific T cell responses. J Clin Oncol 22:2510
Ziauddin J, Sabatini DM (2001) Microarrays of cells expressing defined cDNAs. Nature. 2001 May 3;411(6833):107-10.
Biotechnology | Cellular microarray | [
"Biology"
] | 649 | [
"nan",
"Biotechnology"
] |
11,895,109 | https://en.wikipedia.org/wiki/MicroMegas%20detector | The MicroMegas detector (Micro-Mesh Gaseous Structure) is a gaseous particle detector and an advancement of the wire chamber. Invented in 1996 by Georges Charpak and Ioannis Giomataris, Micromegas detectors are mainly used in experimental physics, in particular in particle physics, nuclear physics and astrophysics for the detection of ionizing particles.
Micromegas detectors are used to detect passing charged particles and obtain properties such as position, arrival time and momentum. The advantage of the Micromegas technology is its high gain of 10⁴ combined with short response times, on the order of 100 ns. This is realized by dividing the gas chamber with a microscopic mesh, which makes the Micromegas detector a micropattern gaseous detector. In order to minimize the perturbation on the impinging particle, the detector is just a few millimeters thick.
Working principle
Ionization and charge amplification
While passing through the detector, a particle ionizes the gas, resulting in an electron/ion pair. Due to an electric field on the order of 400 V/cm, the pair does not recombine, and the electron drifts toward the amplification electrode (the mesh) and the ion toward the cathode. Close to the mesh, the electron is accelerated by an intense electric field, typically on the order of 40 kV/cm in the amplification gap. This creates more electron/ion pairs, resulting in an electron avalanche. A gain on the order of 10⁴ creates a sufficiently large signal to be read out by the intended electrode. The readout electrode is usually segmented into strips and pixels in order to reconstruct the position of the impinging particle. The amplitude and shape of the signal allow users to obtain information about the arrival time and energy of the impinging particle.
Analog signal of a Micromegas
The signal is induced by the movement of charges in the volume between the micro-mesh and the readout electrode, called the amplification gap. The roughly 100 ns long signal consists of a fast electron peak and a slower ion tail. Since the electron mobility in the gas is over 1000 times higher than the ion mobility, the electron signal is registered much faster than the ionic signal. The electron signal allows a precise measurement of the arrival time, while the ionic signal is necessary to reconstruct the energy of the particle.
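To make the split between the electron peak and the ion tail concrete, here is a hedged back-of-the-envelope sketch in Python. It combines numbers quoted in this article (gain of order 10⁴, roughly 100 ns signal, mobility ratio above 1000) with two standard assumptions that are not from the article: an exponential Townsend avalanche across the amplification gap and the uniform weighting field of a parallel-plate geometry (Ramo's theorem).

```python
import math

# Illustrative estimate of how the induced charge divides between the fast
# electron peak and the slow ion tail.  Gain, signal length and mobility
# ratio are taken from the article; the avalanche profile and weighting
# field are textbook assumptions.
gain = 1e4               # gain on the order of 10^4
ion_transit_ns = 100.0   # take the ~100 ns signal length as the ion transit time
mobility_ratio = 1000.0  # electron mobility > 1000 x ion mobility

alpha_d = math.log(gain)  # Townsend coefficient times gap width: exp(alpha*d) = gain

# Ramo's theorem in a parallel-plate gap: a carrier drifting a distance s
# induces a charge fraction s/d.  Integrating over an exponential avalanche,
# the electrons (created mostly near the anode) induce a fraction
# (1 - exp(-alpha*d)) / (alpha*d) of the total charge; the ions carry the rest.
electron_fraction = (1.0 - math.exp(-alpha_d)) / alpha_d
ion_fraction = 1.0 - electron_fraction

electron_transit_ns = ion_transit_ns / mobility_ratio

print(f"electron peak: ~{electron_fraction:.0%} of the charge, within ~{electron_transit_ns:.1f} ns")
print(f"ion tail:      ~{ion_fraction:.0%} of the charge, over ~{ion_transit_ns:.0f} ns")
```

Under these assumptions the electrons induce roughly 10% of the signal charge within a fraction of a nanosecond, and the ions induce the remaining ~90% over the ~100 ns tail, consistent with the qualitative description above.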
History
First concept at the Hadron Blind Detector
In 1991, to improve the detection of hadrons in the Hadron Blind Detector (HBD) experiment, I. Giomataris and G. Charpak reduced the amplification gap of a parallel-plate spark chamber in order to shorten the response time. A 1 mm amplification gap prototype was built for the HBD experiment, but the gain was not uniform enough to be used in the experiment. The millimeter gap was not controlled well enough and created large gain fluctuations. Nevertheless, the benefits of a reduced amplification gap had been demonstrated, and the Micromegas concept was born in October 1992, shortly before the announcement of the Nobel Prize awarded to Georges Charpak for the invention of the wire chambers. Georges Charpak used to say that this detector and some other new concepts belonging to the family of micro-pattern gaseous detectors (MPGDs) would revolutionize nuclear and particle physics just as his detector had done.
The Micromegas technology research and development
Starting in 1992 at CEA Saclay and CERN, the Micromegas technology has been developed to provide more stable, reliable, precise and faster detectors. In 2001, twelve large Micromegas detectors of 40 × 40 cm² were used for the first time in a large scale experiment at COMPASS situated on the Super Proton Synchrotron accelerator at CERN.
Another example of the development of the Micromegas detectors is the invention of the “bulk” technology. The “bulk” technology consists of the integration of the micro-mesh with the printed circuit board carrying the readout electrodes in order to build a monolithic detector. Such a detector is very robust and can be produced via an industrial process (a successful implementation was demonstrated by 3M in 2006) allowing public applications. For instance, by modifying the micro-mesh in order to make it photo-sensitive to UV light, Micromegas detectors can be used to detect forest fires. A photo-sensitive Micromegas is also used for fast-timing applications. The PICOSEC-Micromegas uses a Cherenkov radiator and a photocathode in front of the gaseous volume and a time resolution of 24 ps is measured with minimum ionizing particles.
Micromegas detectors in experimental physics
Micromegas detectors are used in several experiments :
Hadronic physics: COMPASS, NA48, and projects for the ILC-TPC and CLAS12 at J-lab are under active study
Particle physics: T2K, CAST, HELAZ, IAXO
Neutron physics: nTOF, ESS nBLM
Micromegas detectors will be used in the ATLAS experiment as part of the planned upgrade of its muon spectrometer.
See also
Gaseous ionization detector
Micropattern gaseous detector
Gas electron multiplier
Notes and references
Particle detectors | MicroMegas detector | [
"Technology",
"Engineering"
] | 1,067 | [
"Particle detectors",
"Measuring instruments"
] |
11,895,648 | https://en.wikipedia.org/wiki/QMAP | QMAP was a balloon experiment to measure the anisotropy of the cosmic microwave background (CMB). It flew twice in 1996, and was used with an interlocking scan of the skies to produce CMB maps at angular scales between 0.7° and 9°.
The gondola was later used for ground-based observations in the MAT/TOCO experiment.
See also
Cosmic microwave background experiments
Observational cosmology
References
Physics experiments
Cosmic microwave background experiments
Balloon-borne telescopes | QMAP | [
"Physics"
] | 103 | [
"Experimental physics",
"Physics experiments"
] |
11,895,662 | https://en.wikipedia.org/wiki/Archeops | Archeops was a balloon-borne instrument dedicated to measuring the Cosmic microwave background (CMB) temperature anisotropies. The study of this radiation is essential to obtain precise information on the evolution of the Universe: density, Hubble constant, age of the Universe, etc. To achieve this goal, measurements were done with devices cooled down at 100mK temperature placed at the focus of a warm telescope. To avoid atmospheric disturbance the whole apparatus is placed on a gondola below a helium balloon that reaches 40 km altitude.
Archeops has four bands in the millimeter domain (143, 217, 353 and 545 GHz) with a high angular resolution (about 15 arcminutes) in order to constrain small anisotropy scales, as well as a large sky coverage fraction (30%) in order to minimize the intrinsic cosmic variance.
Instrument and flights
The instrument was designed by adapting concepts put forward for the High Frequency Instrument of Planck surveyor (Planck-HFI) and using balloon-borne constraints.
Namely, it consists of an open ³He–⁴He dilution cryostat cooling spiderweb-type bolometers at 100 mK; cold individual optics with horns at different temperature stages (0.1, 1.6 and 10 K); and an off-axis Gregorian telescope.
The CMB signal is measured by the 143 and 217 GHz detectors while interstellar dust emission and atmospheric emission are monitored with the 353 (polarized) and 545 GHz detectors.
The whole instrument is baffled so as to avoid stray radiation from the Earth and the balloon.
To cover as much as 30% of the sky, the payload was spinning mostly above the atmosphere, scanning the sky in circles with a fixed elevation of roughly 41 degrees. The gondola, at a float altitude above 32 km, spins across the sky at a rate of 2 rpm, which, combined with the Earth's rotation, produces a well-sampled sky at each frequency.
Archeops flew for the first time from Trapani (Sicily) with four hours of integration time. The upgraded instrument was then launched three times from the Esrange base near Kiruna (Sweden) by CNES during two consecutive winter seasons (2001 and 2002). The last and best flight, on 7 February 2002, yielded 12.5 hours of CMB-type data (at ceiling altitude and by night) out of a 19-hour total. The balloon landed in Siberia and was recovered (with its precious data recorded on board) by a Franco-Russian team in −40 °C weather.
Results
Archeops has linked, for the first time and before WMAP, the large angular scales (previously measured by COBE) to the first acoustic peak region.
From its results, inflation-motivated cosmologies have been reinforced with a flat Universe (total energy density Ω_tot = 1 within 3%).
When combined with complementary cosmological datasets regarding the value of Hubble's constant, Archeops gives constraints on the dark energy density and the baryonic density in very good agreement with other independent estimations based on supernovae measurements and big bang nucleosynthesis.
Archeops has given the first polarized maps of the galactic dust emission with this resolution.
References
Astronomical instruments
Cosmic microwave background experiments
Balloon-borne telescopes | Archeops | [
"Astronomy"
] | 682 | [
"Astronomical instruments"
] |
11,896,846 | https://en.wikipedia.org/wiki/TRPML | TRPML (transient receptor potential cation channel, mucolipin subfamily) comprises a group of three evolutionarily related proteins that belongs to the large family of transient receptor potential ion channels. The three proteins TRPML1, TRPML2 and TRPML3 are encoded by the mucolipin-1 (MCOLN1), mucolipin-2 (MCOLN2) and mucolipin-3 (MCOLN3) genes, respectively.
The three members of the TRPML ("ML" for mucolipin) sub-family are not extremely well characterized. TRPML1 is known to be localized in late endosomes. This subunit also contains a lipase domain between its S1 and S2 segments. While the function of this domain is unknown, it has been proposed to be involved in channel regulation. Physiological studies have described TRPML1 channels as proton leak channels in lysosomes responsible for preventing these organelles from becoming too acidic. TRPML2 and TRPML3 are more poorly characterized than TRPML1.
Deficiencies can lead to enlarged vesicles.
Genes
(TRPML1)
(TRPML2)
(TRPML3)
References
External links
Membrane proteins
Ion channels | TRPML | [
"Chemistry",
"Biology"
] | 267 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
11,896,851 | https://en.wikipedia.org/wiki/TRPP | TRPP (transient receptor potential polycystic) is a family of transient receptor potential ion channels which when mutated can cause polycystic kidney disease.
Subcategories
TRPP subunits can be divided into two subcategories depending on structural similarity.
Polycystic Kidney Disease 1 (PKD1)-Like Group
The first group, polycystic kidney disease 1 (PKD1)-like, contains polycystin-1 (Previously known as TRPP1), PKDREJ, PKD1L1, PKD1L2, and PKD1L3. Polycystin-1 contains numerous N-terminal adhesive domains that are important for cell-cell contact. This group of subunits also contain a large extracellular domain with numerous polycystin motifs. These motifs are of unknown function and are located between the S6 and S7 segments. The large intracellular C-terminal segment of TRPP1 seems to interact with TRPP2 to act as a signaling complex.
Polycystic Kidney Disease 2 (PKD2)-Like Group
This group of TRPP members (previously known as TRPP2-like) are: TRPP1 (previously known as TRPP2 or PKD2), TRPP2 (previously known as TRPP3 or PKDL2), and TRPP3 (previously known as TRPP5 or polycystin-L2). Unlike the previous group, which contain 11 membrane-spanning segments, this group resemble other TRP channels, having 6 membrane-spanning segments with intracellular N- and C-termini. All of the members of this group contain a coiled coil region in their C-terminus involved in the interaction with the polycystin-1 group. TRPP1 and TRPP3 form constitutively active cation-selective ion channels that are permeable to calcium. TRPP2 has also been implicated in sour taste perception. Coupling of PKD1 and TRPP1 recruits TRPP1 to the membrane. Here, its activity is decreased and it suppresses the activation of G proteins by PKD1.
Genes
Group 1: polycystic kidney disease 1 (PKD1) like proteins
PKD1
Group 2: polycystic kidney disease 2 (PKD2) like proteins
TRPP1
TRPP2
TRPP3
See also
Polycystic kidney disease
References
External links
Membrane proteins
Ion channels | TRPP | [
"Chemistry",
"Biology"
] | 516 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
11,896,859 | https://en.wikipedia.org/wiki/TRPA%20%28ion%20channel%29 | TRPA is a family of transient receptor potential ion channels. The TRPA family is made up of 7 subfamilies: TRPA1, TRPA- or TRPA1-like, TRPA5, painless, pyrexia, waterwitch, and HsTRPA. TRPA1 is the only subfamily widely expressed across animals, while the other subfamilies (collectively referred to as the basal clade) are largely absent in deuterostomes (and in the case of HsTRPA, only expressed in hymenopteran insects).
TRPA1s have been the most extensively studied subfamily; they typically contain 14 N-terminal ankyrin repeats and are believed to function as mechanical stress, temperature, and chemical sensors. TRPA1 is known to be activated by compounds such as isothiocyanates (the pungent chemicals in substances such as mustard oil and wasabi) and Michael acceptors (e.g. cinnamaldehyde). These compounds are capable of forming covalent chemical bonds with the protein's cysteines. Non-covalent activators of TRPA1 also exist, such as methyl salicylate, menthol, and the synthetic compound PF-4840154.
The thermal sensitivity of TRPAs varies by species. For example, TRPA1 functions as a high-temperature sensor in insects and snakes, but as a cold sensor in mammals. The basal TRPAs have evolved some degree of thermal sensitivity as well: painless and pyrexia function in high-temperature sensing in Drosophila melanogaster, and the honey bee HsTRPA underwent neofunctionalization following its divergence from waterwitch, gaining function as a high-temperature sensor.
TRPA1's promiscuity with respect to sensory modality has been a source of controversy, particularly when considering its ability to detect cold. More recent work has alternatively (or additionally) proposed that reactive oxygen species activate TRPA1 across species.
References
External links
Membrane proteins
Ion channels | TRPA (ion channel) | [
"Chemistry",
"Biology"
] | 426 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
11,897,205 | https://en.wikipedia.org/wiki/Polycystin%201 | Polycystin 1 (PC1) is a protein that in humans is encoded by the PKD1 gene. Mutations of PKD1 are associated with most cases of autosomal dominant polycystic kidney disease, a severe hereditary disorder of the kidneys characterised by the development of renal cysts and severe kidney dysfunction.
Protein structure and function
PC1 is a membrane-bound protein 4303 amino acids in length expressed largely upon the primary cilium, as well as apical membranes, adherens junctions, and desmosomes. It has 11 transmembrane domains, a large extracellular N-terminal domain, and a short (about 200 amino acid) cytoplasmic C-terminal domain. This intracellular domain contains a coiled-coil domain through which PC1 interacts with polycystin 2 (PC2), a membrane-bound Ca2+-permeable ion channel.
PC1 has been proposed to act as a G protein–coupled receptor. The C-terminal domain may be cleaved in a number of different ways. In one instance, a ~35 kDa portion of the tail has been found to accumulate in the cell nucleus in response to decreased fluid flow in the mouse kidney. In another instance, a 15 kDa fragment may be yielded, interacting with transcriptional activator and co-activator STAT6 and p100, or components of the canonical Wnt signaling pathway in an inhibitory manner.
The structure of the human PKD1-PKD2 complex has been solved by cryo-electron microscopy, which showed a 1:3 ratio of PKD1 and PKD2 in the structure. PKD1 consists of a voltage-gated ion channel fold that interacts with PKD2.
PC1 mediates mechanosensation of fluid flow by the primary cilium in the renal epithelium and of mechanical deformation of articular cartilage.
Gene
Splice variants encoding different isoforms have been noted for PKD1. The gene is closely linked to six pseudogenes in a known duplicated region on chromosome 16p.
References
External links
GeneReviews/NIH/NCBI/UW entry on Polycystic Kidney Disease, Autosomal Dominant
EF-hand-containing proteins
Ion channels | Polycystin 1 | [
"Chemistry"
] | 480 | [
"Neurochemistry",
"Ion channels"
] |
11,897,423 | https://en.wikipedia.org/wiki/TRPM6 | TRPM6 is a transient receptor potential ion channel associated with hypomagnesemia with secondary hypocalcemia.
See also
TRPM
Ruthenium red
References
Further reading
External links
Ion channels | TRPM6 | [
"Chemistry"
] | 43 | [
"Neurochemistry",
"Ion channels"
] |
11,897,510 | https://en.wikipedia.org/wiki/TRPC6 | Transient receptor potential cation channel, subfamily C, member 6 or Transient receptor potential canonical 6, also known as TRPC6, is a protein encoded in the human by the TRPC6 gene. TRPC6 is a transient receptor potential channel of the classical TRPC subfamily.
TRPC6 channels are nonselective cation channels that respond directly to diacylglycerol (DAG), a product of phospholipase C activity. This activation leads to cellular depolarization and calcium influx.
Unlike the closely related TRPC3 channels, TRPC6 channels possess the distinctive ability to transport heavy metal ions. TRPC6 channels facilitate the transport of zinc ions, promoting their accumulation inside cells.
In addition, despite being non-selective, TRPC6 exhibits a strong preference for calcium ions, with a permeability ratio of calcium to sodium (PCa/PNa) of roughly six. This selectivity is significantly higher than that of TRPC3, which displays a weaker preference for calcium, with a PCa/PNa ratio of only 1.1.
Function
TRPC6 channels are widely distributed in the human body and are emerging as crucial regulators of several key physiological functions:
In blood vessels
Small arteries and arterioles exhibit a self-regulatory mechanism called myogenic tone, enabling them to maintain relatively stable blood flow despite fluctuating intravascular pressures. When intravascular pressure within a small artery or arteriole increases, the vessel walls automatically constrict. This narrowing reduces blood flow, effectively counteracting the rising pressure and stabilizing overall flow. Conversely, if blood pressure suddenly drops, vasodilation occurs to allow more blood flow and compensate for the decrease.
TRPC6 channels are present in both endothelial and smooth muscle cells, and their function is similar to that of α‑adrenoreceptors; both are involved in vasoconstriction. However, TRPC6-mediated vasoconstriction is mechanosensitive (i.e. activated by mechanical stimulation), and these channels are involved in the maintenance of the myogenic tone of blood vessels and the autoregulation of blood flow.
When intravascular blood pressure rises, this causes stretching of the walls of blood vessels. This mechanical stretch activates the TRPC6 channel. Once activated, TRPC6 allows Ca2+ to enter the smooth muscle cells. This increase in intracellular Ca2+ triggers a chain reaction leading to vasoconstriction.
In the kidneys
TRPC6 channels are extensively present throughout the kidney, both in the tubular segments and the glomeruli. Within the glomeruli, expression of TRPC6 is primarily concentrated in podocytes. Despite being extensively expressed throughout the kidneys and despite the established link between TRPC6 over-activation and kidney pathologies, the physiological roles of this channel in healthy kidney function remain less understood. Podocytes normally display minimal baseline activity of TRPC6 channels and TRPC6 knockout mice have not shown any evident changes in glomerular structure or filtration.
Nevertheless, it has been hypothesized that the function of TRPC6 channels in podocytes resembles their function in smooth muscles of blood vessels.
Glomerular capillaries operate under significantly higher pressure than most other capillary beds. When podocytes are stretched by glomerular capillary pressure, mechanosensitive TRPC6 channels trigger a surge in Ca2+ influx into podocytes, causing them to contract. This podocyte contraction exerts a force that opposes capillary wall overstretching and distention, that would otherwise lead to protein leakage.
However, in order to control the degree of podocyte contraction and maintain blood vessel patency, the influx of Ca2+ mediated by TRPC6 channels is accompanied by an increase in the activity of big potassium (BK) channels, leading to the efflux of K+. BK channel activation and the resultant K+ efflux mitigate and counteract the depolarization induced by TRPC6 activation, potentially serving as a protective mechanism through regulation of membrane depolarization and limiting podocyte contraction.
In the central nervous system
Research of learning and memory mechanisms suggests that a continuous increase in the strength of synaptic transmission is necessary to achieve long-term modification of neural network properties and memory storage. TRPC6 appears to be essential for the formation of an excitatory synapse; overexpressing TRPC6 greatly increased dendritic spine density and the level of synapsin I and PSD-95 cluster, known as the pre- and postsynaptic markers.
TRPC6 has also been proven to participate in neuroprotection and its neuroprotective effect could be explained due to the antagonism of extrasynaptic NMDA receptor (NMDAR)-mediated intracellular calcium overload. TRPC6 activates calcineurin, which impedes the NMDAR activity.
Hyperactivation of NMDAR is a critical event in glutamate-driven excitotoxicity that causes a rapid increase in intracellular calcium concentration. Such rapid increases in cytoplasmic calcium concentrations may activate and over-stimulate a variety of proteases, kinases, endonucleases, etc. This downstream neurotoxic cascade may trigger severe damage to neuronal functioning. Hyperactivation of NMDAR is frequently observed during brain ischemia and late stage Alzheimer's disease.
Clinical significance
Since TRPC6 channels play a multifaceted role by participating in various signaling pathways, these channels are emerging as key players in the pathogenesis of a wide range of diseases including:
Kidney diseases
Disorders of the nervous system
Cancers
Cardiovascular diseases
Pulmonary diseases
Interactions
TRPC6 has been shown to interact with:
FYN,
TRPC2, and
TRPC3.
Ligands
Two of the primary active constituents responsible for the antidepressant and anxiolytic benefits of Hypericum perforatum, also known as St. John's Wort, are hyperforin and adhyperforin. These compounds are inhibitors of the reuptake of serotonin, norepinephrine, dopamine, γ-aminobutyric acid, and glutamate, and they are reported to exert these effects by binding to and activating TRPC6. Recent results with hyperforin have cast doubt on these findings as similar currents are seen upon Hyperforin treatment regardless of the presence of TRPC6.
References
Further reading
External links
Membrane proteins
Ion channels | TRPC6 | [
"Chemistry",
"Biology"
] | 1,361 | [
"Neurochemistry",
"Ion channels",
"Protein classification",
"Membrane proteins"
] |
11,897,618 | https://en.wikipedia.org/wiki/Stretchable%20electronics | Stretchable electronics, also known as elastic electronics or elastic circuits, is a group of technologies for building electronic circuits by depositing or embedding electronic devices and circuits onto stretchable substrates such as silicones or polyurethanes, to make a completed circuit that can experience large strains without failure. In the simplest case, stretchable electronics can be made by using the same components used for rigid printed circuit boards, with the rigid substrate cut (typically in a serpentine pattern) to enable in-plane stretchability. However, many researchers have also sought intrinsically stretchable conductors, such as liquid metals.
One of the major challenges in this domain is designing the substrate and the interconnections to be stretchable, rather than flexible (see Flexible electronics) or rigid (Printed Circuit Boards). Typically, polymers are chosen as substrates or as materials to embed.
When bending the substrate, the outermost radius of the bend will stretch (see Strain in an Euler–Bernoulli beam), subjecting the interconnects to high mechanical strain. Stretchable electronics often attempts biomimicry of human skin and flesh: being stretchable while retaining full functionality. Stretchable electronics opens up the design space for products, including sensitive electronic skin for robotic devices and in vivo implantable sponge-like electronics.
Stretchable Skin electronics
Mechanical Properties of Skin
Skin is composed of collagen, keratin, and elastin fibers, which provide robust mechanical strength, low modulus, tear resistance, and softness. The skin can be considered as a bilayer of epidermis and dermis. The epidermal layer has a modulus of about 140-600 kPa and a thickness of 0.05-1.5 mm. Dermis has a modulus of 2-80 kPa and a thickness of 0.3–3 mm. This bilayer skin exhibits an elastic linear response for strains less than 15% and a non linear response at larger strains. To achieve conformability, it is preferable for devices to match the mechanical properties of the epidermis layer when designing skin-based stretchy electronics.
Tuning Mechanical Properties
Conventional high performance electronic devices are made of inorganic materials such as silicon, which is rigid and brittle in nature and exhibits poor biocompatibility due to mechanical mismatch between the skin and the device, making skin integrated electronics applications difficult. To solve this challenge, researchers employed the method of constructing flexible electronics in the form of ultrathin layers. The resistance to bending of a material object (Flexural rigidity) is related to the third power of the thickness, according to the Euler-Bernoulli equation for a beam. It implies that objects with less thickness can bend and stretch more easily. As a result, even though the material has a relatively high Young's modulus, devices manufactured on ultrathin substrates exhibit a decrease in bending stiffness and allow bending to a small radius of curvature without fracturing. Thin devices have been developed as a result of significant advancements in the field of nanotechnology, fabrication, and manufacturing. The aforementioned approach was used to create devices composed of 100-200 nm thick silicon (Si) nano membranes deposited on thin flexible polymeric substrates.
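The cubic dependence on thickness can be made concrete with a quick calculation. The sketch below is illustrative only: it uses the standard plate flexural rigidity D = E·t³ / (12(1 − ν²)), and the silicon modulus and Poisson ratio are typical textbook values rather than figures taken from the studies mentioned above.

```python
# Illustrative sketch: how plate bending stiffness scales with thickness.
# Material values for silicon are typical textbook figures (assumed).

def flexural_rigidity(E, t, nu):
    """Bending stiffness per unit width of a thin plate, D = E t^3 / (12 (1 - nu^2))."""
    return E * t**3 / (12 * (1 - nu**2))

E_si, nu_si = 150e9, 0.27                          # Pa, dimensionless
wafer = flexural_rigidity(E_si, 500e-6, nu_si)     # ~0.5 mm bulk wafer
membrane = flexural_rigidity(E_si, 200e-9, nu_si)  # ~200 nm nanomembrane

print(f"wafer/membrane stiffness ratio: {wafer / membrane:.1e}")   # ~1.6e10
```

A drop of roughly ten orders of magnitude in bending stiffness is why the same material, thinned to a nanomembrane, can be bent to a small radius of curvature without fracturing.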
Furthermore, structural design considerations can be used to tune the mechanical stability of the devices. Engineering the original surface structure allows us to soften the stiff electronics. Buckling, island connection, and the Kirigami concept have all been employed successfully to make the entire system stretchy.
Mechanical buckling can be used to create wavy structures on elastomeric thin substrates. This feature improves the device's stretchability. The buckling approach was used to create Si nanoribbons from single crystal Si on an elastomeric substrate. The study demonstrated the device could bear a maximum strain of 10% when compressed and stretched.
In the case of island interconnect, the rigid material connects with flexible bridges made from different geometries, such as zig-zag, serpentine-shaped structures, etc., to reduce the effective stiffness, tune the stretchability of the system, and elastically deform under applied strains in specific directions. It has been demonstrated that serpentine-shaped structures have no significant effect on the electrical characteristics of epidermal electronics. It has also been shown that the entanglement of the interconnects, which oppose the movement of the device above the substrate, causes the spiral interconnects to stretch and deform significantly more than the serpentine structures. CMOS inverters constructed on a polydimethylsiloxane (PDMS) substrate employing 3D island interconnect technologies demonstrated 140% strain at stretching.
Kirigami is built around the concept of folding and cutting in 2D membranes. This contributes to an increase in the tensile strength of the substrate, as well as its out-of-plane deformation and stretchability. These 2D structures can subsequently be turned to 3D structures with varied topography, shape, and size controllability via the Buckling process, resulting in interesting properties and applications.
Energy
Several stretchable energy storage devices and supercapacitors are made using carbon-based materials such as single-walled carbon nanotubes (SWCNTs). A study by Li et al. showed a stretchable supercapacitor (composed of buckled SWCNTs macrofilm and elastomeric separators on an elastic PDMS substrate), that performed dynamic charging and discharging. The key drawback of this stretchable energy storage technology is the low specific capacitance and energy density, although this can potentially be improved by the incorporation of redox materials, for example the SWNT/MnO2 electrode. Another approach to creating a stretchable energy storage device is the use of origami folding principles. The resulting origami battery achieved significant linear and areal deformability, large twistability and bendability.
Medicine
Stretchable electronics could be integrated into smart garments to interact seamlessly with the human body and detect diseases or collect patient data in a non-invasive manner. For example, researchers from Seoul National University and MC10 (a flexible-electronics company) have developed a patch that is able to detect glucose levels in sweat and can deliver the medicine needed on demand (insulin or metformin). The patch consists of graphene riddled with gold particles and contains sensors that are able to detect temperature, pH level, glucose, and humidity.
Stretchable electronics also permit developers to create soft robots, to implement minimally invasive surgeries in hospitals. Especially when it comes to surgeries of the brain and every millimeter is important, such robots may have a more precise scope of action than a human.
Tactile Sensing
Rigid electronics doesn't typically conform well to soft, biological organisms and tissue. Since stretchable electronics is not limited by this, some researchers try to implement it as sensors for touch, or tactile sensing. One way of achieving this is to make an array of conductive OFET (Organic Field Effect Transistors) forming a network that can detect local changes in capacitance, which gives the user information about where the contact occurred. This could have potential use in robotics and virtual reality applications.
See also
Flexible electronics
Soft robotics
Stretch sensor
References
External links
Electronics manufacturing
Electronic engineering | Stretchable electronics | [
"Technology",
"Engineering"
] | 1,499 | [
"Electrical engineering",
"Electronic engineering",
"Electronics manufacturing",
"Computer engineering"
] |
11,897,622 | https://en.wikipedia.org/wiki/Integrated%20design | Integrated design is a comprehensive holistic approach to design which brings together specialisms usually considered separately. It attempts to take into consideration all the factors and modulations necessary to a decision-making process.
A few examples are the following:
Design of a building which considers whole building design including architecture, structural engineering, passive solar building design and HVAC. The approach may also integrate building lifecycle management and a greater consideration of the end users of the building. The aim of integrated building design is often to produce sustainable architecture.
Design of both a product (or family of products) and the assembly system that will produce it.
Design of an electronic product that considers both hardware and software aspects, although this is often called co-design (not to be confused with participatory design, which is also often called co-design).
The requirement for integrated design comes when the different specialisms are dependent on each other or "coupled". An alternative or complementary approach to integrated design is to consciously reduce the dependencies. In computing and systems design, this approach is known as loose coupling.
Dis-integrated design
Three phenomena are associated with a lack of integrated design:
Silent design: design by default, by omission or by people not aware that they are participating in design activity.
Partial design: design is only used to a limited degree, such as in superficial styling, often after the important design decisions have been made.
Disparate design: design activity may be widespread, but is not co-ordinated or brought together to realise its potential. The resulting design may have needless complexity, internal inconsistency, logical flaws and a lack of a unifying vision.
A committee is sometimes a deliberate attempt to address disparate design, but the phrase "design by committee" is associated with this failing, leading to disparate design. "Design by committee" can also lead to a kind of silent design, as design decisions are not properly considered, for fear of upsetting a hard-won compromise.
Methods for integrated design
The integrated design approach incorporates collaborative methods and tools to encourage and enable the specialists in the different areas to work together to produce an integrated design.
A charrette provides opportunity for all specialists to collaborate and align early in the design process.
Human-Centered Design provides an integrated approach to problem solving, commonly used in design and management frameworks that develops solutions to problems by involving the human perspective in all steps of the problem-solving process.
References
See also
Holism
Mode 2
Participatory design
System integration
Systems engineering
Design | Integrated design | [
"Engineering"
] | 517 | [
"Design"
] |
11,897,640 | https://en.wikipedia.org/wiki/A%20Specimen%20of%20the%20Botany%20of%20New%20Holland | A Specimen of the Botany of New Holland, also known by its standard abbreviation Spec. Bot. New Holland, was the first published book on the flora of Australia. Written by James Edward Smith and illustrated by James Sowerby, it was published by Sowerby in four parts between 1793 and 1795. It consists of 16 colour plates of paintings by Sowerby, mostly based on sketches by John White, and around 40 pages of accompanying text. It was presented as the first volume in a series, but no further volumes were released.
Book
The work began as a collaboration between Smith and George Shaw. Together they produced a two-part work entitled Zoology and Botany of New Holland, with each part containing two zoology plates and two botany plates, along with accompanying text. These appeared in 1793, although the publications themselves indicate 1794. The collaboration then ended, and Shaw went on to independently produce his Zoology of New Holland. Smith's contributions to Zoology and Botany of New Holland became the first two parts of A Specimen of the Botany of New Holland, a further two parts of which were issued in 1795.
Australian plants listed
The book contained details of the following Australian plants:
Billardiera scandens
Tetratheca juncea
Ceratopetalum gummiferum
Banksia spinulosa (Hairpin Banksia)
Goodenia ramosissima, now Scaevola ramosissima
Platylobium formosum
Platylobium parviflorum, now Platylobium formosum subsp. parviflorum (not figured)
Embothrium speciosissimum, now Telopea speciosissima (New South Wales Waratah)
Embothrium silaifolium, now Lomatia silaifolia
Embothrium sericeum, now Grevillea sericea
E. s. var. minor, now Grevillea sericea
E. s. var. major, now Grevillea speciosa (Red Spider Flower)
E. s. var. angustifolia, now Grevillea linearifolia
Embothrium buxifolium, now Grevillea buxifolia (Grey Spider Flower)
Pimelea linifolia
Pultenaea stipularis
Eucalyptus robusta
Eucalyptus tereticornis (not figured)
Eucalyptus capitellata (not figured)
Eucalyptus piperita (previously published by Smith in White's 1790 Journal of a Voyage to New South Wales; not figured)
Eucalyptus obliqua (previously published by Charles Louis L'Héritier de Brutelle; not figured)
Eucalyptus corymbosa, now Corymbia gummifera (Unbeknownst to Smith, this had already been published by Joseph Gaertner as Metrosideros gummifera)
Styphelia tubiflora
Styphelia ericoides, now Leucopogon ericoides (not figured)
Styphelia strigosa, now Lissanthe strigosa (not figured)
Styphelia scoparia, now Monotoca scoparia (not figured)
Styphelia daphnoides, now Brachyloma daphnoides (not figured)
Styphelia lanceolata, now Leucopogon lanceolatus (not figured)
Styphelia elliptica, now Monotoca elliptica (not figured)
Mimosa myrtifolia, now Acacia myrtifolia
Mimosa hispidula, now Acacia hispidula
References
1793 books
Books about Australian natural history
Botany in Australia
Florae (publication)
History of Australia (1788–1850)
Taxa named by James Edward Smith
1790s in science | A Specimen of the Botany of New Holland | [
"Biology"
] | 751 | [
"Flora",
"Florae (publication)"
] |
11,898,194 | https://en.wikipedia.org/wiki/Lense%E2%80%93Thirring%20precession | In general relativity, Lense–Thirring precession or the Lense–Thirring effect (; named after Josef Lense and Hans Thirring) is a relativistic correction to the precession of a gyroscope near a large rotating mass such as the Earth. It is a gravitomagnetic frame-dragging effect. It is a prediction of general relativity consisting of secular precessions of the longitude of the ascending node and the argument of pericenter of a test particle freely orbiting a central spinning mass endowed with angular momentum .
The difference between de Sitter precession and the Lense–Thirring effect is that the de Sitter effect is due simply to the presence of a central mass, whereas the Lense–Thirring effect is due to the rotation of the central mass. The total precession is calculated by combining the de Sitter precession with the Lense–Thirring precession.
According to a 2007 historical analysis by Herbert Pfister, the effect should be renamed the Einstein–Thirring–Lense effect.
Lense–Thirring metric
The gravitational field of a spinning spherical body of constant density was studied by Lense and Thirring in 1918, in the weak-field approximation. They obtained the metric
where the symbols represent:
the metric,
the flat-space line element in three dimensions,
the "radial" position of the observer,
the speed of light,
the gravitational constant,
the completely antisymmetric Levi-Civita symbol,
the mass of the rotating body,
the angular momentum of the rotating body,
the energy–momentum tensor.
The above is the weak-field approximation of the full solution of the Einstein equations for a rotating body, known as the Kerr metric, which, due to the difficulty of its solution, was not obtained until 1965.
Coriolis term
The frame-dragging effect can be demonstrated in several ways. One way is to solve for geodesics; these will then exhibit a Coriolis force-like term, except that, in this case (unlike the standard Coriolis force), the force is not fictional, but is due to frame dragging induced by the rotating body. So, for example, an (instantaneously) radially infalling geodesic at the equator will satisfy the equation
where
is the time,
is the azimuthal angle (longitudinal angle),
is the magnitude of the angular momentum of the spinning massive body.
The above can be compared to the standard equation for motion subject to the Coriolis force:
where is the angular velocity of the rotating coordinate system. Note that, in either case, if the observer is not in radial motion, i.e. if , there is no effect on the observer.
Precession
The frame-dragging effect will cause a gyroscope to precess. The rate of precession is given by
where:
is the angular velocity of the precession, a vector, and one of its components,
the angular momentum of the spinning body, as before,
the ordinary flat-metric inner product of the position and the angular momentum.
That is, if the gyroscope's angular momentum relative to the fixed stars is , then it precesses as
The rate of precession is given by
where is the Christoffel symbol for the above metric. Gravitation by Misner, Thorne, and Wheeler provides hints on how to most easily calculate this.
Gravitoelectromagnetic analysis
It is popular in some circles to use the gravitoelectromagnetic approach to the linearized field equations. The reason for this popularity should be immediately evident below, by contrasting it to the difficulties of working with the equations above. The linearized metric can be read off from the Lense–Thirring metric given above, where , and . In this approach, one writes the linearized metric, given in terms of the gravitomagnetic potentials and is
and
where
is the gravito-electric potential, and
is the gravitomagnetic potential. Here is the 3D spatial coordinate of the observer, and is the angular momentum of the rotating body, exactly as defined above. The corresponding fields are
for the gravitoelectric field, and
is the gravitomagnetic field. It is then a matter of substitution and rearranging to obtain
as the gravitomagnetic field. Note that it is half the Lense–Thirring precession frequency. In this context, Lense–Thirring precession can essentially be viewed as a form of Larmor precession. The factor of 1/2 suggests that the correct gravitomagnetic analog of the g-factor is two. This factor of two can be explained completely analogous to the electron's g-factor by taking into account relativistic calculations.
The gravitomagnetic analog of the Lorentz force in the non-relativistic limit is given by
where is the mass of a test particle moving with velocity . This can be used in a straightforward way to compute the classical motion of bodies in the gravitomagnetic field. For example, a radially infalling body will have a velocity ; direct substitution yields the Coriolis term given in a previous section.
Example: Foucault's pendulum
To get a sense of the magnitude of the effect, the above can be used to compute the rate of precession of Foucault's pendulum, located at the surface of the Earth.
For a solid ball of uniform density, such as the Earth, of radius R, the moment of inertia is given by I = (2/5)MR², so that the absolute value of the angular momentum is J = Iω = (2/5)MR²ω, with ω the angular speed of the spinning ball.
The direction of the spin of the Earth may be taken as the z axis, whereas the axis of the pendulum is perpendicular to the Earth's surface, in the radial direction. Thus, we may take , where is the latitude. Similarly, the location of the observer is at the Earth's surface. This leaves the rate of precession as
As an example the latitude of the city of Nijmegen in the Netherlands is used for reference. This latitude gives a value for the Lense–Thirring precession
At this rate a Foucault pendulum would have to oscillate for more than 16000 years to precess 1 degree. Despite being quite small, it is still two orders of magnitude larger than Thomas precession for such a pendulum.
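As a rough order-of-magnitude check of the figures above, the following sketch assumes a uniform-density Earth and omits the latitude-dependent factor of order one, so it reproduces only the scale of the effect; all numerical constants are standard values.

```python
# Order-of-magnitude sketch only: uniform-density Earth, latitude prefactor
# (of order one) omitted, so this gives the scale of the effect rather than
# the exact value for Nijmegen.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M = 5.97e24        # mass of the Earth, kg
R = 6.371e6        # radius of the Earth, m
w = 7.292e-5       # Earth's angular speed, rad/s

J = 0.4 * M * R**2 * w          # J = (2/5) M R^2 omega for a uniform sphere
Omega = G * J / (c**2 * R**3)   # characteristic Lense-Thirring rate, rad/s

arcsec_per_year = Omega * 3.156e7 * math.degrees(1) * 3600
print(f"{arcsec_per_year:.2f} arcsec per year")          # ~0.1 arcsec/yr
print(f"{3600 / arcsec_per_year:.0f} years per degree")  # tens of thousands of years
```

The result, roughly a tenth of an arcsecond per year (tens of thousands of years per degree of precession), is consistent with the statement above once the omitted latitude factor is taken into account.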
The above does not include the de Sitter precession; it would need to be added to get the total relativistic precessions on Earth.
Experimental verification
The Lense–Thirring effect, and the effect of frame dragging in general, continues to be studied experimentally. There are two basic settings for experimental tests: direct observation via satellites and spacecraft orbiting Earth, Mars or Jupiter, and indirect observation by measuring astrophysical phenomena, such as accretion disks surrounding black holes and neutron stars, or astrophysical jets from the same.
The Juno spacecraft's suite of science instruments will primarily characterize and explore the three-dimensional structure of Jupiter's polar magnetosphere, auroras and mass composition.
As Juno is a polar-orbit mission, it will be possible to measure the orbital frame-dragging, known also as Lense–Thirring precession, caused by the angular momentum of Jupiter.
Results from astrophysical settings are presented after the following section.
Astrophysical setting
A star orbiting a spinning supermassive black hole experiences Lense–Thirring precession, causing its orbital line of nodes to precess at a rate
where
a and e are the semimajor axis and eccentricity of the orbit,
M is the mass of the black hole,
χ is the dimensionless spin parameter (0 < χ < 1).
The precessing stars also exert a torque back on the black hole, causing its spin axis to precess, at a rate
where
Lj is the angular momentum of the jth star,
aj and ej are its semimajor axis and eccentricity.
A gaseous accretion disk that is tilted with respect to a spinning black hole will experience Lense–Thirring precession, at a rate given by the above equation, after setting and identifying a with the disk radius. Because the precession rate varies with distance from the black hole, the disk will "wrap up", until viscosity forces the gas into a new plane, aligned with the black hole's spin axis.
Astrophysical tests
The orientation of an astrophysical jet can be used as evidence to deduce the orientation of an accretion disk; a rapidly changing jet orientation suggests a reorientation of the accretion disk, as described above. Exactly such a change was observed in 2019 with the black hole X-ray binary in V404 Cygni.
Pulsars emit rapidly repeating radio pulses with extremely high regularity, which can be measured with microsecond precision over time spans of years and even decades. A 2020 study reports the observation of a pulsar in a tight orbit with a white dwarf, to sub-millisecond precision over two decades. The precise determination allows the change of orbital parameters to be studied; these confirm the operation of the Lense–Thirring effect in this astrophysical setting.
It may be possible to detect the Lense–Thirring effect by long-term measurement of the orbit of the S2 star around the supermassive black hole in the center of the Milky Way, using the GRAVITY instrument of the Very Large Telescope. The star orbits with a period of 16 years, and it should be possible to constrain the angular momentum of the black hole by observing the star over two to three periods (32 to 48 years).
See also
Gravity Probe B
References
External links
(German) explanation of Thirring–Lense effect; has pictures for the satellite example.
Precession
General relativity | Lense–Thirring precession | [
"Physics"
] | 2,079 | [
"Physical quantities",
"General relativity",
"Precession",
"Theory of relativity",
"Wikipedia categories named after physical quantities"
] |
11,898,409 | https://en.wikipedia.org/wiki/Pilot%20%28operating%20system%29 | Pilot is a single-user, multitasking operating system designed by Xerox PARC in early 1977. Pilot was written in the Mesa programming language, totalling about 24,000 lines of code.
Overview
Pilot was designed as a single user system in a highly networked environment of other Pilot systems, with interfaces designed for inter-process communication (IPC) across the network via the Pilot stream interface. Pilot combined virtual memory and file storage into one subsystem, and used the manager/kernel architecture for managing the system and its resources.
Its designers considered a non-preemptive multitasking model, but later chose a preemptive (run until blocked) system based on monitors. Pilot included a debugger, Co-Pilot, that could debug a frozen snapshot of the operating system, written to disk.
A typical Pilot workstation ran three operating systems at once on three different disk volumes: Co-Co-Pilot (a backup debugger in case the main operating system crashed), Co-Pilot (the main operating system, running under Co-Co-Pilot and used to compile and bind programs), and an inferior copy of Pilot running in a third disk volume, which could be booted to run test programs (that might crash the main development environment).
The debugger was written to read and write variables for a program stored on a separate disk volume.
This architecture was unique because it allowed the developer to single-step even operating system code with semaphore locks, stored on an inferior disk volume. However, as the memory and source code of the D-series Xerox processors grew, the time to checkpoint and restore the operating system (known as a "world swap") grew very high. It could take 60-120 seconds to run just one line of code in the inferior operating system environment.
Eventually, a co-resident debugger was developed to take the place of Co-Pilot.
Pilot was used as the operating system for the Xerox Star workstation.
See also
Timeline of operating systems
References
Further reading
History of human–computer interaction
Proprietary operating systems
Window-based operating systems
Pilot
1981 software | Pilot (operating system) | [
"Technology"
] | 441 | [
"History of human–computer interaction",
"Computing stubs",
"History of computing",
"Operating system stubs"
] |
11,898,761 | https://en.wikipedia.org/wiki/Nondeterministic%20programming | A nondeterministic programming language is a language which can specify, at certain points in the program (called "choice points"), various alternatives for program flow. Unlike an if-then statement, the method of choice between these alternatives is not directly specified by the programmer; the program must decide at run time between the alternatives, via some general method applied to all choice points. A programmer specifies a limited number of alternatives, but the program must later choose between them. ("Choose" is, in fact, a typical name for the nondeterministic operator.) A hierarchy of choice points may be formed, with higher-level choices leading to branches that contain lower-level choices within them.
One method of choice is embodied in backtracking systems (such as Amb, or unification in Prolog), in which some alternatives may "fail," causing the program to backtrack and try other alternatives. If all alternatives fail at a particular choice point, then an entire branch fails, and the program will backtrack further, to an older choice point. One complication is that, because any choice is tentative and may be remade, the system must be able to restore old program states by undoing side-effects caused by partially executing a branch that eventually failed.
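As a minimal sketch of the backtracking approach (the names solve, Fail and the Pythagorean-triple example are illustrative, and are not taken from any particular system such as Prolog or Alisp), each choice point is a sequence of alternatives; when a constraint fails, control unwinds to the most recent choice point that still has untried alternatives:

```python
# Minimal backtracking "choose" sketch: try alternatives in order, and on
# failure fall back to the most recent choice point with options left.

class Fail(Exception):
    """Raised when a branch of the search fails."""

def solve(choice_points, constraint):
    """Search all combinations of alternatives, backtracking on failure."""
    def recurse(chosen, remaining):
        if not remaining:
            if constraint(*chosen):
                return chosen                 # success
            raise Fail()                      # leaf fails -> backtrack
        first, *rest = remaining
        for alternative in first:             # a choice point
            try:
                return recurse(chosen + [alternative], rest)
            except Fail:
                continue                      # try the next alternative
        raise Fail()                          # all alternatives failed

    try:
        return recurse([], list(choice_points))
    except Fail:
        return None

# Find a small Pythagorean triple nondeterministically.
print(solve([range(1, 21)] * 3,
            lambda a, b, c: a < b < c and a * a + b * b == c * c))   # [3, 4, 5]
```

A real system must also undo any side effects performed along a failed branch; the sketch sidesteps that issue by keeping all state in the chosen list.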
Another method of choice is reinforcement learning, embodied in systems such as Alisp. In such systems, rather than backtracking, the system keeps track of some measure of success and learns which choices often lead to success, and in which situations (both internal program state and environmental input may affect the choice). These systems are suitable for applications to robotics and other domains in which backtracking would involve attempting to undo actions performed in a dynamic environment, which may be difficult or impractical.
See also
Nondeterminism (disambiguation)
Category: Nondeterministic programming languages
angelic non-determinism
demonic non-determinism
References
Computer programming
Programming paradigms
Determinism | Nondeterministic programming | [
"Technology",
"Engineering"
] | 408 | [
"Software engineering",
"Computer programming",
"Computers"
] |
11,899,236 | https://en.wikipedia.org/wiki/Diallel%20cross | A diallel cross is a mating scheme used by plant and animal breeders, as well as geneticists, to investigate the genetic underpinnings of quantitative traits.
In a full diallel, all parents are crossed to make hybrids in all possible combinations. Variations include half diallels with and without parents, omitting reciprocal crosses. Full diallels require twice as many crosses and entries in experiments, but allow for testing for maternal and paternal effects. If such "reciprocal" effects are assumed to be negligible, then a half diallel without reciprocals can be effective.
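The relative sizes of the common designs follow directly from counting ordered versus unordered pairs of parents. The short sketch below is only an illustration (the function name and labels are arbitrary); its four counts correspond to the four mating designs listed in the next section.

```python
# Number of crosses required for n parents under the four common diallel
# designs (selfs are counted where parents are included).

def diallel_cross_counts(n):
    return {
        "full, parents and reciprocals": n * n,             # all ordered pairs + selfs
        "full, reciprocals, no parents": n * (n - 1),       # ordered pairs, no selfs
        "half, parents, no reciprocals": n * (n + 1) // 2,  # unordered pairs + selfs
        "half, no parents, no reciprocals": n * (n - 1) // 2,
    }

print(diallel_cross_counts(10))
```

For ten parents, a full diallel without parents needs 90 crosses while the corresponding half diallel needs only 45, which is the factor of two mentioned above.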
Common analysis methods utilize general linear models to identify heterotic groups, estimate general or specific combining ability, interactions with testing environments and years, or estimates of additive, dominant, and epistatic genetic effects and genetic correlations.
Mating designs
There are four main types of diallel mating design:
Full diallel with parents and reciprocal F1 crosses
Full diallel as above, but excluding parents
Half diallel with parents, but without reciprocal crosses
Half diallel without parents or reciprocal crosses
References
Genetics
Breeding | Diallel cross | [
"Biology"
] | 226 | [
"Behavior",
"Genetics",
"Breeding",
"Reproduction"
] |
11,899,262 | https://en.wikipedia.org/wiki/Sound%20%28nautical%29 | In nautical terms, the word sound is used to describe the process of determining the depth of water in a tank or under a ship. Tanks are sounded to determine if they are full (for cargo tanks) or empty (to determine if a ship has been holed) and for other reasons. Soundings may also be taken of the water around a ship if it is in shallow water to aid in navigation.
Methods
Tanks may be sounded manually or with electronic or mechanical automated equipment. Manual sounding is undertaken with a sounding line, a rope with a weight on the end. Per the Code of Federal Regulations, most steel vessels with integral tanks are required to have sounding tubes, with reinforcing plates under the tubes that the weight strikes when it reaches the bottom of the tank. Sounding tubes are steel pipes which lead upwards from the ship's tanks to a place on deck.
Electronic and mechanical automated sounding may be undertaken with a variety of equipment including float level sensors, capacitance sensors, sonar, etc.
See also
Depth sounding
Sources
Code of Federal Regulations, Title 46
Navigational aids
Oceanography | Sound (nautical) | [
"Physics",
"Environmental_science"
] | 220 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
11,899,273 | https://en.wikipedia.org/wiki/Visual%20cycle | The visual cycle is a process in the retina that replenishes the molecule retinal for its use in vision. Retinal is the chromophore of most visual opsins, meaning it captures the photons to begin the phototransduction cascade. When the photon is absorbed, the 11-cis retinal photoisomerizes into all-trans retinal as it is ejected from the opsin protein. Each molecule of retinal must travel from the photoreceptor cell to the RPE and back in order to be refreshed and combined with another opsin. This closed enzymatic pathway of 11-cis retinal is sometimes called Wald's visual cycle after George Wald (1906–1997), who received the Nobel Prize in 1967 for his work towards its discovery.
Retinal
Retinal is a chromophore that forms photosensitive Retinylidene proteins when covalently bound to proteins called opsins. Retinal can be photoisomerized by itself, but it must be bound to an opsin protein both to trigger the phototransduction cascade and to tune the spectral sensitivity to longer wavelengths, which enables color vision.
Retinal is a species of retinoid and the aldehyde form of Vitamin A. Retinal is interconvertible with retinol, the transport and storage form of vitamin A. During the visual cycle, retinal moves between several different isomers and is also converted to retinol and retinyl ester. Retinoids can be derived from the oxidation of carotenoids like beta carotene or can be consumed directly. To reach the retina, it is bound to Retinol Binding Protein (RBP) and Transthyretin, which prevents its filtration in the glomeruli.
As in transport via the RBP-Transthyretin pathway, retinoids must always be bound to Chaperone molecules, for several reasons. Retinoids are toxic, insoluble in aqueous solutions, and prone to oxidation, and as such they must be bound and protected when within the body. The body uses a variety of chaperones, particularly in the retina, to transport retinoids.
Overview
The visual cycle is consistent within mammals, and is summarized as follows:
all-trans-retinyl ester + H2O → 11-cis-retinol + fatty acid; RPE65 isomerohydrolases;
11-cis-retinol + NAD+ → 11-cis-retinal + NADH + H+; 11-cis-retinol dehydrogenases;
11-cis-retinal + aporhodopsin → rhodopsin + H2O; forms Schiff base linkage to lysine, -CH=N+H-;
rhodopsin + hν → metarhodopsin II (i.e., 11-cis photoisomerizes to all-trans):
(rhodopsin + hν → photorhodopsin → bathorhodopsin → lumirhodopsin → metarhodopsin I → metarhodopsin II);
metarhodopsin II + H2O → aporhodopsin + all-trans-retinal;
all-trans-retinal + NADPH + H+ → all-trans-retinol + NADP+; all-trans-retinol dehydrogenases;
all-trans-retinol + fatty acid → all-trans-retinyl ester + H2O; lecithin retinol acyltransferases (LRATs).
Steps 3, 4, 5, and 6 occur in rod cell outer segments; Steps 1, 2, and 7 occur in retinal pigment epithelium (RPE) cells.
Description
When a photon is absorbed, 11-cis-retinal is transformed to all-trans-retinal, and it moves to the exit site of rhodopsin. It will not leave the opsin protein until another fresh chromophore comes to replace it, except for in the ABCR pathway. Whilst still bound to the opsin, all-trans-retinal is transformed into all-trans-retinol by all-trans-Retinol Dehydrogenase. It then proceeds to the cell membrane of the rod, where it is chaperoned to the Retinal Pigment Epithelium (RPE) by Interphotoreceptor retinoid-binding protein (IRBP). It then enters the RPE cells, and is transferred to the Cellular Retinol Binding Protein (CRBP) chaperone.
When inside the RPE cell, bound to CRBP, the all-trans-retinol is esterified by Lecithin Retinol Acyltransferase (LRAT) to form a retinyl ester. The retinyl esters of the RPE are chaperoned by a protein known as RPE65. It is in this form that the RPE stores most of its retinoids, as the RPE stores 2-3 times more retinoids than the neural retina itself. When further chromophore is required, the retinyl esters are acted on by isomerohydrolase to produce 11-cis-retinol, which is transferred to the Cellular retinaldehyde binding protein (CRALBP). 11-cis-Retinol is transformed into 11-cis retinal by 11-cis-retinol dehydrogenase, then it is shipped back to the photoreceptor cells via IRBP. There, it replaces the spent chromophore in opsin molecules, rendering the opsin photosensitive.
ABCR pathway
Under normal circumstances, the spent chromophore is discharged from the protein by an incoming "recharged" chromophore. However, sometimes the spent chromophore may leave the opsin protein prior to its replacement, when it is bound to the ABCA4 protein (also known as ABCR). At this stage, it is also transformed to all-trans-retinol, and then leaves the photoreceptor outer segment via the IRBP chaperone. It then follows the conventional visual cycle. It is from this pathway that the presence of opsin without a chromophore can be explained.
RGR regulation
The visual cycle can be regulated by the retinal G-protein-coupled Receptor (RGR-opsin) system. When light activates the RGR-opsin, the recycling of chromophore in the RPE is accelerated. This mechanism provides additional chromophore after intense bleaches, and can be seen as an important mechanism in the early phases of dark adaptation and chromophore replenishment.
Alternative cycles
Cone-specific visual cycle
It is believed that an alternative visual cycle exists, which uses Müller glial cells instead of Retinal Pigment Epithelium. In this pathway, cones reduce all-trans retinal to all-trans retinol via all-trans Retinol Dehydrogenase, then transport all-trans retinol to Müller cells. There, it is transformed into 11-cis retinol by all-trans retinol isomerase, and can either be stored as retinyl esters within Müller cells, or transported back to the cone photoreceptors, where it is transformed from 11-cis retinol to 11-cis retinal by 11-cis Retinal Dehydrogenase. This pathway helps explain the rapid dark adaptation in the cone system, and the presence of 11-cis Retinal Dehydrogenase in cone photoreceptors, as it is not found in rods, only in the RPE.
Melanopsin visual cycle
Melanopsin is a visual opsin present in intrinsically photosensitive retinal ganglion cells (ipRGCs), also with a retinal chromophore. However, unlike the rod and cone pigments, melanopsin has the ability to act both as the excitable photopigment and as a photoisomerase. Melanopsin is therefore able to isomerize all-trans-retinal into 11-cis-retinal itself when stimulated with another photon. An ipRGC therefore does not rely on Müller cells and/or retinal pigment epithelium cells for this conversion.
Leber's congenital amaurosis
A possible mechanism for Leber's congenital amaurosis has been proposed as the deficiency of RPE65. Without the RPE65 protein, the RPE is unable to store retinyl esters, and the visual cycle is therefore interrupted. At the beginning stages of the disease, the cone cells are unaffected, as they can rely on the alternate Muller cell visual cycle. However, rods do not have access to this alternative and are rendered inert. LCA therefore manifests as nyctalopia (night blindness). In the later stages of the disease, general retinopathy is observed as the rod cells lose their ability to signal. As a result, the rods continually secrete glutamate, a neurotransmitter, at a rate the Muller cells are unable to absorb. The glutamate levels will build up within the retina, where they will reach neurotoxic levels. The RPE65 deficiency would be genetic in origin, and is only one of many proposed possible pathophysiologies of the disease. However, there is a retinal gene therapy to reintroduce normal RPE65 genes that has been approved by the FDA since 2017.
See also
Visual phototransduction
Visual system
References
Visual system
Nervous system
Sensory receptors
Metabolism | Visual cycle | [
"Chemistry",
"Biology"
] | 2,101 | [
"Nervous system",
"Organ systems",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
11,899,642 | https://en.wikipedia.org/wiki/Scientists%20for%20Global%20Responsibility | Scientists for Global Responsibility (SGR) in the United Kingdom promotes the ethical practice and use of science, design and technology. SGR is affiliated to the International Network of Engineers and Scientists for Global Responsibility (INES). It is an independent UK-based membership organisation of hundreds of natural scientists, social scientists, engineers, IT professionals and architects. In 2017 its partner organization ICAN (International Campaign to Abolish Nuclear Weapons) won the Nobel Peace Prize. ICAN have promoted a Kurzgesagt YouTube video endorsed by the International Committee of the Red Cross and Crescent (ICRC) showing the consequences of a single atomic weapon exploded over a city.
SGR's work is focused on four main issues: security and disarmament; climate change and energy, including nuclear power; who controls science and technology?; emerging technologies. The main areas of concern are arms and arms control, including military involvement in UK universities; effect of excessive greenhouse gas emissions on climate; the nature of war and reducing barbarity; topsoil and water shortages resulting from modern agricultural methods; depletion of species of fish due to over-fishing; continual spread of nuclear weapons, and reduction of occurrence of serious nuclear accidents.
In 2019 SGR launched the journal Responsible Science. SGR evaluates the risks of new science and of new technological solutions to older science-based problems and threats, while recognizing the enormous contribution science, design and technology have made to civilisation and human well-being.
SGR promotes science, design and technology that contribute to peace, social justice and environmental sustainability.
See also
Campaign for Nuclear Disarmament
Scientists against Nuclear Arms, a forerunner of SGR
References
External links
SGR website
Organizations with year of establishment missing
Non-profit organisations based in the United Kingdom
Ethics organizations
Anti-nuclear organizations
Anti–nuclear weapons movement
Science and technology in the United Kingdom
Ethics of science and technology | Scientists for Global Responsibility | [
"Physics",
"Technology",
"Engineering"
] | 383 | [
"Nuclear organizations",
"Nuclear physics",
"Anti-nuclear organizations",
"Nuclear and atomic physics stubs",
"Ethics of science and technology"
] |
11,901,467 | https://en.wikipedia.org/wiki/Colored%20people%27s%20time | Colored People's Time (also abbreviated to CP Time or CPT) is an American expression referring to African Americans as frequently being late. It claims that African Americans can have a relaxed or indifferent view of punctuality, which leads to them being labeled as lazy or unreliable.
According to NPR's podcast Code Switch, the phrase has variations in many other languages and cultures, is often used as a light-hearted comment or joke regarding being late, and may have first been used in 1914 by The Chicago Defender newspaper.
There are differences between monochronic societies and polychronic societies (e.g., some of those found in Sub-Saharan Africa).
In popular culture
The expression has been referenced numerous times in various types of media, including the films Friday Foster, The Best Man, Bamboozled, Undercover Brother, Let's Do It Again, House Party, BlacKkKlansman, and several television series: The Mindy Project, Prison Break, The Boondocks, The Wire, Weeds, Where My Dogs At?, Reno 911!, 30 Rock, Everybody Hates Chris, A Different World, The PJs, Bridezillas, Mad TV, Cedric the Entertainer Presents, In Living Color, Empire, F is for Family, and reality series The Real Housewives of Atlanta.
Colored People's Time was used as the name of a 1960s public interest program produced by Detroit Public Television. It was also used in the title of the 1983 play, "Colored People's Time: A History Play," written by Leslie Lee, which consisted of 13 fictional vignettes of African American history, from the Civil War through Civil Rights and the Montgomery bus riots. CP Time was also a 2007 book by J. L. King.
In his 1982 book Let the Trumpet Sound: The Life of Martin Luther King, Jr., author Stephen B. Oates notes that Martin Luther King Jr. and his staff operated by what they jocularly called "CPT"—Colored People's Time—"and kept appointments with cheerful disregard for punctuality". King once apologized for being late for a banquet, saying he forgot what time he was on—EST, CST, or Colored People's Time, adding that "It always takes us longer to get where we're going."
On April 9, 2016, in a staged joke skit at that year's annual Inner Circle dinner, Mayor of New York City Bill de Blasio said he'd been operating on "C.P. time" for his delay in endorsing Hillary Clinton as the Democratic Party nominee for president. The actor Leslie Odom Jr., then starring in the Broadway show Hamilton, then replied "I don't like jokes like that, Bill," after which Clinton delivered the punch line that CPT stood for "cautious politician time." This skit was widely criticized, with The Root calling it "cringeworthy" while the conservative outlet TownHall pointed to a double standard that, "It's only racist if Republicans do it." In response, President Barack Obama, during the 2016 White House Correspondents' Dinner on April 30, jokingly apologized for being late because of "running on C.P.T." adding that this stands for "jokes white people should not make".
In February 2018, Roy Wood Jr. presented a segment on The Daily Show called "CP Time" to celebrate Black History Month by "honoring the unsung heroes of black history". It has since become a recurring segment on the show.
See also
Time management
Tardiness § Ethnic stereotypes, describing several other similar expressions
African time
References
Stereotypes of African Americans
Time management | Colored people's time | [
"Physics"
] | 756 | [
"Spacetime",
"Physical quantities",
"Time",
"Time management"
] |
11,901,730 | https://en.wikipedia.org/wiki/Gi%20alpha%20subunit | {{DISPLAYTITLE:Gi alpha subunit}}
Gi protein alpha subunit is a family of heterotrimeric G protein alpha subunits. This family is also commonly called the Gi/o (Gi/Go) family, or the Gi/o/z/t family to include closely related family members. Gi alpha subunits may be referred to as Gi alpha, Gαi, or Giα.
Family members
There are four distinct subtypes of alpha subunits in the Gi/o/z/t alpha subunit family that define four families of heterotrimeric G proteins:
Gi proteins: Gi1α, Gi2α, and Gi3α
Go protein: Goα (in mouse there is alternative splicing to generate Go1α and Go2α)
Gz protein: Gzα
Transducins (Gt proteins): Gt1α, Gt2α, Gt3α
Giα proteins
Gi1α
Gi1α is encoded by the gene GNAI1.
Gi2α
Gi2α is encoded by the gene GNAI2.
Gi3α
Gi3α is encoded by the gene GNAI3.
Goα protein
Go1α is encoded by the gene GNAO1.
Gzα protein
Gzα is encoded by the gene GNAZ.
Transducin proteins
Gt1α
Transducin/Gt1α is encoded by the gene GNAT1.
Gt2α
Transducin 2/Gt2α is encoded by the gene GNAT2.
Gt3α
Gustducin/Gt3α is encoded by the gene GNAT3.
Function
The general function of Gi/o/z/t is to activate intracellular signaling pathways in response to activation of cell surface G protein-coupled receptors (GPCRs). GPCRs function as part of a three-component system of receptor-transducer-effector. The transducer in this system is a heterotrimeric G protein, composed of three subunits: a Gα protein such as Giα, and a complex of two tightly linked proteins called Gβ and Gγ in a Gβγ complex. When not stimulated by a receptor, Gα is bound to GDP and to Gβγ to form the inactive G protein trimer. When the receptor binds an activating ligand outside the cell (such as a hormone or neurotransmitter), the activated receptor acts as a guanine nucleotide exchange factor to promote GDP release from and GTP binding to Gα, which drives dissociation of GTP-bound Gα from Gβγ. GTP-bound Gα and Gβγ are then freed to activate their respective downstream signaling enzymes.
Gi proteins primarily inhibit the cAMP dependent pathway by inhibiting adenylyl cyclase activity, decreasing the production of cAMP from ATP, which, in turn, results in decreased activity of cAMP-dependent protein kinase. Therefore, the ultimate effect of Gi is the inhibition of the cAMP-dependent protein kinase. The Gβγ liberated by activation of Gi and Go proteins is particularly able to activate downstream signaling to effectors such as G protein-coupled inwardly-rectifying potassium channels (GIRKs). Gi and Go proteins are substrates for pertussis toxin, produced by Bordetella pertussis, the infectious agent in whooping cough. Pertussis toxin is an ADP-ribosylase enzyme that adds an ADP-ribose moiety to a particular cysteine residue in Giα and Goα proteins, preventing their coupling to and activation by GPCRs, thus turning off Gi and Go cell signaling pathways.
Gz proteins also can link GPCRs to inhibition of adenylyl cyclase, but Gz is distinct from Gi/Go by being insensitive to inhibition by pertussis toxin.
Gt proteins function in sensory transduction. The Transducins Gt1 and Gt2 serve to transduce signals from G protein-coupled receptors that receive light during vision. Rhodopsin in dim light night vision in retinal rod cells couples to Gt1, and color photopsins in color vision in retinal cone cells couple to Gt2, respectively. Gt3/Gustducin subunits transduce signals in the sense of taste (gustation) in taste buds by coupling to G protein-coupled receptors activated by sweet or bitter substances.
Receptors
The following G protein-coupled receptors couple to Gi/o subunits:
5-HT1 and 5-HT5 serotonergic receptors
Acetylcholine M2 & M4 receptors
Adenosine A1 & A3 receptors
Adrenergic α2A, α2B, & α2C receptors
Apelin receptors
Calcium-sensing receptor
Cannabinoid receptors (CB1 and CB2)
Chemokine CXCR4 receptor
Dopamine D2, D3 and D4 receptors
GABAB receptor
Glutamate mGluR2, mGluR3, mGluR4, mGluR6, mGluR7, & mGluR8 receptors
Histamine H3 & H4 receptors
Melatonin MT1, MT2, & MT3 receptors
Hydroxycarboxylic acid receptors: HCA1, HCA2, & HCA3
Opioid δ, κ, μ, & nociceptin receptors
Prostaglandin EP1, EP3, FP, & TP receptors
Short chain fatty acid receptors: FFAR2 & FFAR3
Somatostatin sst1, sst2, sst3, sst4 & sst5 receptors
Trace amine-associated receptor 8
See also
Second messenger system
G protein-coupled receptor
Heterotrimeric G protein
Adenylyl cyclase
Protein kinase A
Gs alpha subunit
Gq alpha subunit
G12/G13 alpha subunits
Retina
Taste
References
External links
Peripheral membrane proteins | Gi alpha subunit | [
"Chemistry"
] | 1,219 | [
"G proteins",
"Signal transduction"
] |
11,901,885 | https://en.wikipedia.org/wiki/No%20instruction%20set%20computing | No instruction set computing (NISC) is a computing architecture and compiler technology for designing highly efficient custom processors and hardware accelerators by allowing a compiler to have low-level control of hardware resources.
Overview
NISC is a statically scheduled horizontal nanocoded architecture (SSHNA). The term "statically scheduled" means that operation scheduling and hazard handling are done by a compiler. The term "horizontal nanocoded" means that NISC does not have any predefined instruction set or microcode. The compiler generates nanocodes which directly control functional units, registers and multiplexers of a given datapath. Giving low-level control to the compiler enables better utilization of datapath resources, which ultimately results in better performance. The benefits of NISC technology are:
Simpler controller: no hardware scheduler, no instruction decoder
Better performance: more flexible architecture, better resource utilization
Easier to design: no need for designing instruction-sets
The instruction set and controller of a processor are the most tedious and time-consuming parts to design. By eliminating these two, the design of custom processing elements becomes significantly easier.
Furthermore, the datapath of NISC processors can even be generated automatically for a given application. Therefore, the designer's productivity is improved significantly.
Since NISC datapaths are very efficient and can be generated automatically, NISC technology is comparable to high level synthesis (HLS) or C to HDL synthesis approaches. In fact, one of the benefits of this architecture style is its capability to bridge these two technologies (custom processor design and HLS).
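As a rough illustration of the idea (not actual NISC tooling), the following Python sketch models a toy datapath whose register-file ports, immediate multiplexer and ALU are driven directly by compiler-generated control words, with no instruction decoder in between. The control-word fields and the two-entry nanocode "program" are invented for this example.

<syntaxhighlight lang="python">
# Toy model of a NISC-style datapath: the "program" is a list of control
# words that directly select register-file ports, an ALU operation and a
# write-back destination -- there is no instruction decoder.

ALU_OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def run_nanocode(program, regs):
    """Apply each control word to the datapath state (the register file)."""
    for cw in program:
        a = regs[cw["src_a"]]                                # read port A
        b = cw["imm"] if "imm" in cw else regs[cw["src_b"]]  # read port B or immediate mux
        result = ALU_OPS[cw["alu_op"]](a, b)                 # functional unit chosen by the compiler
        if cw.get("write_back", True):
            regs[cw["dest"]] = result                        # write-back port
    return regs

# Hypothetical nanocode a compiler might emit for: r2 = (r0 + r1) * 3
program = [
    {"alu_op": "add", "src_a": 0, "src_b": 1, "dest": 2},
    {"alu_op": "mul", "src_a": 2, "imm": 3, "dest": 2},
]
print(run_nanocode(program, [4, 5, 0, 0]))  # -> [4, 5, 27, 0]
</syntaxhighlight>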
Zero instruction set computer
In computer science, zero instruction set computer (ZISC) refers to a computer architecture based solely on pattern matching and absence of (micro-)instructions in the classical sense. These chips are known for being thought of as comparable to the neural networks, being marketed for the number of "synapses" and "neurons". The acronym ZISC alludes to reduced instruction set computer (RISC).
ZISC is a hardware implementation of Kohonen networks (artificial neural networks) allowing massively parallel processing of very simple data (0 or 1). This hardware implementation was invented by Guy Paillet and Pascal Tannhof (IBM), developed in cooperation with the IBM chip factory of Essonnes, in France, and was commercialized by IBM.
The ZISC architecture alleviates the memory bottleneck by blending pattern memory with pattern learning and recognition logic. Its massively parallel design addresses this bottleneck by allotting each "neuron" its own memory and letting all neurons work on a problem simultaneously, with the competing results then reconciled against each other.
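The behaviour just described can be modelled in software: each "neuron" holds its own reference pattern in private memory, all neurons compare an input against that memory in parallel, and the smallest distance wins. The Python sketch below is purely conceptual; the vector length, Manhattan distance metric and category labels are arbitrary choices rather than actual ZISC-036 parameters.

<syntaxhighlight lang="python">
# Conceptual model of ZISC-style recognition: every neuron stores one
# prototype pattern plus a category, and all neurons compare the input
# against their own memory "simultaneously" (here, a comprehension).

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

class Neuron:
    def __init__(self, prototype, category):
        self.prototype = prototype   # the neuron's private pattern memory
        self.category = category

    def distance(self, pattern):
        return manhattan(self.prototype, pattern)

def recognize(neurons, pattern):
    # Each neuron reports its distance; the "dispute" is settled by
    # keeping the neuron with the smallest distance (the winner).
    distances = [(n.distance(pattern), n.category) for n in neurons]
    return min(distances)

neurons = [
    Neuron([0, 0, 1, 1], "A"),
    Neuron([1, 1, 0, 0], "B"),
    Neuron([1, 0, 1, 0], "C"),
]
print(recognize(neurons, [0, 0, 1, 1]))  # -> (0, 'A'): category A wins
</syntaxhighlight>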
Applications and controversy
According to TechCrunch, software emulations of these types of chips are currently used for image recognition by many large tech companies, such as Facebook and Google. When applied to other miscellaneous pattern detection tasks, such as with text, results are said to be produced in microseconds even with chips released in 2007.
Junko Yoshida, of the EE Times, compared the NeuroMem chip with "The Machine", a machine capable of being able to predict crimes from scanning people's faces from the television series Person of Interest, describing it as "the heart of big data" and "foreshadow[ing] a real-life escalation in the era of massive data collection".
History
In the past, microprocessor design technology evolved from complex instruction set computer (CISC) to reduced instruction set computer (RISC). In the early days of the computer industry, compiler technology did not exist and programming was done in assembly language. To make programming easier, computer architects created complex instructions which were direct representations of high level functions of high level programming languages. Another force that encouraged instruction complexity was the lack of large memory blocks.
As compiler and memory technologies advanced, RISC architectures were introduced. RISC architectures need more instruction memory and require a compiler to translate high-level languages to RISC assembly code. Further advancement of compiler and memory technologies leads to emerging very long instruction word (VLIW) processors, where the compiler controls the schedule of instructions and handles data hazards.
NISC is a successor of VLIW processors. In NISC, the compiler has both horizontal and vertical control of the operations in the datapath. Therefore, the hardware is much simpler. However the control memory size is larger than the previous generations. To address this issue, low-overhead compression techniques can be used.
See also
C to HDL
Content-addressable memory
Reduced instruction set computer
Complex instruction set computer
Explicitly parallel instruction computing
Minimal instruction set computer
Very long instruction word
One-instruction set computer
TrueNorth
References
Further reading
Chapter 2.
External links
US Patent for ZISC hardware, issued to IBM/G.Paillet on April 15, 1997
Image Processing Using RBF like Neural Networks: A ZISC-036 Based Fully Parallel Implementation Solving Real World and Real Complexity Industrial Problems by K. Madani, G. de Trémiolles, and P. Tannhof
From CISC to RISC to ZISC by S. Liebman on lsmarketing.com
Neural Networks on Silicon at aboutAI.net
French Patent Request NISC for purely applicative engine - the sole operation of application (no lambda-calculus that is a particular case of quasi-applicative systems with two operations : application and abstraction - Curry 1958 p. 31)
Electronic design
Central processing unit
Instruction processing | No instruction set computing | [
"Engineering"
] | 1,126 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
11,902,016 | https://en.wikipedia.org/wiki/Stem%20Cell%20Research%20Enhancement%20Act | Stem Cell Research Enhancement Act was the name of two similar bills that both passed through the United States House of Representatives and Senate, but were both vetoed by President George W. Bush and were not enacted into law.
Stem Cell Research Enhancement Act of 2005
The Stem Cell Research Enhancement Act of 2005 () was the first bill ever vetoed by United States President George W. Bush, more than five years after his inauguration. The bill, which passed both houses of Congress, but by less than the two-thirds majority needed to override the veto, would have allowed federal funding of stem cell research on new lines of stem cells derived from discarded human embryos created for fertility treatments.
The bill passed the House of Representatives by a vote of 238 to 194 on May 24, 2005, then passed the Senate by a vote of 63 to 37 on July 18, 2006. President Bush vetoed the bill on July 19, 2006. The House of Representatives then failed to override the veto (235 to 193) on July 19, 2006.
Stem Cell Research Enhancement Act of 2007
The Stem Cell Research Enhancement Act of 2007 (), was proposed federal legislation that would have amended the Public Health Service Act to provide for human embryonic stem cell research. It was similar in content to the vetoed Stem Cell Research Enhancement Act of 2005.
The bill passed the Senate on April 11, 2007, by a vote of 63–34, then passed the House on June 7, 2007, by a vote of 247–176. President Bush vetoed the bill on June 19, 2007, and an override was not attempted.
Stem Cell Research Enhancement Act of 2009
The bill was re-introduced in the 111th Congress. It was introduced in the House by Representative Diana DeGette (D-CO) on February 4, 2009. A Senate version was introduced by Tom Harkin (D-IA) on February 26, 2009. The House bill had 113 co-sponsors and the Senate 10 co-sponsors, as of November 20, 2009.
Legislative history
References
External links
How your senator voted, "U.S. Senate Roll Call Votes," from www.senate.gov, recorded on July 18, 2006, accessed on October 31, 2006.
How your congressman voted, "FINAL VOTE RESULTS FOR ROLL CALL 388," from clerk.house.gov, recorded on July 19, 2006, accessed on October 31, 2006.
Text of the 2007 Bill
S. 5: Stem Cell Research Enhancement Act of 2007 at GovTrack.us
World Stem Cell Policies
Stem cell research pros and cons, Information and resource for stem cell research
Proposed legislation of the 109th United States Congress
Proposed legislation of the 110th United States Congress
Proposed legislation of the 111th United States Congress
Stem cell research
Medical law | Stem Cell Research Enhancement Act | [
"Chemistry",
"Biology"
] | 574 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
11,902,309 | https://en.wikipedia.org/wiki/Open%20Verification%20Library | Open Verification Library (OVL) is a library of property checkers for digital circuit descriptions written in popular Hardware Description Languages (HDLs). OVL is currently maintained by Accellera.
Applications
OVL works by placing modules or components that check specific properties of the circuit alongside the regular design modules or components. Those special modules are called checkers and are tied to circuit signals via ports. Some aspects of checker functionality can be modified by adjusting checker parameters. Typical properties verified by OVL checkers include:
condition that should be always met,
sequence of conditions that should be met,
condition that should never occur,
proper data value (even, odd, within a range, etc.),
proper value change (e.g. increment or decrement within specified range),
proper data encoding (e.g. one hot or one cold),
proper timing of event (within given number of clock cycles or within window created by trigger events),
valid protocol of data transmission,
valid behavior of popular building blocks (e.g. FIFOs).
Depending on the selected parameters, OVL checkers can work as assertion, assumption or coverage point checkers.
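As a rough software analogy (real OVL checkers are HDL modules, not Python), the sketch below models two monitors in the spirit of the properties listed above: an "always true" condition checker and a one-hot encoding checker, each sampled once per simulated clock cycle. All class, signal and property names here are invented for illustration.

<syntaxhighlight lang="python">
# Software analogy of OVL-style checkers: small monitors attached to signals
# and evaluated on every clock cycle, reporting assertion failures.

class AlwaysChecker:
    """Fails whenever the sampled condition is false (cf. an 'always' property)."""
    def __init__(self, name, condition):
        self.name, self.condition = name, condition

    def check(self, cycle, signals):
        if not self.condition(signals):
            print(f"cycle {cycle}: {self.name} VIOLATED")

class OneHotChecker:
    """Fails unless exactly one bit of the watched vector is set."""
    def __init__(self, name, signal):
        self.name, self.signal = name, signal

    def check(self, cycle, signals):
        if bin(signals[self.signal]).count("1") != 1:
            print(f"cycle {cycle}: {self.name} VIOLATED")

checkers = [
    AlwaysChecker("req_implies_not_full", lambda s: not (s["req"] and s["fifo_full"])),
    OneHotChecker("state_is_one_hot", "state"),
]

trace = [  # one dict of sampled signal values per clock cycle
    {"req": 1, "fifo_full": 0, "state": 0b0010},
    {"req": 1, "fifo_full": 1, "state": 0b0010},   # violates the 'always' property
    {"req": 0, "fifo_full": 1, "state": 0b0110},   # violates the one-hot encoding
]

for cycle, signals in enumerate(trace):
    for checker in checkers:
        checker.check(cycle, signals)
</syntaxhighlight>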
The main source of OVL's popularity is that it allows high-level verification concepts to be introduced into existing or new designs without requiring a new language; for example, a designer with access to Verilog tools can start using property checking with OVL immediately.
Supported Languages
While first versions of OVL supported Verilog and VHDL, most recent versions support (in alphabetical order):
PSL - Verilog flavour
SystemVerilog
Verilog
VHDL
Depending on the demand, support for two more languages may be added: PSL - VHDL flavour and SystemC.
External links
OVL section of the Accellera page
Hardware description languages | Open Verification Library | [
"Engineering"
] | 390 | [
"Electronic engineering",
"Hardware description languages"
] |
11,903,542 | https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20R%C3%A9dei | László Rédei (15 November 1900 – 21 November 1980) was a Hungarian mathematician.
Rédei graduated from the University of Budapest and initially worked as a schoolteacher. In 1940 he was appointed professor in the University of Szeged and in 1967 moved to the Mathematical Institute of the Hungarian Academy of Sciences in Budapest.
His mathematical work was in algebraic number theory and abstract algebra, especially group theory. He proved that every finite tournament contains an odd number of Hamiltonian paths. He gave several proofs of the theorem on quadratic reciprocity. He proved important results concerning the invariants of the class groups of quadratic number fields. In several cases, he determined whether the ring of integers of the real quadratic field Q(√d) is Euclidean or not. He successfully generalized Hajós's theorem. This led him to the investigation of lacunary polynomials over finite fields, which he eventually published in a book. This work on lacunary polynomials has had a major influence on finite geometry, where it plays an important role in the theory of blocking sets. He introduced a very general notion of skew product of groups, of which both the Schreier extension and the Zappa–Szép product are special cases. He explicitly determined those finite noncommutative groups all of whose proper subgroups are commutative (1947). This is one of the very early results which eventually led to the classification of all finite simple groups.
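The tournament result mentioned above is easy to check by brute force for small cases; the illustrative Python script below enumerates every tournament on up to five vertices and verifies that each one contains an odd number of directed Hamiltonian paths.

<syntaxhighlight lang="python">
# Brute-force check of Rédei's theorem: every tournament (a complete graph
# with each edge given a direction) contains an odd number of Hamiltonian paths.
from itertools import combinations, permutations, product

def hamiltonian_paths(n, orient):
    # orient[(i, j)] is True when the edge between i < j points from i to j
    def beats(a, b):
        return orient[(a, b)] if a < b else not orient[(b, a)]
    return sum(all(beats(p[k], p[k + 1]) for k in range(n - 1))
               for p in permutations(range(n)))

for n in range(2, 6):
    edges = list(combinations(range(n), 2))
    for bits in product((False, True), repeat=len(edges)):
        assert hamiltonian_paths(n, dict(zip(edges, bits))) % 2 == 1
    print(f"verified for all tournaments on {n} vertices")
</syntaxhighlight>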
Rédei was the president of the János Bolyai Mathematical Society (1947–1949). He was awarded the Kossuth Prize twice. He was elected corresponding member (1949), full member (1955) of the Hungarian Academy of Sciences.
Books
1959: Algebra. Erster Teil, Mathematik und ihre Anwendungen in Physik und Technik, Reihe A, 26, Teil 1 Akademische Verlagsgesellschaft, Geest & Portig, K.-G., Leipzig, xv+797 pp.
1967: English translation, Algebra, volume 1, Pergamon Press
1963: Theorie der endlich erzeugbaren kommutativen Halbgruppen, Hamburger Mathematische Einzelschriften, 41, Physica-Verlag, Würzburg 228 pp.
1968: Foundation of Euclidean and non-Euclidean geometries according to F. Klein, Pergamon Press, 404 pp.
1970: Lückenhafte Polynome über endlichen Körpern, Lehrbücher und Monographien aus dem Gebiete der exakten Wissenschaften, Mathematische Reihe, 42, Birkhäuser Verlag, Basel-Stuttgart, 271 pp.
1973: English translation: I. Földes: Lacunary Polynomials over Finite Fields North--Holland, London and Amsterdam, American Elsevier, New York, (Europe) (US)
1989: Endliche p-Gruppen, Akadémiai Kiadó, Budapest, 304 pp.
References
1981: László Rédei, Acta Scientiarum Mathematicarum, 43: 1–2
L. Márki (1985) "A tribute to L. Rédei", Semigroup Forum, 32, 1–21.
External links
1900 births
1980 deaths
Academic staff of the University of Szeged
Members of the Hungarian Academy of Sciences
20th-century Hungarian mathematicians
Number theorists
Algebraists
Mathematicians from Austria-Hungary | László Rédei | [
"Mathematics"
] | 709 | [
"Algebra",
"Number theorists",
"Number theory",
"Algebraists"
] |
11,904,061 | https://en.wikipedia.org/wiki/Metamorphic%20reaction | A metamorphic reaction is a chemical reaction that takes place during the geological process of metamorphism wherein one assemblage of minerals is transformed into a second assemblage which is stable under the new temperature/pressure conditions resulting in the final stable state of the observed metamorphic rock.
Examples include the production of talc under varied metamorphic conditions:
serpentine + carbon dioxide → talc + magnesite + water
chlorite + quartz → kyanite + talc + water
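Assuming ideal end-member formulas for the minerals involved, the first of these reactions can be written as a balanced equation:

2 Mg3Si2O5(OH)4 (serpentine) + 3 CO2 → Mg3Si4O10(OH)2 (talc) + 3 MgCO3 (magnesite) + 3 H2O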
Polymorphic transformations
Exsolution reactions
Devolatilization reactions
Continuous reactions
Ion exchange reactions
Oxidation/reduction reactions
Reactions involving dissolved species
Chemographics
Petrogenetic grids
Schreinemaker's method
Reaction mechanisms
See also
Index mineral
Notes
Metamorphic petrology
Geochemical processes
Reaction mechanisms | Metamorphic reaction | [
"Chemistry"
] | 158 | [
"Reaction mechanisms",
"Chemical kinetics",
"Geochemical processes",
"Physical organic chemistry"
] |
11,904,093 | https://en.wikipedia.org/wiki/Ramsbottom%20carbon%20residue | Ramsbottom carbon residue (RCR) is well known in the petroleum industry as a method to calculate the carbon residue of a fuel. The carbon residue value is considered by some to give an approximate indication of the combustibility and deposit forming tendencies of the fuel.
The carbon residue of a fuel
The Ramsbottom test is used to measure carbon residues of an oil. In brief, the carbon residue of a fuel is the tendency to form carbon deposits under high temperature conditions in an inert atmosphere. This is an important value for the crude oil refinery, and usually one of the measurements in a crude oil assay. Carbon residue is an important measurement for the feed to the refinery process fluid catalytic cracking and delayed coking.
Calculation methods
There are three methods to calculate this carbon residue. It may be expressed as Ramsbottom carbon residue (RCR), Conradson carbon residue (CCR) or micro carbon residue (MCR). Numerically, the CCR value is the same as that of MCR.
Sometimes the carbon residue value can be listed as residual carbon content, RCC, which is normally the same as MCR/CCR.
For the test, 4 grams of the sample are placed into a weighed glass bulb. The bulb and sample are heated in a bath at 553°C for 20 minutes. After cooling, the bulb is weighed again and the change in mass is noted.
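The carbon residue is then the mass of material remaining in the bulb expressed as a percentage of the original sample mass. With purely illustrative numbers (not values from the standard), a 4.000 g sample leaving 0.120 g of residue would give:

RCR = 100 × (mass of residue ÷ mass of sample) = 100 × 0.120 g ÷ 4.000 g = 3.0% by mass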
See also
Conradson carbon residue
Cracking (chemistry)
Crude oil assay
Micro carbon residue
Oil refinery
Petroleum
References
Petroleum technology | Ramsbottom carbon residue | [
"Chemistry",
"Engineering"
] | 307 | [
"Petroleum",
"Petroleum engineering",
"Petroleum technology",
"Petroleum stubs"
] |
11,904,309 | https://en.wikipedia.org/wiki/Vis5D | Vis5D is a 3D visualization system used primarily for animated 3D visualization of weather simulations. It was the first system to produce fully interactive animated 3D displays of time-dynamic volumetric data sets and the first open source 3D visualization system. It is GNU GPL licensed.
Design
Vis5D was created in response to two circumstances:
1. Output data from weather models and similar simulations are sampled on time sequences of regular 3D grids and are relatively straightforward to visualize.
2. The appearance in 1988 of commercial workstations such as the Stellar GS 1000 capable of rendering Gouraud-shaded 3D graphics fast enough for smooth animation.
Vis5D takes its name from its 5D array containing time sequences of 3D spatial grids for a set of physical parameters of the atmosphere or ocean. Its graphical user interface enables users to select from various ways of visualizing each parameter (e.g., iso-surfaces, plane slices, volume renderings), and to select a combination of parameters for view. A key innovation of Vis5D is that it computes and stores the geometries and colors for such graphics over the simulated time sequence, allowing them to be animated quickly so users can watch movies of their simulations. Furthermore, users can interactively rotate the animations in 3D.
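A minimal NumPy sketch of this data layout is shown below; the grid dimensions, parameter names, slice level and iso-surface threshold are arbitrary example values, and the code is illustrative only, not part of the Vis5D API.

<syntaxhighlight lang="python">
# Sketch of Vis5D's "5D array": time steps x physical parameters x a 3D spatial grid.
import numpy as np

n_times, params = 10, ["temperature", "pressure", "wind_u"]
nz, ny, nx = 20, 40, 60
data = np.random.rand(n_times, len(params), nz, ny, nx)   # the 5D array

t, p = 3, params.index("temperature")
horizontal_slice = data[t, p, 10, :, :]   # one plane slice at vertical level z = 10
iso_mask = data[t, p] > 0.9               # voxels above an iso-surface threshold
print(horizontal_slice.shape, int(iso_mask.sum()))
</syntaxhighlight>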
Vis5D provides other visualization techniques. Users can drag a 3D cursor to a selected time and location, then trigger the calculation of a forward and backward wind trajectory from that point. Users can drag a vertical bar cursor and see, in another window, a thermodynamic diagram for the selected vertical column of atmosphere. And users can drag a 3D cursor to a selected time and location and read out individual values for parameters at that point. These examples all involve direct manipulation interfaces, as does the placement of plane slices through 3D grids.
Vis5D provides options for memory management, so that very large data sets can be visualized at individual time steps without the need to compute graphics over the simulation's entire time sequence, while smaller data sets can be visualized with full animation. Vis5D also provides an API enabling developers of other systems to incorporate Vis5D's functionality. This API is the basis of a TCL scripting capability so users can write automated scripts for producing animations.
History
Vis5D was first demonstrated, via videotape, at the December 1988 Workshop on Graphics in Meteorology at the ECMWF. The first live demos were at the January 1989 annual meeting of the American Meteorological Society.
Vis5D running on the GS 1000 was the first visualization system to provide smooth animation of 3D gridded time-sequence data sets with interactive rotation.
Vis5D was the first open-source 3D visualization system.
Vis5D is a natural for immersive virtual reality and was adapted to the CAVE for the VROOM at the 1994 SIGGRAPH conference. This became Cave5D.
References
Notes
Bibliography
W. Hibbard and D. Santek, Visualizing weather data, Workshop on Graphics in Meteorology. ECMWF, Reading, England, December 1988, pp. 63–65.
W. Hibbard, and D. Santek, Interactive Earth Science Visualization, Siggraph Video Review 43, 1989.
W. Hibbard and D. Santek, Visualizing Large Data Sets in the Earth Sciences, Computer 22, No. 8, August 1989, pp. 53–57.
W. Hibbard and D. Santek, The Vis5D System for Easy Interactive Visualization, Proc. IEEE Visualization 1990, pp 129–134.
W. Hibbard, B. Paul, D. Santek, C. Dyer, A. Battaiola, and M-F. Voidrot-Martinez, Interactive Visualization of Earth and Space Science Computations Computer 27, No. 7, July 1994, pp. 65–72.
W. Hibbard, J. Anderson, I. Foster, B. Paul, R. Jacob, C. Schafer, and M. Tyree, Exploring Coupled Atmosphere-Ocean Models Using Vis5D, International Journal of Supercomputer Applications 10, no. 2, 1996, pp. 211–222.
External links
Vis5D Home Page
Vis5D page at SourceForge
History of Vis5D and VisAD
Meteorological data and networks
Computational science
Infographics
Free data and information visualization software
Graphic software in meteorology | Vis5D | [
"Mathematics"
] | 917 | [
"Computational science",
"Applied mathematics"
] |
11,904,406 | https://en.wikipedia.org/wiki/Mazur%27s%20lemma | In mathematics, Mazur's lemma is a result in the theory of normed vector spaces. It shows that any weakly convergent sequence in a normed space has a sequence of convex combinations of its members that converges strongly to the same limit, and is used in the proof of Tonelli's theorem.
Statement of the lemma
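In a common formulation (the precise indexing conventions vary between texts), the lemma reads: let <math>(X, \| \cdot \|)</math> be a normed vector space and let <math>(u_n)_{n \in \mathbb{N}}</math> be a sequence in <math>X</math> converging weakly to some <math>u \in X</math>. Then there exists a function <math>N : \mathbb{N} \to \mathbb{N}</math> and, for each <math>n</math>, non-negative real numbers <math>\alpha(n)_n, \dots, \alpha(n)_{N(n)}</math> summing to one, such that the convex combinations

<math>v_n = \sum_{k=n}^{N(n)} \alpha(n)_k\, u_k</math>

converge strongly to <math>u</math> in <math>X</math>, that is, <math>\| v_n - u \| \to 0</math> as <math>n \to \infty</math>.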
See also
References
Banach spaces
Theorems involving convexity
Theorems in functional analysis
Lemmas in analysis
Compactness theorems | Mazur's lemma | [
"Mathematics"
] | 98 | [
"Compactness theorems",
"Theorems in mathematical analysis",
"Theorems in topology",
"Theorems in functional analysis",
"Lemmas in mathematical analysis",
"Lemmas"
] |
11,905,123 | https://en.wikipedia.org/wiki/Lynx%20Software%20Technologies | Lynx Software Technologies, Inc. (formerly LynuxWorks) is a San Jose, California software company founded in 1988. Lynx specializes in secure virtualization and open, reliable, certifiable real-time operating systems (RTOSes). Originally known as Lynx Real-Time Systems, the company changed its name to LynuxWorks in 2000 after acquiring, and merging with, ISDCorp (Integrated Software & Devices Corporation), an embedded systems company with a strong Linux background. In May 2014, the company changed its name to Lynx Software Technologies.
Lynx embraced open standards from its inception, with its original RTOS, LynxOS, featuring a UNIX-like user model and standard POSIX interfaces to embedded developers. LynxOS-178 is developed and certified to the FAA DO-178C DAL A safety standard and received the first and only FAA Reusable Software Component certificate for an RTOS. It supports ARINC API and FACE standards.
In 1989, LynxOS, the company's flagship RTOS, was selected for use in the NASA/IBM Space Station Freedom project. Lynx Software Technologies operating systems are also used in medical, industrial and communications systems around the world.
In early 2020, Lynx announced that the TR3 modernization program for the Joint Strike Fighter had adopted Lynx's LYNX MOSA.ic software development framework. The F-35 Lightning II Program (also known as the Joint Strike Fighter Program) is the US Department of Defense's focal point for defining affordable next-generation strike aircraft weapon systems. It is intended to replace a wide range of existing fighter, strike, and ground attack aircraft for the United States, the United Kingdom, Italy, Canada, Australia, the Netherlands, and their allies. After a competition between the Boeing X-32 and the Lockheed Martin X-35, a final design was chosen based on the X-35. This is the F-35 Lightning II, which will replace various tactical aircraft.
The company's technology is also used in medical, industrial and communications systems around the world by companies such as Airbus, Bosch, Denso, General Dynamics, Lockheed Martin, Raytheon, Rohde & Schwarz and Toyota.
Operating system evolution and history
LynxOS is the company's real-time operating system. It is UNIX-compatible and POSIX-compliant. It features predictable worst-case response time, preemptive scheduling, real-time priorities, ROMable kernel, and memory locking. LynxOS 7.0 is marketed as a "military grade", general purpose multi-core hard real-time operating system, and is intended for developers to embed security features during the design process, rather than adding security features after development. LynxOS and LynxOS-178 have been deployed in millions of safety-critical applications worldwide, including multiple military and aerospace systems.
In 2003, the company introduced the LynxOS-178 real-time operating system, a specialized version of LynxOS geared toward avionics applications that require certification to industry standards such as DO-178B. LynxOS-178 is a commercial off-the-shelf (COTS) RTOS that fully satisfies the objectives of the DO-178B level A specification and meets requirements for Integrated Modular Avionics (IMA) developers. LynxOS-178 is a native POSIX, hard real-time partitioning operating system developed and certified to FAA DO-178B/C DAL A safety standards. It is the only Commercial-off-the-Shelf (COTS) OS to be awarded a Reusable Software Component (RSC) certificate from the FAA for re-usability in DO-178B/C certification projects. LynxOS-178 is the primary host for real-time POSIX and FACE applications within the LYNX MOSA.ic development and integration framework. LynxOS-178 satisfies the PSE 53/54 profiles for both dedicated and multi-purpose real-time as well as FACE applications.
The LynxSecure Hypervisor ("bare metal," type 1) and separation kernel was released in 2005. Within the LYNX MOSA.ic development framework, it acts as a programmable processor partitioning system leveraging hardware virtualization capabilities of modern multi-core processors to isolate computing resources.
In February 2019, Lynx announced LYNX MOSA.ic (pronounced “mosaic”). LYNX MOSA.ic is a software development framework for rapidly building security- and safety-critical software systems out of independent application modules. Designed to deliver on the vision of the Modular Open Systems Approach (MOSA), its focus is to enable developers to collapse existing development cycles to create, certify, and deploy robust, secure platforms for manned and unmanned autonomous systems.
Lynx Software Technologies' patents on LynxOS technology include patent #5,469,571, "Operating System Architecture using Multiple Priority Light Weight kernel Task-based Interrupt Handling," November 21, 1995, and patent #5,594,903, "Operating System architecture with reserved memory space resident program code identified in file system name space," January 14, 1997.
LYNX MOSA.ic
A modular software development framework, the framework allows developers to design and integrate multi-core safety and security systems for industries such as the avionics, industrial, automotive, and UAV/satellite industries.
Features
There are three LYNX MOSA.ic bundles used for building secure applications. These bundles include: LYNX MOSA.ic for Avionics, LYNX MOSA.ic for Industrial, and LYNX MOSA.ic for UAVs/Satellites. These bundles are referred to as the "Mission Critical Edge," as they focus on security. There are differences between these bundles' features, such as LYNX MOSA.ic for Industrial's support for Azure IoT Edge and Windows 10 and LYNX MOSA.ic for Avionics' support for Arm and x86 processor architectures.
LYNX MOSA.ic is built on Lynx Software Technologies' LynxSecure separation kernel hypervisor, which helps isolate applications and manage critical system assets. LYNX MOSA.ic supports LynxOS-178, Linux, Windows, and third-party OS systems. LYNX MOSA.ic also has support for bare metal applications such as Lynx Simple Applications (LSA).
LYNX MOSA.ic's use of multi-core processors supports hardware virtualization. LYNX MOSA.ic's modular structure allows users to isolate computing resources into self-managed independent environments. TRACE32 provides JTAG debug support for the independent applications stored in LYNX MOSA.ic's modules.
History
LYNX MOSA.ic was first announced by Lynx Software Technologies in 2019. The framework was developed for integration with the U.S. Department of Defense's MOSA (Modular Open Systems Approach).
Starting in 2020, LYNX MOSA.ic is being utilized by the F-35 Joint Strike Fighter Program Office to support the development of upgraded mission system avionics for F-35 Lightning II fighter jets.
In August 2021, Lynx Software Technologies and Advantech announced a collaboration to offer Mission Critical Edge Starter Kits for IT/OT convergence through Lynx LYNX MOSA.ic for Industrial. Lynx also partnered with CODESYS Group to integrate their control automation technology into the LYNX MOSA.ic for Industrial product in August 2021. In July 2021, Lynx also partnered with Collins Aerospace, providing LYNX MOSA.ic for Avionics as the foundation for Collins Aerospace's Perigon flight computer. Lynx Software Technologies released LYNX MOSA.ic for Industrial on the Microsoft Azure marketplace in 2021.
References
Software companies based in California
Linux companies
Real-time operating systems
Embedded operating systems
Companies based in San Jose, California | Lynx Software Technologies | [
"Technology"
] | 1,566 | [
"Real-time computing",
"Real-time operating systems"
] |
11,905,141 | https://en.wikipedia.org/wiki/Caesium%20cadmium%20chloride | Caesium cadmium chloride (CsCdCl3) is a synthetic crystalline material. It belongs to the AMX3 group (where A=alkali metal, M=bivalent metal, X=halogen ions). It crystallizes in a hexagonal space group P63/mmc with unit cell lengths a = 7.403 Å and c = 18.406 Å, with one cadmium ion having D3d symmetry and the other having C3v symmetry.
It is formed from an aqueous solution of hydrochloric acid containing equimolar amounts of caesium chloride and cadmium chloride.
References
Metal halides
Crystals
Optical materials
Caesium compounds
Cadmium compounds
Chlorides | Caesium cadmium chloride | [
"Physics",
"Chemistry",
"Materials_science"
] | 150 | [
"Materials science stubs",
"Chlorides",
"Inorganic compounds",
"Salts",
"Inorganic compound stubs",
"Crystallography stubs",
"Materials",
"Optical materials",
"Crystallography",
"Crystals",
"Metal halides",
"Matter"
] |
11,905,171 | https://en.wikipedia.org/wiki/Tonelli%27s%20theorem%20%28functional%20analysis%29 | In mathematics, Tonelli's theorem in functional analysis is a fundamental result on the weak lower semicontinuity of nonlinear functionals on Lp spaces. As such, it has major implications for functional analysis and the calculus of variations. Roughly, it shows that weak lower semicontinuity for integral functionals is equivalent to convexity of the integral kernel. The result is attributed to the Italian mathematician Leonida Tonelli.
Statement of the theorem
Let <math>\Omega</math> be a bounded domain in <math>n</math>-dimensional Euclidean space <math>\mathbb{R}^n</math> and let <math>f : \mathbb{R}^m \to \mathbb{R} \cup \{ \pm \infty \}</math> be a continuous extended real-valued function. Define a nonlinear functional <math>F</math> on functions <math>u : \Omega \to \mathbb{R}^m</math> by
<math>F[u] = \int_{\Omega} f(u(x)) \, \mathrm{d}x.</math>
Then <math>F</math> is sequentially weakly lower semicontinuous on the space <math>L^p(\Omega; \mathbb{R}^m)</math> for <math>1 < p < +\infty</math> and weakly-∗ lower semicontinuous on <math>L^\infty(\Omega; \mathbb{R}^m)</math> if and only if <math>f</math> is convex.
See also
References
(Theorem 10.16)
Calculus of variations
Convex analysis
Function spaces
Measure theory
Theorems in functional analysis
Variational analysis | Tonelli's theorem (functional analysis) | [
"Mathematics"
] | 176 | [
"Theorems in mathematical analysis",
"Function spaces",
"Vector spaces",
"Space (mathematics)",
"Theorems in functional analysis"
] |
335,054 | https://en.wikipedia.org/wiki/Axion | An axion () is a hypothetical elementary particle originally theorized in 1978 independently by Frank Wilczek and Steven Weinberg as the Goldstone boson of Peccei–Quinn theory, which had been proposed in 1977 to solve the strong CP problem in quantum chromodynamics (QCD). If axions exist and have low mass within a specific range, they are of interest as a possible component of cold dark matter.
History
Strong CP problem
As shown by Gerard 't Hooft, strong interactions of the standard model, QCD, possess a non-trivial vacuum structure that in principle permits violation of the combined symmetries of charge conjugation and parity, collectively known as CP. Together with effects generated by weak interactions, the effective periodic strong CP-violating term, θ, appears as a Standard Model input – its value is not predicted by the theory, but must be measured. However, large CP-violating interactions originating from QCD would induce a large electric dipole moment (EDM) for the neutron. Experimental constraints on the unobserved EDM imply that CP violation from QCD must be extremely tiny and thus θ must itself be extremely small. Since θ could have any value between 0 and 2π, this presents a "naturalness" problem for the standard model. Why should this parameter find itself so close to zero? (Or, why should QCD find itself CP-preserving?) This question constitutes what is known as the strong CP problem.
Prediction
In 1977, Roberto Peccei and Helen Quinn postulated a more elegant solution to the strong CP problem, the Peccei–Quinn mechanism. The idea is to effectively promote θ to a field. This is accomplished by adding a new global symmetry (called a Peccei–Quinn (PQ) symmetry) that becomes spontaneously broken. This results in a new particle, as shown independently by Frank Wilczek and Steven Weinberg, that fills the role of θ, naturally relaxing the CP-violation parameter to zero. Wilczek named this new hypothesized particle the "axion" after a brand of laundry detergent because it "cleaned up" a problem, while Weinberg called it "the higglet". Weinberg later agreed to adopt Wilczek's name for the particle. Because it has a non-zero mass, the axion is a pseudo-Nambu–Goldstone boson.
Axion dark matter
QCD effects produce an effective periodic potential in which the axion field moves. Expanding the potential about one of its minima, one finds that the product of the axion mass with the axion decay constant is determined by the topological susceptibility of the QCD vacuum. An axion with mass much less than 60 keV is long-lived and weakly interacting: A perfect dark matter candidate.
The oscillations of the axion field about the minimum of the effective potential, the so-called misalignment mechanism, generate a cosmological population of cold axions with an abundance depending on the mass of the axion. With a mass above 5 μeV/c2 (about 10−11 times the electron mass) axions could account for dark matter, and thus be both a dark-matter candidate and a solution to the strong CP problem. If inflation occurs at a low scale and lasts sufficiently long, the axion mass can be as low as 1 peV/c2.
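The misalignment mechanism can be illustrated with a toy numerical model. The Python sketch below integrates the homogeneous field equation <math>\ddot\theta + 3H\dot\theta + m^2\sin\theta = 0</math> in a radiation-dominated background with <math>H = 1/(2t)</math>, taking the mass constant (rather than temperature-dependent) and equal to 1 in arbitrary units; the initial misalignment angle, time span and step size are illustrative choices. The field stays frozen while 3H is much larger than m and then undergoes damped oscillations about the CP-conserving minimum once the expansion rate drops below the axion mass.

<syntaxhighlight lang="python">
# Toy integration of the axion misalignment mechanism in a radiation-dominated
# universe: theta'' + 3 H theta' + m^2 sin(theta) = 0 with H = 1/(2 t), m = 1.
import math

def rhs(t, theta, dtheta, m=1.0):
    H = 1.0 / (2.0 * t)
    return -3.0 * H * dtheta - m * m * math.sin(theta)

def integrate(theta0, t0=0.01, t_end=60.0, dt=0.001):
    t, theta, dtheta = t0, theta0, 0.0        # field initially frozen at theta0
    samples = []
    while t < t_end:
        # classical 4th-order Runge-Kutta step for the pair (theta, dtheta)
        k1t, k1v = dtheta, rhs(t, theta, dtheta)
        k2t = dtheta + 0.5 * dt * k1v
        k2v = rhs(t + 0.5 * dt, theta + 0.5 * dt * k1t, k2t)
        k3t = dtheta + 0.5 * dt * k2v
        k3v = rhs(t + 0.5 * dt, theta + 0.5 * dt * k2t, k3t)
        k4t = dtheta + dt * k3v
        k4v = rhs(t + dt, theta + dt * k3t, k4t)
        theta += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6.0
        dtheta += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        samples.append((t, theta))
    return samples

for t, theta in integrate(theta0=1.0)[::10000]:
    print(f"t = {t:6.2f}   theta = {theta:+.3f}")   # frozen early, oscillating and decaying late
</syntaxhighlight>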
There are two distinct scenarios for how the axion field begins its evolution, depending on the following two conditions: (a) the Peccei–Quinn symmetry is spontaneously broken before or during cosmic inflation, and (b) the symmetry is never restored afterwards (for example, by thermal fluctuations during reheating). Broadly speaking, one of the two scenarios outlined in the two following subsections occurs:
Pre-inflationary scenario
If both (a) and (b) are satisfied, cosmic inflation selects one patch of the Universe within which the spontaneous breaking of the PQ symmetry leads to a homogeneous initial value of the axion field. In this "pre-inflationary" scenario, topological defects are inflated away and do not contribute to the axion energy density. However, bounds coming from isocurvature modes severely constrain this scenario, which requires a relatively low energy scale of inflation to be viable.
Post-inflationary scenario
If at least one of the conditions (a) or (b) is violated, the axion field takes different values within patches that are initially out of causal contact, but that today populate the volume enclosed by our Hubble horizon. In this scenario, isocurvature fluctuations in the PQ field randomise the axion field, with no preferred value in the power spectrum.
The proper treatment in this scenario is to solve numerically the equation of motion of the PQ field in an expanding Universe, in order to capture all features coming from the misalignment mechanism, including the contribution from topological defects like "axionic" strings and domain walls. An axion mass estimate between 0.05 and 1.50 meV was reported by Borsanyi et al. (2016). The result was calculated by simulating the formation of axions during the post-inflation period on a supercomputer.
Progress in the late 2010s in determining the present abundance of a KSVZ-type axion using numerical simulations led to values between 0.02 and 0.1 meV, although these results have been challenged over the details of the power spectrum of axions emitted from strings.
Phenomenology of the axion field
Searches
The axion models originally proposed by Wilczek and by Weinberg chose axion coupling strengths that were so strong that they would have already been detected in prior experiments. It had been thought that the Peccei–Quinn mechanism for solving the strong CP problem required such large couplings. However, it was soon realized that "invisible axions" with much smaller couplings also work. Two such classes of models are known in the literature as KSVZ (Kim–Shifman–Vainshtein–Zakharov) and DFSZ (Dine–Fischler–Srednicki–Zhitnitsky).
The very weakly coupled axion is also very light, because axion couplings and mass are proportional. Satisfaction with "invisible axions" changed when it was shown that any very light axion would have been overproduced in the early universe and therefore must be excluded.
Maxwell's equations with axion modifications
Pierre Sikivie computed how Maxwell's equations are modified in the presence of an axion in 1983. He showed that these axions could be detected on Earth by converting them to photons, using a strong magnetic field, motivating a number of experiments. For example, the Axion Dark Matter Experiment converts axion dark matter to microwave photons, the CERN Axion Solar Telescope converts axions produced in the Sun's core to X-rays, and other experiments search for axions produced in laser light. As of the early 2020s, there are dozens of proposed or ongoing experiments searching for axion dark matter.
The equations of axion electrodynamics are typically written in "natural units", where the reduced Planck constant ħ, the speed of light c, and the permittivity of free space ε0 all reduce to 1 when expressed in these units. In this unit system, the electrodynamic equations are:
{| class="wikitable" style="text-align: center;"
|-
! scope="col" style="width: 15em;" | Name
! scope="col" | Equations
|-
! scope="row" | Gauss's law
| <math>\nabla \cdot \mathbf{E} = \rho - g_{a\gamma\gamma}\, \nabla a \cdot \mathbf{B}</math>
|-
! scope="row" | Gauss's law for magnetism
| <math>\nabla \cdot \mathbf{B} = 0</math>
|-
! scope="row" | Faraday's law
| <math>\nabla \times \mathbf{E} = -\dot{\mathbf{B}}</math>
|-
! scope="row" | Ampère–Maxwell law
| <math>\nabla \times \mathbf{B} = \dot{\mathbf{E}} + \mathbf{J} + g_{a\gamma\gamma}\left( \dot{a}\,\mathbf{B} + \nabla a \times \mathbf{E} \right)</math>
|-
! scope="row" | Axion field's equation of motion
| <math>\ddot{a} - \nabla^2 a + m_a^2\, a = g_{a\gamma\gamma}\, \mathbf{E} \cdot \mathbf{B}</math>
|}
Above, a dot above a variable denotes its time derivative; the dot spaced between variables is the vector dot product; <math>a</math> is the axion field, <math>m_a</math> its mass, and the factor <math>g_{a\gamma\gamma}</math> is the axion-to-photon coupling constant rendered in "natural units".
Alternative forms of these equations have been proposed, which imply completely different physical signatures. For example, Visinelli wrote a set of equations that imposed duality symmetry, assuming the existence of magnetic monopoles. However, these alternative formulations are less theoretically motivated, and in many cases cannot even be derived from an action.
Analogous effect for topological insulators
A term analogous to the one that would be added to Maxwell's equations to account for axions also appears in recent (2008) theoretical models for topological insulators giving an effective axion description of the electrodynamics of these materials.
This term leads to several interesting predicted properties including a quantized magnetoelectric effect. Evidence for this effect has been given in THz spectroscopy experiments performed at the Johns Hopkins University on quantum regime thin film topological insulators developed at Rutgers University.
In 2019, a team at the Max Planck Institute for Chemical Physics of Solids published their detection of an axion insulator phase of a Weyl semimetal material. In the axion insulator phase, the material has an axion-like quasiparticle – an excitation of electrons that behave together as an axion – and its discovery demonstrates the consistency of axion electrodynamics as a description of the interaction of axion-like particles with electromagnetic fields. In this way, the discovery of axion-like quasiparticles in axion insulators provides motivation to use axion electrodynamics to search for the axion itself.
Experiments
Despite not yet having been found, the axion has been well studied for over 40 years, giving time for physicists to develop insight into axion effects that might be detected. Several experimental searches for axions are presently underway; most exploit axions' expected slight interaction with photons in strong magnetic fields. Axions are also one of the few remaining plausible candidates for dark matter particles, and might be discovered in some dark matter experiments.
Direct conversion in a magnetic field
Several experiments search for astrophysical axions by the Primakoff effect, which converts axions to photons and vice versa in electromagnetic fields.
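For an axion traversing a region of length L threaded by a transverse magnetic field B, the coherent axion-photon conversion probability exploited by such experiments is commonly quoted, in natural units and in the limit of negligible axion-photon momentum mismatch, as <math>P_{a\to\gamma} \simeq \left( \tfrac{1}{2}\, g_{a\gamma\gamma} B L \right)^2</math>, with an additional oscillatory form factor when the momentum mismatch is not negligible.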
The Axion Dark Matter Experiment (ADMX) at the University of Washington uses a strong magnetic field to detect the possible weak conversion of axions to microwaves. ADMX searches the galactic dark matter halo for axions resonant with a cold microwave cavity. ADMX has excluded optimistic axion models in the 1.9–3.53 μeV range. From 2013 to 2018 a series of upgrades were done and it is taking new data, including at 4.9–6.2 μeV. In December 2021 it excluded the 3.3–4.2 μeV range for the KSVZ model.
Other experiments of this type include DMRadio, HAYSTAC, CULTASK, and ORGAN. HAYSTAC completed the first scanning run of a haloscope above 20 μeV in the late 2010s.
Polarized light in a magnetic field
The Italian PVLAS experiment searches for polarization changes of light propagating in a magnetic field. The concept was first put forward in 1986 by Luciano Maiani, Roberto Petronzio and Emilio Zavattini. A rotation claim in 2006 was excluded by an upgraded setup. An optimized search began in 2014.
Light shining through walls
Another technique is so called "light shining through walls", where light passes through an intense magnetic field to convert photons into axions, which then pass through metal and are reconstituted as photons by another magnetic field on the other side of the barrier. Experiments by BFRS and a team led by Rizzo ruled out an axion cause. GammeV saw no events, reported in a 2008 Physics Review Letter. ALPS I conducted similar runs, setting new constraints in 2010; ALPS II began collecting data in May 2023. OSQAR found no signal, limiting coupling, and will continue.
Astrophysical axion searches
Axion-like bosons could have a signature in astrophysical settings. In particular, several works have proposed axion-like particles as a solution to the apparent transparency of the Universe to TeV photons. It has also been demonstrated that, in the large magnetic fields threading the atmospheres of compact astrophysical objects (e.g., magnetars), photons will convert much more efficiently. This would in turn give rise to distinct absorption-like features in the spectra detectable by early 21st century telescopes. A new (2009) promising means is looking for quasi-particle refraction in systems with strong magnetic gradients. In particular, the refraction will lead to beam splitting in the radio light curves of highly magnetized pulsars and allow much greater sensitivities than currently achievable. The International Axion Observatory (IAXO) is a proposed fourth generation helioscope.
Axions can resonantly convert into photons in the magnetospheres of neutron stars. The emerging photons lie in the GHz frequency range and can be potentially picked up in radio detectors, leading to a sensitive probe of the axion parameter space. This strategy has been used to constrain the axion–photon coupling in the 5–11 μeV mass range, by re-analyzing existing data from the Green Bank Telescope and the Effelsberg 100 m Radio Telescope. A novel, alternative strategy consists in detecting the transient signal from the encounter between a neutron star and an axion minicluster in the Milky Way.
Axions can be produced in the Sun's core when X-rays scatter in strong electric fields. The CAST solar telescope is underway, and has set limits on coupling to photons and electrons. Axions may also be produced within neutron stars by nucleon–nucleon bremsstrahlung. The subsequent decay of axions to gamma rays allows constraints on the axion mass to be placed from observations of neutron stars in gamma-rays using the Fermi Gamma-ray Space Telescope. From an analysis of four neutron stars, Berenji et al. (2016) obtained a 95% confidence interval upper limit on the axion mass of 0.079 eV. In 2021 it has been also suggested that a reported excess of hard X-ray emission from a system of neutron stars known as the magnificent seven could be explained as axion emission.
In 2016, a theoretical team from Massachusetts Institute of Technology devised a possible way of detecting axions using a strong magnetic field that need be no stronger than that produced in an MRI scanning machine. It would show variation, a slight wavering, that is linked to the mass of the axion. Results from the ensuing experiment published in 2021 reported no evidence of axions in the mass range from 4.1x10−10 to 8.27x10−9 eV.
In 2022 the polarized light measurements of Messier 87* by the Event Horizon Telescope were used to constrain the mass of the axion assuming that hypothetical clouds of axions could form around a black hole, rejecting the approximate – range of mass values.
Searches for resonance effects
Resonance effects may be evident in Josephson junctions from a supposed high flux of axions from the galactic halo with a mass of 110 μeV and a density below the implied local dark matter density, indicating that such axions would not be abundant enough to be the sole component of dark matter. The ORGAN experiment plans to conduct a direct test of this result via the haloscope method.
Dark matter recoil searches
Dark matter cryogenic detectors have searched for electron recoils that would indicate axions. CDMS published in 2009 and EDELWEISS set coupling and mass limits in 2013. UORE and XMASS also set limits on solar axions in 2013. XENON100 used a 225-day run to set the best coupling limits to date and exclude some parameters.
Nuclear spin precession
While Schiff's theorem states that a static nuclear electric dipole moment (EDM) does not produce atomic and molecular EDMs, the axion induces a nuclear EDM that oscillates at the Larmor frequency. If this nuclear EDM oscillation frequency is in resonance with an external electric field, a precession in the nuclear spin rotation occurs. This precession can be measured using precession magnetometry and, if detected, would be evidence for axions.
An experiment using this technique is the Cosmic Axion Spin Precession Experiment (CASPEr).
Searches at particle colliders
Axions may also be produced at colliders, in particular in electron-positron collisions as well as in ultra-peripheral heavy ion collisions at the Large Hadron Collider at CERN, reinterpreting the light-by-light scattering process. Those searches are sensitive for rather large axion masses between 100 MeV/c2 and hundreds of GeV/c2. Assuming a coupling of axions to the Higgs boson, searches for anomalous Higgs boson decays into two axions can theoretically provide even stronger limits.
Disputed detections
It was reported in 2014 that evidence for axions may have been detected as a seasonal variation in observed X-ray emission that would be expected from conversion in the Earth's magnetic field of axions streaming from the Sun. Studying 15 years of data by the European Space Agency's XMM-Newton observatory, a research group at Leicester University noticed a seasonal variation for which no conventional explanation could be found. One potential explanation for the variation, described as "plausible" by the senior author of the paper, is the known seasonal variation in visibility to XMM-Newton of the sunward magnetosphere in which X-rays may be produced by axions from the Sun's core.
This interpretation of the seasonal variation is disputed by two Italian researchers, who identify flaws in the arguments of the Leicester group that are said to rule out an interpretation in terms of axions. Most importantly, the scattering in angle assumed by the Leicester group to be caused by magnetic field gradients during the photon production, necessary to allow the X-rays to enter the detector that cannot point directly at the sun, would dissipate the flux so much that the probability of detection would be negligible.
In 2013, Christian Beck suggested that axions might be detectable in Josephson junctions; and in 2014, he argued that a signature, consistent with a mass ≈110 μeV, had in fact been observed in several preexisting experiments.
In 2020, the XENON1T experiment at the Gran Sasso National Laboratory in Italy reported a result suggesting the discovery of solar axions. The results were not significant at the 5-sigma level required for confirmation, and other explanations of the data were possible though less likely. New observations made in July 2022 after the observatory upgrade to XENONnT discarded the excess, thus ending the possibility of new particle discovery.
Properties
Predictions
One theory of axions relevant to cosmology had predicted that they would have no electric charge, a very small mass in the range from to , and very low interaction cross-sections for strong and weak forces. Because of their properties, axions would interact only minimally with ordinary matter. Axions would also change to and from photons in magnetic fields.
Cosmological implications
The properties of the axion, such as the axion mass, decay constant, and abundance, all have implications for cosmology.
Inflation theory suggests that if they exist, axions would be created abundantly during the Big Bang. Because of a unique coupling to the instanton field of the primordial universe (the "misalignment mechanism"), an effective dynamical friction is created during the acquisition of mass, following cosmic inflation. This robs all such primordial axions of their kinetic energy.
Ultralight axions (ULAs), with masses of order 10−22 eV, are a kind of scalar field dark matter that seems to solve the small-scale problems of CDM. A single ULA with a GUT-scale decay constant provides the correct relic density without fine-tuning.
Axions would also have stopped interaction with normal matter at a different moment after the Big Bang than other more massive dark particles. The lingering effects of this difference could perhaps be calculated and observed astronomically.
If axions have a low mass, thus preventing other decay modes (since there are no lighter particles to decay into), the low coupling constant predicts that axions would not be scattered out of their state despite their small mass, so that the universe would be filled with a very cold Bose–Einstein condensate of primordial axions. Hence, axions could plausibly explain the dark matter problem of physical cosmology. Observational studies are underway, but they are not yet sufficiently sensitive to probe the mass regions in which axions would solve the dark matter problem, although the fuzzy dark matter region is starting to be probed via superradiance. High-mass axions of the kind searched for by Jain and Singh (2007) would not persist in the modern universe. Moreover, if axions exist, scatterings with other particles in the thermal bath of the early universe unavoidably produce a population of hot axions.
Low mass axions could have additional structure at the galactic scale. If they continuously fall into galaxies from the intergalactic medium, they would be denser in "caustic" rings, just as the stream of water in a continuously flowing fountain is thicker at its peak. The gravitational effects of these rings on galactic structure and rotation might then be observable. Other cold dark matter theoretical candidates, such as WIMPs and MACHOs, could also form such rings, but because such candidates are fermionic and thus experience friction or scattering among themselves, the rings would be less sharply defined.
João G. Rosa and Thomas W. Kephart suggested that axion clouds formed around unstable primordial black holes might initiate a chain of reactions that radiate electromagnetic waves, allowing their detection. When adjusting the mass of the axions to explain dark matter, the pair discovered that the value would also explain the luminosity and wavelength of fast radio bursts, being a possible origin for both phenomena. In 2022 a similar hypothesis was used to constrain the mass of the axion from data of M87*.
In 2020, it was proposed that the axion field might actually have influenced the evolution of the early Universe by creating more imbalance between the amounts of matter and antimatter – which possibly resolves the baryon asymmetry problem.
Supersymmetry
In supersymmetric theories the axion has both a scalar and a fermionic superpartner. The fermionic superpartner of the axion is called the axino, the scalar superpartner is called the saxion or dilaton. They are all bundled in a chiral superfield.
The axino has been predicted to be the lightest supersymmetric particle in such a model. In part due to this property, it is also considered a candidate for dark matter.
See also
Dark photon
List of hypothetical particles
Weakly interacting slender particle
Footnotes
References
Sources
External links
Astroparticle physics
Concepts in astrophysics
Bosons
Dark matter
Hypothetical elementary particles
Subatomic particles with spin 0
Physics beyond the Standard Model
Quantum chromodynamics | Axion | [
"Physics",
"Astronomy"
] | 4,764 | [
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astrophysics",
"Concepts in astronomy",
"Astroparticle physics",
"Unsolved problems in physics",
"Astrophysics",
"Bosons",
"Subatomic particles",
"Particle physics",
"Exotic matter",
"Hypothetical elementary particles",
"Physics b... |
335,094 | https://en.wikipedia.org/wiki/Reducing%20atmosphere | A reducing atmosphere is an atmosphere in which oxidation is prevented by absence of oxygen and other oxidizing gases or vapours, and which may contain actively reductant gases such as hydrogen, carbon monoxide, methane and hydrogen sulfide that would be readily oxidized to remove any free oxygen. Although Early Earth had a reducing prebiotic atmosphere prior to the Proterozoic eon, starting at about 2.5 billion years ago in the late Neoarchaean period, the Earth's atmosphere experienced a significant rise in oxygen and transitioned to an oxidizing atmosphere with a surplus of molecular oxygen (dioxygen, O2) as the primary oxidizing agent.
Foundry operations
The principal mission of an iron foundry is the conversion of iron oxides (purified iron ores) to iron metal. This reduction is usually effected using a reducing atmosphere consisting of some mixture of natural gas, hydrogen (H2), and carbon monoxide. The byproduct is carbon dioxide.
Metal processing
In metal processing, a reducing atmosphere is used in annealing ovens for relaxation of metal stresses without corroding the metal. A non-oxidizing gas, usually nitrogen or argon, is typically used as a carrier gas so that diluted amounts of reducing gases may be used. Typically, this is achieved through using the combustion products of fuels and tailoring the ratio of CO:CO2. However, other common reducing atmospheres in the metal processing industries consist of dissociated ammonia, vacuum, and direct mixing of appropriately pure gases of N2, Ar, and H2.
A reducing atmosphere is also used to produce specific effects on ceramic wares being fired. A reduction atmosphere is produced in a fuel fired kiln by reducing the draft and depriving the kiln of oxygen. This diminished level of oxygen causes incomplete combustion of the fuel and raises the level of carbon inside the kiln. At high temperatures the carbon will bond with and remove the oxygen in the metal oxides used as colorants in the glazes. This loss of oxygen results in a change in the color of the glazes because it allows the metals in the glaze to be seen in an unoxidized form. A reduction atmosphere can also affect the color of the clay body. If iron is present in the clay body, as it is in most stoneware, then it will be affected by the reduction atmosphere as well.
In most commercial incinerators, exactly the same conditions are created to encourage the release of carbon-bearing fumes. These fumes are then oxidized in reburn tunnels where oxygen is injected progressively. The exothermic oxidation reaction maintains the temperature of the reburn tunnels. This system allows lower temperatures to be employed in the incinerator section, where the solids are volumetrically reduced.
Origin of life
The atmosphere of Early Earth is widely speculated to have been reducing. The Miller–Urey experiment, related to some hypotheses for the origin of life, entailed reactions in a reducing atmosphere composed of a mixed atmosphere of methane, ammonia and hydrogen sulfide. Some hypotheses for the origin of life invoke a reducing atmosphere consisting of hydrogen cyanide (HCN). Experiments show that HCN can polymerize in the presence of ammonia to give a variety of products including amino acids. The same principle applies to Mars, Venus and Titan.
Cyanobacteria are suspected to be the first photoautotrophs that evolved oxygenic photosynthesis, which over the latter half of the Archaean eon eventually depleted all reductants in the Earth's oceans, terrestrial surface and atmosphere, gradually increasing the oxygen concentration in the atmosphere and changing it to what is known as an oxidizing atmosphere. This rising oxygen initially led to a 300 million-year-long ice age that devastated the then-mostly anaerobe-dominated biosphere, forcing the surviving anaerobic colonies to evolve into symbiotic microbial mats with the newly evolved aerobes. Some aerobic bacteria eventually became endosymbionts within other anaerobes (likely archaea), and the resultant symbiogenesis led to the evolution of a completely new lineage of life: the eukaryotes, which took advantage of mitochondrial aerobic respiration to power their cellular activities, allowing life to thrive and evolve into ever more complex forms. The increased oxygen in the atmosphere also eventually created the ozone layer, which shielded out harmful ionizing ultraviolet radiation that otherwise would have photodissociated surface water and rendered life impossible on land and at the ocean surface.
In contrast to the hypothesized early reducing atmosphere, evidence exists that Hadean atmospheric oxygen levels were similar to those of today. These results suggest that prebiotic building blocks were delivered from elsewhere in the galaxy. The results, however, do not run contrary to existing theories on life's journey from anaerobic to aerobic organisms. They quantify the nature of gas molecules containing carbon, hydrogen, and sulphur in the earliest atmosphere, but they shed no light on the much later rise of free oxygen in the air.
See also
Notes
Metallurgy
Planetary science
Pottery
Redox | Reducing atmosphere | [
"Chemistry",
"Materials_science",
"Astronomy",
"Engineering"
] | 1,067 | [
"Redox",
"Metallurgy",
"Materials science",
"Reducing agents",
"Electrochemistry",
"nan",
"Planetary science",
"Astronomical sub-disciplines"
] |
335,109 | https://en.wikipedia.org/wiki/Compressive%20strength | In mechanics, compressive strength (or compression strength) is the capacity of a material or structure to withstand loads tending to reduce size (compression). It is opposed to tensile strength which withstands loads tending to elongate, resisting tension (being pulled apart). In the study of strength of materials, compressive strength, tensile strength, and shear strength can be analyzed independently.
Some materials fracture at their compressive strength limit; others deform irreversibly, so a given amount of deformation may be considered as the limit for compressive load. Compressive strength is a key value for design of structures.
Compressive strength is often measured on a universal testing machine. Measurements of compressive strength are affected by the specific test method and conditions of measurement. Compressive strengths are usually reported in relationship to a specific technical standard.
Introduction
When a specimen of material is loaded in such a way that it extends it is said to be in tension. On the other hand, if the material compresses and shortens it is said to be in compression.
On an atomic level, molecules or atoms are forced together when in compression, whereas they are pulled apart when in tension. Since atoms in solids always try to find an equilibrium position and spacing relative to other atoms, forces arise throughout the entire material which oppose both tension and compression. The phenomena prevailing on an atomic level are therefore similar.
The "strain" is the relative change in length under applied stress; positive strain characterizes an object under tension load which tends to lengthen it, and a compressive stress that shortens an object gives negative strain. Tension tends to pull small sideways deflections back into alignment, while compression tends to amplify such deflection into buckling.
Compressive strength is measured on materials, components, and structures.
The ultimate compressive strength of a material is the maximum uniaxial compressive stress that it can withstand before complete failure. This value is typically determined through a compressive test conducted using a universal testing machine. During the test, a steadily increasing uniaxial compressive load is applied to the test specimen until it fails. The specimen, often cylindrical in shape, experiences both axial shortening and lateral expansion under the load. As the load increases, the machine records the corresponding deformation, plotting a stress-strain curve that would look similar to the following:
The compressive strength of the material corresponds to the stress at the red point shown on the curve. In a compression test, there is a linear region where the material follows Hooke's law. Hence, for this region, σ = Eε, where E, this time, refers to the Young's modulus for compression. In this region, the material deforms elastically and returns to its original length when the stress is removed.
This linear region terminates at what is known as the yield point. Above this point the material behaves plastically and will not return to its original length once the load is removed.
There is a difference between the engineering stress and the true stress. By its basic definition the uniaxial stress is given by:
σ = F/A
where F is the load applied [N] and A is the area [m2].
As stated, the area of the specimen varies on compression. In reality, therefore, the area is some function of the applied load, i.e. A = f(F). In practice, the stress is often calculated as the force divided by the area at the start of the experiment. This is known as the engineering stress, and is defined by
σe = F/A0
where A0 is the original specimen area [m2].
Correspondingly, the engineering strain is defined by
εe = (l − l0)/l0
where l is the current specimen length [m] and l0 is the original specimen length [m]. True strain, also known as logarithmic strain or natural strain, provides a more accurate measure of large deformations, such as in materials like ductile metals. The compressive strength therefore corresponds to the point on the engineering stress–strain curve defined by
σCS = F*/A0 and εCS = (l* − l0)/l0
where F* is the load applied just before crushing and l* is the specimen length just before crushing.
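As a minimal illustrative sketch of these definitions (the variable names, test data and units below are hypothetical, not taken from any standard), the engineering stress, engineering strain and compressive strength could be computed from recorded load and length data as follows:

import numpy as np

def engineering_curve(load_N, length_m, original_area_m2, original_length_m):
    """Return engineering stress [Pa] and engineering strain [-] for each data point."""
    sigma_e = np.asarray(load_N) / original_area_m2                          # sigma_e = F / A0
    eps_e = (np.asarray(length_m) - original_length_m) / original_length_m   # (l - l0) / l0
    return sigma_e, eps_e

# Hypothetical test data; compressive loads are taken as positive magnitudes here.
loads = [0.0, 5e3, 10e3, 15e3, 18e3]             # applied load F [N]
lengths = [0.100, 0.0995, 0.099, 0.0982, 0.097]  # specimen length l [m]
A0, l0 = 7.85e-5, 0.100                          # original area [m^2] and original length [m]

sigma_e, eps_e = engineering_curve(loads, lengths, A0, l0)
# Compressive strength: engineering stress at the load applied just before crushing (last point).
print(f"compressive strength = {sigma_e[-1] / 1e6:.1f} MPa at engineering strain {eps_e[-1]:.4f}")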
Deviation of engineering stress from true stress
When a uniaxial compressive load is applied to an object it will become shorter and spread laterally, so its original cross-sectional area (A0) increases to the loaded area (A). Thus the true stress (σ′) deviates from the engineering stress (σe). Tests that measure the engineering stress at the point of failure in a material are often sufficient for many routine applications, such as quality control in concrete production. However, determining the true stress in materials under compressive loads is important for research focused on the properties of new materials and their processing.
The geometry of test specimens and friction can significantly influence the results of compressive stress tests. Friction at the contact points between the testing machine and the specimen can restrict the lateral expansion at its ends (also known as 'barreling'), leading to non-uniform stress distribution. This is discussed in the section on contact with friction.
Frictionless contact
With a compressive load on a test specimen it will become shorter and spread laterally, so its cross-sectional area increases and the true compressive stress is
σ′ = F/A
and the engineering stress is
σe = F/A0
The cross-sectional area (A) and consequently the stress (σ′) are uniform along the length of the specimen because there are no external lateral constraints. This condition represents an ideal test condition. For all practical purposes the volume of a high bulk modulus material (e.g. solid metals) is not changed by uniaxial compression. So
A·l = A0·l0
Using the strain equation from above,
A = A0/(1 + εe)
and
σ′ = σe·(1 + εe)
Note that compressive strain is negative, so the true stress (σ′) is less than the engineering stress (σe). The true strain (ε′ = ln(l/l0)) can be used in these formulas instead of the engineering strain (εe) when the deformation is large.
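A minimal sketch of this constant-volume conversion (valid, as stated above, only for materials whose volume is essentially unchanged by compression; the numbers are illustrative):

def true_stress_constant_volume(sigma_e, eps_e):
    """Convert engineering stress to true stress assuming constant specimen volume.

    Uses sigma' = sigma_e * (1 + eps_e); for compression eps_e < 0, so the
    true stress magnitude is smaller than the engineering stress.
    """
    return sigma_e * (1.0 + eps_e)

# Example: 200 MPa engineering stress at 3 % compressive (negative) engineering strain.
print(true_stress_constant_volume(200e6, -0.03) / 1e6)  # -> 194.0 MPa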
Contact with friction
As the load is applied, friction at the interface between the specimen and the test machine restricts the lateral expansion at its ends. This has two effects:
It can cause non-uniform stress distribution across the specimen, with higher stress at the centre and lower stress at the edges, which affects the accuracy of the result.
It causes a barreling effect (bulging at the centre) in ductile materials. This changes the specimen's geometry and affects its load-bearing capacity, leading to a higher apparent compressive strength.
Various methods can be used to reduce the friction according to the application:
Applying a suitable lubricant, such as MoS2, oil or grease; however, care must be taken not to affect the material properties with the lubricant used.
Use of PTFE or other low-friction sheets between the test machine and specimen.
A spherical or self-aligning test fixture, which can minimize friction by applying the load more evenly across the specimen's surface.
Three methods can be used to compensate for the effects of friction on the test result:
Correction formulas
Geometric extrapolation
Finite element analysis
Correction formulas
Round test specimens made from ductile materials with a high bulk modulus, such as metals, tend to form a barrel shape under axial compressive loading due to frictional contact at the ends. For this case the equivalent true compressive stress can be calculated from a correction formula expressed in terms of three measured quantities:
the loaded length of the test specimen,
the loaded diameter of the test specimen at its ends, and
the maximum loaded diameter of the test specimen.
Note that if there is frictionless contact between the ends of the specimen and the test machine, the bulge radius becomes infinite and the maximum loaded diameter equals the end diameter. In this case the correction formula yields the same result as the frictionless expression for the true stress σ′, because the cross-sectional area then changes only in proportion to the ratio of the original length to the loaded length.
The parameters obtained from a test result (the loaded length and the end and maximum loaded diameters) can be used with these formulas to calculate the equivalent true stress at failure.
The graph of specimen shape effect shows how the ratio of true stress to engineering stress (σ´/σe) varies with the aspect ratio of the test specimen. The curves were calculated using the formulas provided above, based on the specific values presented in the table for specimen shape effect calculations. For the curves where end restraint is applied to the specimens, they are assumed to be fully laterally restrained, meaning that the coefficient of friction at the contact points between the specimen and the testing machine is greater than or equal to one (μ ⩾ 1). As shown in the graph, as the relative length of the specimen increases, the ratio of true to engineering stress (σ´/σe) approaches the value corresponding to frictionless contact between the specimen and the machine, which is the ideal test condition.
Geometric extrapolation
As shown in the section on correction formulas, as the length of test specimens is increased and their aspect ratio approaches zero, the compressive stresses (σ) approach the true value (σ′). However, conducting tests with excessively long specimens is impractical, as they would fail by buckling before reaching the material's true compressive strength. To overcome this, a series of tests can be conducted using specimens with varying aspect ratios, and the true compressive strength can then be determined through extrapolation.
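A sketch of such an extrapolation, assuming apparent strengths have been measured at several aspect ratios and that a simple linear fit is adequate; both the data and the fitting model here are illustrative only:

import numpy as np

# Hypothetical measurements: apparent compressive strength rises with aspect ratio
# because end friction constrains the shorter specimens more strongly.
aspect_ratio = np.array([0.25, 0.5, 0.75, 1.0])    # diameter / length for each test specimen
strength_mpa = np.array([401.0, 412.0, 425.0, 436.0])

# Fit a straight line and read off the intercept, i.e. the value as aspect ratio -> 0,
# which approximates the friction-free ("true") compressive strength.
slope, intercept = np.polyfit(aspect_ratio, strength_mpa, 1)
print(f"extrapolated true compressive strength ~ {intercept:.0f} MPa")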
Finite element analysis
Comparison of compressive and tensile strengths
Concrete and ceramics typically have much higher compressive strengths than tensile strengths. Composite materials, such as glass fiber epoxy matrix composites, tend to have higher tensile strengths than compressive strengths. Metals are difficult to test to failure in tension versus compression: in compression, metals fail by buckling, crumbling or 45° shear, which is quite different (though at higher stresses) from tensile failure caused by defects or necking down.
Compressive failure modes
If the ratio of the length to the effective radius of the material loaded in compression (slenderness ratio) is too high, it is likely that the material will fail under buckling. Otherwise, if the material is ductile, yielding usually occurs, displaying the barreling effect discussed above. A brittle material in compression typically will fail by axial splitting, shear fracture, or ductile failure depending on the level of constraint in the direction perpendicular to the direction of loading. If there is no constraint (also called confining pressure), the brittle material is likely to fail by axial splitting. Moderate confining pressure often results in shear fracture, while high confining pressure often leads to ductile failure, even in brittle materials.
Axial splitting relieves elastic energy in a brittle material by releasing strain energy in the directions perpendicular to the applied compressive stress. As defined by a material's Poisson ratio, a material compressed elastically in one direction will strain in the other two directions. During axial splitting a crack may release that tensile strain by forming a new surface parallel to the applied load. The material then proceeds to separate in two or more pieces. Hence axial splitting occurs most often when there is no confining pressure, i.e. a lesser compressive load on the axes perpendicular to the main applied load. The material, now split into micro columns, will feel different frictional forces either due to inhomogeneity of interfaces on the free end or stress shielding. In the case of stress shielding, inhomogeneity in the materials can lead to different Young's moduli. This will in turn cause the stress to be disproportionately distributed, leading to a difference in frictional forces. In either case this will cause the material sections to begin bending and lead to ultimate failure.
Microcracking
Microcracks are a leading cause of failure under compression for brittle and quasi-brittle materials. Sliding along crack tips leads to tensile forces along the tip of the crack. Microcracks tend to form around any pre-existing crack tips. In all cases it is the overall global compressive stress interacting with local microstructural anomalies that creates local areas of tension. Microcracks can stem from a few factors.
Porosity is the controlling factor for compressive strength in many materials. Microcracks can form around pores until they reach approximately the same size as their parent pores (a).
Stiff inclusions within a material such as a precipitate can cause localized areas of tension. (b) When inclusions are grouped up or larger, this effect can be amplified.
Even without pores or stiff inclusions, a material can develop microcracks between weak inclined (relative to applied stress) interfaces. These interfaces can slip and create a secondary crack. These secondary cracks can continue opening, as the slip of the original interfaces keeps opening the secondary crack (c). The slipping of interfaces alone is not solely responsible for secondary crack growth as inhomogeneities in the material's Young's modulus can lead to an increase in effective misfit strain. Cracks that grow this way are known as wingtip microcracks.
The growth of microcracks is not the growth of the original crack or imperfection. The cracks that nucleate do so perpendicular to the original crack and are known as secondary cracks. The figure below emphasizes this point for wingtip cracks.
These secondary cracks can grow to as long as 10-15 times the length of the original cracks in simple (uniaxial) compression. However, if a transverse compressive load is applied, the growth is limited to a few integer multiples of the original crack's length.
Shear bands
If the sample size is large enough such that the worst defect's secondary cracks cannot grow large enough to break the sample, other defects within the sample will begin to grow secondary cracks as well. This will occur homogeneously over the entire sample. These micro-cracks form an echelon that can constitute an “intrinsic” fracture behavior, the nucleus of a shear fault instability.
Eventually this leads the material to deform non-homogeneously; that is, the strain in the material will no longer vary linearly with the load. Localized shear bands are created, on which the material will fail according to deformation theory. "The onset of localized banding does not necessarily constitute final failure of a material element, but it presumably is at least the beginning of the primary failure process under compressive loading."
Typical values
Compressive strength of concrete
For designers, compressive strength is one of the most important engineering properties of concrete. It is standard industrial practice that the compressive strength of a given concrete mix is classified by grade. Cubic or cylindrical samples of concrete are tested under a compression testing machine to measure this value. Test requirements vary by country based on their differing design codes. Use of a Compressometer is common. As per Indian codes, compressive strength of concrete is defined as:
The compressive strength of concrete is given in terms of the characteristic compressive strength of 150 mm size cubes tested after 28 days (fck). In the field, compressive strength tests are also conducted at interim durations, i.e. after 7 days, to verify the anticipated compressive strength expected after 28 days. The same is done to be forewarned of an event of failure and to take necessary precautions. The characteristic strength is defined as the strength of the concrete below which not more than 5% of the test results are expected to fall.
For design purposes, this compressive strength value is restricted by dividing with a factor of safety, whose value depends on the design philosophy used.
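As a rough numerical illustration of the 5% fractile and the design reduction (assuming the cube results are approximately normally distributed and taking a partial safety factor of 1.5, which is typical for concrete design but is only an assumption here):

import statistics

cube_results_mpa = [31.2, 29.8, 33.5, 30.4, 32.1, 28.9, 31.7]  # hypothetical 28-day cube tests

mean = statistics.mean(cube_results_mpa)
std = statistics.stdev(cube_results_mpa)

# Characteristic strength f_ck: value below which no more than 5 % of results are
# expected to fall (5th percentile of a normal distribution = mean - 1.645 * std).
f_ck = mean - 1.645 * std
design_strength = f_ck / 1.5  # assumed partial safety factor

print(f"f_ck ~ {f_ck:.1f} MPa, design strength ~ {design_strength:.1f} MPa")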
The construction industry is often involved in a wide array of testing. In addition to simple compression testing, testing standards such as ASTM C39, ASTM C109, ASTM C469, ASTM C1609 are among the test methods that can be followed to measure the mechanical properties of concrete. When measuring the compressive strength and other material properties of concrete, testing equipment that can be manually controlled or servo-controlled may be selected depending on the procedure followed. Certain test methods specify or limit the loading rate to a certain value or a range, whereas other methods request data based on test procedures run at very low rates.
Ultra-high performance concrete (UHPC) is defined as having a compressive strength over 150 MPa.
See also
Buff strength
Container compression test
Crashworthiness
Deformation (engineering)
Schmidt hammer, for measuring compressive strength of materials
Plane strain compression test
References
Mikell P. Groover, Fundamentals of Modern Manufacturing, John Wiley & Sons, 2002 U.S.A,
Callister W.D. Jr., Materials Science & Engineering an Introduction, John Wiley & Sons, 2003 U.S.A,
Materials science
Product testing | Compressive strength | [
"Physics",
"Materials_science",
"Engineering"
] | 3,312 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
335,114 | https://en.wikipedia.org/wiki/Mansard%20roof | A mansard or mansard roof (also called French roof or curb roof) is a multi-sided gambrel-style hip roof characterised by two slopes on each of its sides, with the lower slope at a steeper angle than the upper, and often punctured by dormer windows. The steep roofline and windows allow for additional floors of habitable space (a garret), and reduce the overall height of the roof for a given number of habitable storeys. The upper slope of the roof may not be visible from street level when viewed from close proximity to the building.
The earliest known example of a mansard roof is credited to Pierre Lescot on part of the Louvre built around 1550. This roof design was popularised in the early 17th century by François Mansart (1598–1666), an accomplished architect of the French Baroque period. It became especially fashionable during the Second French Empire (1852–1870) of Napoléon III. In Europe (France, Germany and elsewhere), mansard also means the attic or garret space itself, not just the roof shape, and the term is often used there to mean a gambrel roof.
Identification
Two distinct traits of the mansard roof – steep sides and a double pitch – sometimes lead to it being confused with other roof types. Since the upper slope of a mansard roof is rarely visible from the ground, a conventional single-plane roof with steep sides may be misidentified as a mansard roof. The gambrel roof style, commonly seen in barns in North America, is a close cousin of the mansard. Both mansard and gambrel roofs fall under the general classification of "curb roofs" (a pitched roof that slopes away from the ridge in two successive planes).
The mansard is a curb hip roof, with slopes on all sides of the building, and the gambrel is a curb gable roof, with slopes on only two sides. (The curb is a horizontal, heavy timber directly under the intersection of the two roof surfaces.) A significant difference between the two, for snow loading and water drainage, is that, when seen from above, gambrel roofs culminate in a long crease at the main ridge beam, whereas mansard roofs form a rectangular shaped crease, outlined by the curb beams, with a low-pitched roof inside this rectangle.
French roof is often used as a synonym for a mansard but is also defined as an American variation of a mansard with the lower pitches nearly vertical and larger in proportion to the upper pitches.
In France and Germany, no distinction is made between gambrels and mansards – they are both called "mansards". In the French language, mansarde can be a term for the style of roof, or for the garret living space, or attic, directly within it.
Advantages
The mansard style makes maximum use of the interior space of the attic and offers a simple way to add one or more storeys to an existing (or new) building without necessarily requiring any masonry. Often the decorative potential of the mansard is exploited through the use of convex or concave curvature and with elaborate dormer window surrounds.
One frequently seen explanation for the popularity of the mansard style is that it served as a method of tax avoidance. One such example of this claim, from the 1914 book How to Make a Country Place, reads, "Monsieur Mansard is said to have circumvented that senseless window tax of France by adapting the windowed roof that bears his name." This is improbable in many respects: Mansart was a profligate spender of his clients' money, and while a French window tax did exist, it was enacted in 1798, 132 years after Mansart's death, and did not exempt mansard windows.
Later examples suggest that either French or American buildings were taxed by their height (or number of storeys) to the base of the roof, or that mansards were used to bypass zoning restrictions. This last explanation is the nearest to the truth: a Parisian law had been in place since 1783, restricting the heights of buildings to 20 metres (65 feet). The height was only measured up to the cornice line, making any living space contained in a mansard roof exempt. A 1902 revision of the law permitted building three or even four storeys within such a roof.
In London in the 1930s, building regulations decreed that "a building (not being a church or a chapel) shall not be erected of, or be subsequently increased to, a greater height than 80 ft., exclusive of two stories in the roof, and of ornamental towers". This was to stop buildings blocking the light, and effectively mandated mansard roofs for tall buildings.
History and use
Early use
The style was popularised in France by architect François Mansart (1598–1666). Although he was not the inventor of the style, his extensive and prominent use of it in his designs gave rise to the term "mansard roof", an adulteration of his name. The design tradition was continued by numerous architects, including Jules Hardouin-Mansart (1646–1708), his great-nephew, who is responsible for Château de Dampierre in Dampierre-en-Yvelines.
Second Empire
The mansard roof became popular once again during Haussmann's renovation of Paris beginning in the 1850s, in an architectural movement known as Second Empire style.
Second Empire influence spread throughout the world, frequently adopted for large civic structures such as government administration buildings and city halls, as well as hotels and railway stations. In the United States and Canada, and especially in New England, the Second Empire influence spread to family residences and mansions, often incorporated with Italianate and Gothic Revival elements. A mansard-topped tower became a popular element incorporated into many designs, such as Main Building (Vassar College), Poughkeepsie, New York, which shows a large mansard-roofed structure with two towers.
20th century
The 1916 Zoning Resolution adopted by New York City promoted the use of mansard roofs; rules requiring the use of setbacks on tall buildings were conducive to the mansard design.
In the 1960s and 1970s, a modernised form of mansard roof, sometimes with deep, narrow windows, became popular for both residential and commercial architecture in many areas of the United States. In many cases, these are not true mansard roofs but flat on top, the sloped façade providing a way to conceal heating, ventilation and air-conditioning equipment from view. The style grew out of interest in postmodern stylistic elements and the "French eclectic" house style popular in the 1930s and 1940s, and in housing also offered a way to provide an upper storey despite height restrictions. Houses with mansard roofs were sometimes described as French Provincial; architect John Elgin Woolf popularised it in the Los Angeles area, calling his houses Hollywood Regency.
Transportation
The roof of two Victorian Railways hopper wagons resembled a mansard roof. The Australian Commonwealth Railways CL class locomotive also has a mansard roof.
See also
List of roof shapes
References
External links
An Illustrated Roof Glossary
What is the Mansard Roof, Advantages and Disadvantages Sheltered, Architect Anton Giuroiu , Ion Mincu University of Architecture and Urbanism, Bucharest, Romania
Roofs
Structural system
Architectural elements | Mansard roof | [
"Technology",
"Engineering"
] | 1,493 | [
"Structural engineering",
"Building engineering",
"Architecture",
"Structural system",
"Architectural elements",
"Roofs",
"Components"
] |
335,217 | https://en.wikipedia.org/wiki/Brooch | A brooch (, ) is a decorative jewellery item designed to be attached to garments, often to fasten them together. It is usually made of metal, often silver or gold or some other material. Brooches are frequently decorated with enamel or with gemstones and may be solely for ornament or serve a practical function as a clothes fastener. The earliest known brooches are from the Bronze Age. As fashions in brooches changed rather quickly, they are important chronological indicators. In archaeology, ancient European brooches are usually referred to by the Latin term fibula. One example is the Tara Brooch.
Ancient brooches
Brooches from antiquity and before the Middle Ages are often called fibula (plural fibulae), especially in Continental contexts. British archaeologists tend to distinguish between bowed fibulae and flatter brooches, even in antiquity. They were necessary as clothes fasteners, but also often highly decorative, and important markers of social status for both men and women, from the Bronze Age onwards. In Europe, during the Iron Age, metalworking technology had advanced dramatically. The newer techniques of casting, metal bar-twisting and wire making were the basis for many new objects, including the fibula. In Europe, Celtic craftsmen were creating fibulae decorated in red enamel and coral inlay, as early as 400 BC.
The earliest manufacture of brooches in Great Britain was during the period from 600 to 150 BC. The most common brooch forms during this period were the bow, the plate and in smaller quantities, the penannular brooch. Iron Age brooches found in Britain are typically cast in one piece, with the majority made in copper alloy or iron. Prior to the late Iron Age, gold and silver were rarely used to make jewellery.
Medieval brooches
Migration period
The distinctive metalwork that was created by the Germanic peoples from the fourth through the eighth centuries belong to the art movement known as Migration period art. During the 5th and 6th centuries, five Germanic tribes migrated to and occupied four different areas of Europe and England after the collapse of the Roman Empire. The tribes were the Visigoths who settled in Spain, the Ostrogoths in Eastern Germany and Austria, the Franks in West Germany, the Lombards in Northern Italy and the Anglo-Saxons in England. Because the tribes were closely linked by their origins, and their jewellery techniques were strikingly similar, the work of these people was first referred to as Barbarian art. This art style is now called Migration period art.
Brooches dating from this period were developed from a combination of Late Roman and new Germanic art forms, designs and technology. Metalworkers throughout western Europe created some of the most colourful, lively and technically superior jewellery ever seen. The brooches of this era display techniques from Roman art: repoussé, filigree, granulation, enamelling, openwork and inlay, but it is inlay that the Migration period artists are famous for. Their passion for colour makes their jewellery stand out. Colour is the primary feature of Migration period jewellery. The precious stone most often used in brooches was the almandine, a burgundy variety of garnet, found in Europe and India.
According to J. Anderson Black, "designers would cover the entire surface of an object with the tiny geometric shapes of precious stones or enamel which were then polished flat until they were flush with the cloisonné settings, giving the appearance of a tiny stained glass window."
Brooch designs were many and varied: geometric decoration, intricate patterns, abstract designs from nature, bird motifs and running scrolls. Zoomorphic ornamentation was a common element during this period, in Anglo-Saxon England as well as in Europe. Intertwined beasts were a signature feature of these lively, intricately decorated brooches. Bow shaped, S-shaped, radiate-headed and decorated disc brooches were the most common brooch styles during the Migration period, which spanned the 5th through the 7th centuries.
Anglo-Saxon
The majority of brooches found in early Anglo-Saxon England were Continental styles that had migrated from Europe and Scandinavia. The long brooch style was most commonly found in 5th- and 6th-century England. Circular brooches first appeared in England in the middle of the 5th century. During the 6th century, craftsmen from Kent began manufacturing brooches using their own distinctive styles and techniques. The circular form was the preferred brooch type by the end of the 6th century. During the 7th century, all brooches in England were in decline. They reappeared in the 8th century and continued to be fashionable through the end of the Anglo-Saxon era.
Brooch styles were predominantly circular by the middle to late Anglo-Saxon era. During this time period, the preferred styles were the annular and jewelled (Kentish) disc brooch styles. The circular forms can be divided generally into enamelled and non-enamelled styles. A few non-circular styles were fashionable during the 8th to 11th centuries. The ansate, the safety-pin, the strip and a few other styles can be included in this group. Ansate brooches were traditional brooches from Europe that migrated to England and became fashionable in the late Anglo-Saxon period. Safety-pin brooches, more abundant in the early Anglo-Saxon period, became more uncommon by the 7th century and, by the 8th century, evolved into the strip brooch. Miscellaneous brooches during this time period include the bird, the Ottonian, the rectangle and the cross motif.
Celtic
Celtic brooches represent a distinct tradition of elaborately decorated penannular and pseudo-penannular brooch types developed in Early Medieval Ireland and Scotland. Techniques, styles and materials used by the Celts were different from Anglo-Saxon craftsmen. Certain attributes of Celtic jewellery, such as inlaid millefiori glass and curvilinear styles have more in common with ancient brooches than contemporary Anglo-Saxon jewellery. The jewellery of Celtic artisans is renowned for its inventiveness, complexity of design and craftsmanship. The Tara Brooch is a well-known example of a Celtic brooch.
Scandinavian
Germanic Animal Style decoration was the foundation of Scandinavian art that was produced during the Middle Ages. The lively decorative style originated in Denmark in the late fifth century as an insular response to Late Roman style metalwork. During the early medieval period, Scandinavian craftsmen created intricately carved brooches with their signature animal style ornamentation. The brooches were generally made of copper alloy or silver.
Beginning in the eighth century and lasting until the eleventh century, Scandinavian seafarers were exploring, raiding and colonising Europe, Great Britain and new lands to the west. This era of Scandinavian expansion is known as the Viking Age, and the art created during this time period is known as Viking art. Metalwork, including brooches, produced during this period were decorated in one or more of the Viking art styles. These five sequential styles are: Oseberg, Borre, Jellinge, Mammen, Ringerike and Urnes.
A variety of Scandinavian brooch forms were common during this period: circular, bird-shaped, oval, equal-armed, trefoil, lozenge-shaped, and domed disc. The most common Scandinavian art styles of the period are the Jellinge and Borre art styles. Some of the characteristics of these related art styles are: interlaced gripping beasts, single animal motifs, ribbon-shaped animals, knot and ring-chain patterns, tendrils, and leaf, beast and bird motifs.
Late medieval
Brooches found during the late medieval era, (1300 to 1500 AD), were worn by both men and women. Brooch shapes were generally: star-shaped, pentagonal, lobed, wheel, heart-shaped, and ring. Rings were smaller than other brooches, and often used to fasten clothing at the neck. Brooch decoration usually consisted of a simple inscription or gems applied to a gold or silver base. Inscriptions of love, friendship and faith were a typical feature of ring brooches of this period. The heart-shaped brooch was a very popular gift between lovers or friends.
Amulet brooches were very common prior to medieval times. In late antiquity, they were embellished with symbols of pagan deities or gems that held special powers to protect the wearer from harm. These pagan inspired brooches continued to be worn after the spread of Christianity. Pagan and Christian symbols were often combined to decorate brooches during the Middle Ages. Beginning in the fourteenth century, three-dimensional brooches appeared for the first time. The Dunstable Swan Brooch is a well-known example of a three-dimensional brooch.
Early modern brooches
The early modern period of jewellery extended from 1500 to 1800. Global exploration and colonisation brought new prosperity to Europe and Great Britain along with new sources of diamonds, gems, pearls, and precious metals. The rapid changes in clothing fashion during this era generated similar changes in jewellery styles. The demand for new jewellery resulted in the deconstruction and melting down of many old jewellery pieces to create new jewellery. Because of this, there are very few surviving jewellery pieces from this era. The primary jewellery styles during this time period are: Renaissance, Georgian and Neoclassical.
Renaissance
The Renaissance period in jewellery (1300–1600) was a time of wealth and opulence. Elaborate brooches covered in gemstones or pearls were in fashion, especially with the upper classes. Gemstones commonly used for brooches were emeralds, diamonds, rubies, amethyst and topaz. Brooches with religious motifs and enamelled miniature portraits were popular during this time period. Gems were often selected for their protective properties as well as their vibrant colours. During the fifteenth century, new cutting techniques inspired new gemstone shapes.
Georgian
The Georgian jewellery era (1710–1830) was named after the four King Georges of England. In the early 1700s, ornate brooches with complex designs were fashionable. By the mid- to late 1700s, simpler forms and designs were more common, with simpler themes of nature, bows, miniature portraits and animals. Georgian jewellery was typically handmade in gold or silver. Diamonds and pearls continued to be fashionable during this period.
Neoclassical
The Neoclassical era (1760–1830) in jewellery design was inspired by classical themes of ancient Greece and Rome. The main difference between Renaissance jewellery and neoclassical jewellery was that Renaissance jewellery was created primarily for the upper class and neoclassical jewellery was made for the general public. An important innovation in jewellery making during this era was the technique of producing cameos with hard pastes called black basalt and jasper. English pottery manufacturer Josiah Wedgwood is responsible for this important contribution to jewellery making. Cameos and brooches with classical scenes were fashionable during this period. Pearls and gemstones continued to be used in brooches, but were less popular than before. The beginning of the French Revolution halted the manufacture and demand for opulent jewellery.
Late modern brooches
The late modern era of jewellery covers the period from 1830 to 1945. The major jewellery styles of this period are: Victorian (1835–1900), Art Nouveau (1895–1914), Edwardian (1901–1910) and Art deco (1920–1939).
Victorian
This period was named for Queen Victoria of the United Kingdom, who reigned from 1837 to 1901. Cameos, locket brooches, flowers, nature, animal and hearts were popular jewellery styles in the early Victorian era. When Victoria's husband, Prince Albert, died in 1861, jewellery fashion changed to reflect the queen in mourning. Styles turned heavier and more sombre, using materials like black enamel, jet, and black onyx. Mourning brooches were commonly worn until the end of the Victorian period.
It was fashionable during this period to incorporate hair and portraiture into a brooch. The practice began as an expression of mourning, then expanded to keepsakes of loved ones who were living. Human hair was encased within the brooch or braided and woven into a band to which clasps were affixed.
Art Nouveau
The Art Nouveau period of jewellery spanned a short period from 1895 to 1905. The style began in France as a reaction to the heavy, sombre jewellery of the Victorian era. Innovative, flowing designs were now in fashion along with nature, flowers, insects and sensuous women with flowing hair. The jewellery style was fashionable for fifteen years, and ended with the beginning of World War I.
Edwardian
The Edwardian era of jewellery (1901–1910) began after the death of Queen Victoria. This period marked the first time platinum was used in jewellery. Because of platinum's strength, new jewellery pieces were created with delicate filigree to look like lace and silk. The main gemstones used in brooches were diamonds, typically with platinum or white gold, and coloured gemstones or pearls. Platinum and diamond brooches were a common brooch style. Small brooches continued to be fashionable. Popular brooch forms were bows, ribbons, swags, and garlands, all in the delicate new style.
Art Deco
The Art Deco period lasted from 1920 to 1939. Cubism and Fauvism, early 20th century art movements, were inspirations for this new art style, along with Eastern, African and Latin American art. Art Deco was named after the International Exhibition of Modern Decorative and Industrial Arts, a decorative and industrial arts exhibition held in Paris in 1925. Common brooch decoration of this period are: geometric shapes, abstract designs, designs from Cubism, Fauvism, and art motifs from Egypt and India. Black onyx, coral, quartz, lapis and carnelian were used with classic stones such as diamonds, rubies, emeralds, and sapphires.
See also
Medieval art
Anglo-Saxon art
Migration period art
Jewellery
Lapel pin
Badge
Pin-back button
Notes
References
Hellenic Ministry of Culture: Katie Demakopoulou, "Bronze Age Jewellery in Greece"
External links
Fasteners
Types of jewellery | Brooch | [
"Engineering"
] | 2,886 | [
"Construction",
"Fasteners"
] |
335,262 | https://en.wikipedia.org/wiki/Nelly%20Sachs | Nelly Sachs (; 10 December 1891 – 12 May 1970) was a German–Swedish poet and playwright. Her experiences resulting from the rise of the Nazis in World War II Europe transformed her into a poignant spokesperson for the grief and yearnings of her fellow Jews. Her best-known play is Eli: Ein Mysterienspiel vom Leiden Israels (1950); other works include poems published in 1962 and 1970, and the collections of poetry In den Wohnungen des Todes (1947), Flucht und Verwandlung (1959), Fahrt ins Staublose (1961), and Suche nach Lebenden (1971). She was awarded the 1966 Nobel Prize in Literature.
Life and career
Leonie Sachs was born in Berlin-Schöneberg, Germany, in 1891 to a Jewish family. Her parents were the wealthy natural rubber and gutta-percha manufacturers Georg William Sachs (1858–1930) and his wife Margarete, née Karger (1871–1950). She was educated at home because of frail health. She showed early signs of talent as a dancer, but her protective parents did not encourage her to pursue a profession. She grew up as a very sheltered, introverted young woman and never married. She pursued an extensive correspondence with her friends Selma Lagerlöf and Hilde Domin. As the Nazis took power, she became increasingly terrified, at one point losing the ability to speak, as she would remember in verse: "When the great terror came/I fell dumb." Sachs fled with her aged mother to Sweden in 1940. It was her friendship with Lagerlöf that saved their lives: shortly before her own death, Lagerlöf intervened with the Swedish royal family to secure their release from Germany. Sachs and her mother escaped on the last flight from Nazi Germany to Sweden, a week before Sachs was scheduled to report to a concentration camp. They settled in Sweden, and Sachs became a Swedish citizen in 1952.
Living in a tiny two-room apartment in Stockholm, Sachs cared for her mother alone for many years, and supported their existence by translations between Swedish and German. After her mother's death, Sachs suffered several psychotic breakdowns, characterized by hallucinations, paranoia, and delusions of persecution by Nazis, and spent a number of years in a mental institution. She continued to write while hospitalized, and eventually recovered sufficiently to live on her own, though her mental health remained fragile. Her worst breakdown was ostensibly precipitated by hearing spoken German during a trip to Switzerland to accept a literary prize. But she maintained a forgiving attitude toward younger Germans, and corresponded with many German-speaking writers of the postwar period, including Hans Magnus Enzensberger and Ingeborg Bachmann.
Paul Celan and lyrical poetry
In the context of the Shoah, her deep friendship with "brother" poet Paul Celan is often noted today. Their bond was described in one of Celan's most famous poems, "Zürich, Zum Storchen" ("Zürich, The Stork Inn"). Sachs and Celan shared the Holocaust and the fate of the Jews throughout history, their interest in Jewish and Christian beliefs and practices, and their literary models; their imagery was often remarkably similar, though developed independently. Their friendship was supportive during professional conflicts. Celan also suffered from artistic infighting (Claire Goll's accusations of plagiarism) during a period of frustration with his work's reception. When Sachs met Celan she was embroiled in a long dispute with Finnish-Jewish composer Moses Pergament over his adaptation of her play Eli. In Celan she found someone who understood her anxiety and hardships as an artist.
Sachs's poetry is intensely lyrical and reflects some influence by German Romanticism, especially in her early work. The poetry she wrote as a young woman in Berlin is more inspired by Christianity than Judaism and makes use of traditional Romantic imagery and themes. Much of it concerns an unhappy love affair Sachs suffered in her teens with a non-Jewish man who would eventually be killed in a concentration camp. After Sachs learned of her only love interest's death, she bound up his fate with that of her people and wrote many love lyrics ending not only in the beloved's death, but in the catastrophe of the Holocaust. Sachs herself mourns no longer as a jilted lover but as a personification of the Jewish people in their vexed relationship with history and God. Her fusion of grief with subtly romantic elements is in keeping with the imagery of the kabbalah, where the Shekhinah represents God's presence on earth and mourns for the separation of God from His people in their suffering. Thus Sachs's Romanticism allowed her to develop self-consciously from a German to a Jewish writer, with a corresponding change in her language: still flowery and conventional in some of her first poetry on the Holocaust, it becomes ever more compressed and surreal, returning to a series of the same images and tropes (dust, stars, breath, stones and jewels, blood, dancers, fish suffering out of water, madness, and ever-frustrated love) in ways that are sometimes comprehensible only to her readers, but always moving and disturbing. Though Sachs does not resemble many authors, she appears to have been influenced by Gertrud Kolmar and Else Lasker-Schüler, in addition to Celan.
In 1961 Sachs won the first Nelly Sachs Prize, a literary award given biennially by the German city of Dortmund and named in her honour. The city commissioned Walter Steffens to compose the opera Eli based on her mystery play, which premiered at the new opera house in 1967. When, with Shmuel Yosef Agnon, she was awarded the 1966 Nobel Prize in Literature, she observed that Agnon represented Israel whereas "I represent the tragedy of the Jewish people." She read her poem "In der Flucht" at the ceremony.
Sachs died from colorectal cancer in 1970. She was interred in the Norra begravningsplatsen in Stockholm. Her possessions were donated to the National Library of Sweden.
A memorial plaque commemorates her birthplace, Maaßenstraße 12, in Schöneberg, Berlin, where there is also a park named for her in Dennewitzstraße. A park on the island of Kungsholmen in Stockholm also bears her name.
Partial bibliography
Poetry
In den Wohnungen des Todes [In the Houses of Death], 1947.
Sternverdunkelung [Eclipse of Stars], 1949.
Und niemand weiss weiter [And No One Knows Where to Go], 1957.
Flucht und Verwandlung [Flight and Metamorphosis], 1959.
Fahrt ins Staublose: Die Gedichte der Nelly Sachs 1 [Journey into the Dustless Realm: The Poetry of Nelly Sachs, 1], 1961.
Zeichen im Sand [Signs in the Sand], 1962
Suche nach Lebenden: Die Gedichte der Nelly Sachs 2 [Search for the Living: The Poetry of Nelly Sachs, 2], 1971.
Stories
Legenden und Erzählungen [Legends and Tales], 1921.
Drama
Eli: Ein Mysterienspiel vom Leiden Israels [Eli: A Mystery Play of the Suffering of Israel], 1950
Letters
Briefe der Nelly Sachs [Letters of Nelly Sachs] ed. Ruth Dinesen and Helmut Müssener, 1984.
Paul Celan, Nelly Sachs: Correspondence, tr. Christopher Clark, ed. Barbara Wiedemann, 1995.
Translations
O the Chimneys: Selected Poems, Including the Verse Play, Eli, tr. Michael Hamburger et al., 1967.
The Seeker and Other Poems. tr. Ruth Mead, Matthew Mead, and Michael Hamburger, 1970.
Contemporary German Poetry, selections, ed. and tr. Gertrude C. Schwebell, 1964.
Collected Poems I, 1944–1949, 2007.
Glowing Enigmas, tr. Michael Hamburger, 2013.
Flight and Metamorphosis, tr. Joshua Weiner with Linda B. Parshall, 2022.
Sachs is published by Suhrkamp Verlag.
See also
List of female Nobel laureates
List of Jewish Nobel laureates
Manfred George, Nelly Sachs's cousin
Notes
References
www.nobel-winners.com – Nelly Sachs. This article includes some text from that page, in its version as of 13 December 2006, which is licensed under the GNU Free Documentation License
Further reading
In English
Bower, Kathrin M. Ethics and remembrance in the poetry of Nelly Sachs and Rose Ausländer. Camden House, 2000.
Barbara Wiedemann (ed.) Paul Celan, Nelly Sachs: Correspondence, trans. Christopher Clark. Sheep Meadow, 1998.
Olsson, Anders.
In German
Walter A. Berendsohn: Nelly Sachs: Einführung in das Werk der Dichterin jüdischen Schicksals. Agora, Darmstadt 1974, .
Gudrun Dähnert: "" in: Sinn und Form February 2009, pp. 226–257
Ruth Dinesen: Nelly Sachs. Eine Biographie. Suhrkamp, Frankfurt 1992,
Gabriele Fritsch-Vivié: Nelly Sachs. Monographie. Rowohlt, Reinbek, 3rd edition, 2001, .
: "". In: Charlotte Kerner: Nicht nur Madame Curie. Frauen, die den Nobelpreis bekamen. Beltz, Weinheim 1999, .
Gerald Sommerer: Aber dies ist nichts für Deutschland, das weiß und fühle ich. Nelly Sachs – Untersuchungen zu ihrem szenischen Werk. Königshausen & Neumann, Würzburg 2008, .
External links
Guide to the Papers of Nelly Sachs
Red Yucca – German Poetry in Translation (trans. by Eric Plattner)
Map showing location of Maaßenstraße and Nelly-Sachs-Park in Berlin-Schöneberg.
A selection of works by Sachs from the Sophie database
1891 births
1970 deaths
Nobel laureates in Literature
Women Nobel laureates
German Nobel laureates
Swedish Nobel laureates
Jewish Nobel laureates
Writers from Berlin
Jewish emigrants from Nazi Germany to Sweden
German women poets
Swedish women poets
Jewish poets
Jewish writers
Writers from the Province of Brandenburg
Deaths from colorectal cancer in Sweden
20th-century German poets
20th-century German women writers
Burials at Norra begravningsplatsen
Jewish women writers | Nelly Sachs | [
"Technology"
] | 2,099 | [
"Women Nobel laureates",
"Women in science and technology"
] |
335,331 | https://en.wikipedia.org/wiki/Ern%C5%91%20Rubik | Ernő Rubik (; born 13 July 1944) is a Hungarian inventor, widely known for creating the Rubik's Cube (1974), Rubik's Magic, and Rubik's Snake.
While Rubik became famous for inventing the Rubik's Cube and his other puzzles, much of his recent work involves the promotion of science in education. Rubik is involved with several organizations such as Beyond Rubik's Cube, the Rubik Learning Initiative and the Judit Polgar Foundation, all of which aim to engage students in science, mathematics, and problem solving at a young age.
Rubik studied sculpture at the Academy of Applied Arts and Design in Budapest and architecture at the Technical University, also in Budapest. While a professor of design at the academy, he pursued his hobby of building geometric models. One of these was a prototype of his cube, made of 27 wooden blocks; it took Rubik a month to solve the problem of the cube. It proved a useful tool for teaching algebraic group theory, and in late 1977 Konsumex, Hungary's state trading company, began marketing it. By 1980, Rubik's Cube was marketed throughout the world, and over 100 million authorized units, with an estimated 50 million unauthorized imitations, were sold, mostly during its subsequent three years of popularity. Approximately 50 books were published describing how to solve the puzzle of Rubik's Cube. Following his cube's popularity, Rubik opened a studio to develop designs in 1984; among its products was another popular puzzle toy, Rubik's Magic.
Early life and education
Ernő Rubik was born in Budapest, Hungary, on 13 July 1944, during World War II, and has lived all of his life in Hungary. His father, who was also named Ernő Rubik, was a flight engineer at the Esztergom aircraft factory, and his mother, Magdolna Szántó, was a poet. He has stated in almost every interview that he got his inspiration from his father.
His father, Ernő, was a highly respected engineer of gliders. His extensive work and expertise in this area gained him an international reputation as an expert in his field. Ernő Rubik has stated that:
From 1958 to 1962, Rubik specialized in sculpture at the Secondary School of Fine and Applied Arts. From 1962 to 1967, Rubik attended the Budapest University of Technology where he became a member of the Architecture Faculty. From 1967 to 1971, Rubik attended the Hungarian Academy of Applied Arts and Design and was in the Faculty of Interior Architecture and Design.
Rubik considers university and the education it afforded him as the decisive event which shaped his life. Rubik stated, "Schools offered me the opportunity to acquire knowledge of subjects or rather crafts that need a lot of practice, persistence, and diligence with the direction of a mentor."
Career
Professorship and origin of the Rubik's Cube
From 1971 to 1979, Rubik was a professor of architecture at the Budapest College of Applied Arts (Iparművészeti Főiskola). It was during his time there that he built designs for a three-dimensional puzzle and completed the first working prototype of the Rubik's Cube in 1974, applying for a patent on the puzzle in 1975. In an interview with CNN, Rubik stated that he was "searching to find a good task for my students."
Starting with blocks of wood and rubber bands, Rubik set out to create a structure that would allow the individual pieces to move without the whole structure falling apart. Rubik originally used wood for the block because of the convenience of a workshop at the university and because he viewed wood as a simple material to work with that did not require sophisticated machinery. Rubik made the original prototypes of his cube by hand, cutting the wood, boring the holes and using elastic bands to hold the contraption together.
Rubik showed his prototype to his class and his students liked it very much. Rubik realized that, because of the cube's simple structure, it could be manufactured relatively easily and might have appeal to a larger audience. Rubik's father possessed several patents, so Rubik was familiar with the process and applied for a patent for his invention. Rubik then set out to find a manufacturer in Hungary, but had great difficulty due to the rigid planned economy of communist Hungary at the time. Eventually, Rubik was able to find a small company that worked with plastic and made chess pieces. The cube was originally known in Hungary as the Magic Cube.
Rubik licensed the Magic Cube to Ideal Toys, a US company, in 1979. Ideal rebranded the Magic Cube as the Rubik's Cube before its introduction to an international audience in 1980. The process from early prototype to mass production of the Cube had taken over six years. The Rubik's Cube became an instant success worldwide, won several Toy of the Year awards, and became a staple of 1980s popular culture. To date, over 350 million Rubik's Cubes have been sold, making it one of the best-selling toys of all time. The puzzle has since been produced in many sizes, from 2×2×2 up to 21×21×21.
Other inventions
In addition to Rubik's Cube, Rubik is also the inventor of Rubik's Magic, Rubik's Snake and Rubik's 360 among others.
Later career and other works
In the early 1980s, he became the editor of a game and puzzle journal called ..És játék (...And games), then became self-employed in 1983, founding the Rubik Stúdió, where he designed furniture and games. In 1987, he became a professor with full tenure; in 1990 he became the president of the Hungarian Engineering Academy (Magyar Mérnöki Akadémia). At the academy, he created the International Rubik Foundation to support especially talented young engineers and industrial designers.
He attended the 2007 World Speedcubing Championship in Budapest. He also gave a lecture and autograph session at the "Bridges-Pecs" conference ("Bridges between Mathematics and the Arts") in July 2010.
In 2009, he was appointed as an honorary professor of Keimyung University, Daegu, South Korea.
In the 2010s, Rubik spent much of his time working on Beyond Rubik's Cube, a science, technology, engineering, and mathematics (STEM)-based exhibition intended to travel the globe over six years. The grand opening of the exhibit was held on 26 April 2014 at the Liberty Science Center in New Jersey. At the exhibition, Rubik gave several lectures and tours and engaged with the public and several members of the speedcubing crowd in attendance, including Anthony Michael Brooks, a world-class speedcuber.
Rubik is a member of the USA Science and Engineering Festival's advisory board.
Influences
Ernő Rubik has listed several individuals who, as he has said, "exerted a great influence over me through their work." These include Leonardo da Vinci, whom Rubik regards as the Renaissance man; Michelangelo, whom he respects as a polymath, painter, and sculptor; and artist M. C. Escher, who drew impossible constructions and grappled with explorations of infinity. Among philosophers and writers, Rubik admires Voltaire, Stendhal, Thomas Mann, Jean-Paul Sartre, Hungarian poet Attila József, Jules Verne, and Isaac Asimov. In the field of architecture, Rubik is an admirer of Frank Lloyd Wright and Le Corbusier.
Personal life
Rubik describes himself as a lifelong bibliophile, saying "books offered me the possibility of gaining knowledge of the world, nature and people." He has a special interest in science fiction. He is also fond of nature walks, sports, sailing on Lake Balaton, and gardening, saying "collecting succulents is my favourite pastime."
Prizes and awards
1978 – Budapest International Trade Fair, Prize for the Cube
1980 – Toy of the Year: Federal Republic of Germany, United Kingdom, France, USA
1981 – Toy of the Year: Finland, Sweden, Italy
1982 – Toy of the Year: United Kingdom (second time)
1982 – The Museum of Modern Art, New York selected Rubik's Cube into its permanent collection
1983 – Hungarian State Prize for demonstrating and teaching 3D structures and for the various solutions that inspired scientific research in several ways
1988 – Juvenile Prize from the State Office of Youth and Sport
1995 – Dénes Gabor Prize from the Novofer Foundation as an acknowledgement of achievements in the field of innovation
1996 – Ányos Jedlik Prize from the Hungarian Patent Office
1997 – Prize for the Reputation of Hungary
2007 – Kossuth Prize, the most prestigious cultural award in Hungary
2008 – Moholy-Nagy Prize – from the Moholy-Nagy University of Arts and Design
2009 – EU Ambassador of the Year of Creativity and Innovation
2010 – USA Science and Engineering Festival Award (Outstanding Contribution to Science Education)
2014 – Hungarian Order of Saint Stephen (The highest Hungarian state honour)
2014 – Honorary Citizen of Budapest
Publications
Co-author of The Rubik's Cube Compendium (written by David Singmaster, Ernő Rubik, Gerzson Kéri, György Marx, Tamás Varga and Tamás Vekerdy), Oxford University Press, 1987.
Author of Cubed – The Puzzle of Us All, Flatiron Books/Orion Publishing Group/Hachette UK/Libri, 2020.
References
External links
An interview with Ernő Rubik
His biography at Hungary.hu
His first print interview in ten years (archived 1 February 2009)
An exclusive video interview about the new Rubik's 360
1944 births
Living people
Academic staff of the Moholy-Nagy University of Art and Design
Hungarian architects
20th-century Hungarian inventors
Recreational mathematicians
Puzzle designers
Rubik's Cube
Toy inventors | Ernő Rubik | [
"Mathematics"
] | 2,025 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
335,354 | https://en.wikipedia.org/wiki/Ingot | An ingot is a piece of relatively pure material, usually metal, that is cast into a shape suitable for further processing. In steelmaking, it is the first step among semi-finished casting products. Ingots usually require a second procedure of shaping, such as cold/hot working, cutting, or milling to produce a useful final product. Non-metallic and semiconductor materials prepared in bulk form may also be referred to as ingots, particularly when cast by mold based methods. Precious metal ingots can be used as currency (with or without being processed into other shapes), or as a currency reserve, as with gold bars.
Types
Ingots are generally made of metal, either pure or alloy, heated past its melting point and cast into a bar or block using a mold chill method.
A special case is polycrystalline or single crystal ingots made by pulling from a melt.
Single crystal
Single crystal ingots (called boules) of materials are grown (crystal growth) using methods such as the Czochralski process or Bridgman technique.
The boules may be either semiconductor (e.g. electronic chip wafers, photovoltaic cells) or non-conducting inorganic compounds for industrial and jewelry use (e.g., synthetic ruby, sapphire).
Single crystal ingots of metal are produced in similar fashion to that used to produce high purity semiconductor ingots, i.e. by vacuum induction refining. Single crystal ingots of engineering metals are of interest because of their very high strength, which results from the absence of grain boundaries. The method of production is via single crystal dendrite and not via simple casting. Possible uses include turbine blades.
Copper alloys
In the United States, the brass and bronze ingot making industry started in the early 19th century. The US brass industry grew to be the number one producer by the 1850s. During colonial times the brass and bronze industries were almost non-existent because the British demanded all copper ore be sent to Britain for processing. Copper based alloy ingots weighed approximately .
Manufacture
Ingots are manufactured by the cooling of a molten liquid (known as the melt) in a mold. The manufacture of ingots has several aims.
Firstly, the mold is designed to completely solidify and form an appropriate grain structure required for later processing, as the structure formed by the cooling of the melt controls the physical properties of the material.
Secondly, the shape and size of the mold is designed to allow for ease of ingot handling and downstream processing. Finally, the mold is designed to minimize melt wastage and aid ejection of the ingot, as losing either melt or ingot increases manufacturing costs of finished products.
A variety of designs exist for the mold, which may be selected to suit the physical properties of the liquid melt and the solidification process. Molds may be designed for top, horizontal, or bottom-up pouring and may be fluted or flat-walled. The fluted design increases heat transfer owing to a larger contact area. Molds may be of solid "massive" design, sand cast (e.g. for pig iron), or water-cooled shells, depending upon heat transfer requirements. Ingot molds are tapered to prevent the formation of cracks due to uneven cooling. Crack or void formation occurs because the liquid-to-solid transition has an associated volume change for a constant mass of material. The formation of these ingot defects may render the cast ingot useless, requiring it to be re-melted, recycled, or discarded.
The physical structure of a crystalline material is largely determined by the method of cooling and precipitation of the molten metal. During the pouring process, metal in contact with the ingot walls rapidly cools and forms either a columnar structure or possibly a "chill zone" of equiaxed dendrites, depending upon the liquid being cooled and the cooling rate of the mold.
For a top-poured ingot, as the liquid cools within the mold, differential volume effects cause the top of the liquid to recede leaving a curved surface at the mold top which may eventually be required to be machined from the ingot. The mold cooling effect creates an advancing solidification front, which has several associated zones, closer to the wall there is a solid zone that draws heat from the solidifying melt, for alloys there may exist a "mushy" zone, which is the result of solid-liquid equilibrium regions in the alloy's phase diagram, and a liquid region. The rate of front advancement controls the time that dendrites or nuclei have to form in the solidification region. The width of the mushy zone in an alloy may be controlled by tuning the heat transfer properties of the mold or adjusting the liquid melt alloy compositions.
Continuous casting methods for ingot processing also exist, whereby a stationary front of solidification is formed by the continual take-off of cooled solid material, and the addition of a molten liquid to the casting process.
Approximately 70 percent of aluminium ingots in the U.S. are cast using the direct chill casting process, which reduces cracking. A total of 5 percent of ingots must be scrapped because of stress induced cracks and butt deformation.
Historical ingots
Plano-convex ingots are widely distributed archaeological artifacts which are studied to provide information on the history of metallurgy.
See also
Bullion
Gold bar
Oxhide ingot
Sycee, traditional Chinese ingots
Tin ingot
Bar stock
Wafer etching
References
Further reading
External links
Casting (manufacturing)
Metallic objects | Ingot | [
"Physics"
] | 1,117 | [
"Metallic objects",
"Physical objects",
"Matter"
] |
335,391 | https://en.wikipedia.org/wiki/Kikuchi%20disease | Kikuchi disease was described in 1972 in Japan. It is also known as histiocytic necrotizing lymphadenitis, Kikuchi necrotizing lymphadenitis, phagocytic necrotizing lymphadenitis, subacute necrotizing lymphadenitis, and necrotizing lymphadenitis. Kikuchi disease occurs sporadically in people with no family history of the condition.
It was first described by Dr Masahiro Kikuchi (1935–2012) in 1972 and independently by Y. Fujimoto.
Signs and symptoms
The signs and symptoms of Kikuchi disease are fever, enlargement of the lymph nodes (lymphadenopathy), skin rashes, and headache. In sixty to ninety percent of cases, lymphadenopathy presents in the posterior cervical lymph nodes, with enlarged nodes typically one to two centimeters in diameter, although up to seven centimeters has been reported in the literature. Occasionally, the supraclavicular and axillary lymph nodes become swollen as well. Rarely, enlargement of the liver and spleen and nervous system involvement resembling meningitis are seen. A bout of extreme fatigue can also occur, often taking hold later in the day, and the affected person may be more prone to fatigue from exercise.
Pathophysiology
Some studies have suggested a genetic predisposition to the proposed autoimmune response. Several infectious candidates have been associated with Kikuchi disease.
Many theories exist about the cause of KFD. Microbial/viral or autoimmune causes have been suggested. Mycobacterium szulgai and Yersinia and Toxoplasma species have been implicated. More recently, growing evidence suggests a role for Epstein-Barr virus, as well as other viruses (HHV6, HHV8, parvovirus B19, HIV and HTLV-1) in the pathogenesis of KFD. However, many independent studies have failed to identify the presence of these infectious agents in cases of Kikuchi lymphadenopathy. In addition, serologic tests including antibodies to a host of viruses have consistently proven noncontributory and no viral particles have been identified ultrastructurally.
KFD is now proposed to be a nonspecific hyperimmune reaction to a variety of infectious, chemical, physical, and neoplastic agents. Other autoimmune conditions and manifestations such as antiphospholipid syndrome, polymyositis, systemic juvenile idiopathic arthritis, bilateral uveitis, arthritis and cutaneous necrotizing vasculitis have been linked to KFD. KFD may represent an exuberant T-cell-mediated immune response in a genetically susceptible individual to a variety of nonspecific stimuli.
Certain Human leukocyte antigen class II genes appear more frequently in patients with Kikuchi disease, suggesting that there may be a genetic predisposition to the proposed autoimmune response.
Diagnosis
It is diagnosed by lymph node excision biopsy. Kikuchi disease is a self-limiting illness which has symptoms which may overlap with Hodgkin's lymphoma leading to misdiagnosis in some patients. Antinuclear antibodies, antiphospholipid antibodies, anti-dsDNA, and rheumatoid factor are usually negative, and may help in differentiation from systemic lupus erythematosus.
Differential diagnosis
The differential diagnosis of Kikuchi disease includes systemic lupus erythematosus (SLE), disseminated tuberculosis, lymphoma, sarcoidosis, and viral lymphadenitis. Clinical findings sometimes may include positive results for IgM/IgG/IgA antibodies. For other causes of lymph node enlargement, see lymphadenopathy.
Management
No specific cure is known. Treatment is largely supportive. Nonsteroidal anti-inflammatory drugs (NSAIDs) are indicated for tender lymph nodes and fever, and corticosteroids are useful in severe extranodal or generalized disease.
Symptomatic measures aimed at relieving the distressing local and systemic complaints have been described as the main line of management of KFD. Analgesics, antipyretics, NSAIDs, and corticosteroids have been used. If the clinical course is more severe, with multiple flares of bulky enlarged cervical lymph nodes and fever, then a low-dose corticosteroid treatment has been suggested.
Epidemiology
Kikuchi-Fujimoto disease (KFD) is a rare, self-limiting disorder that typically affects the cervical lymph nodes. Recognition of this condition is crucial, especially because it can easily be mistaken for tuberculosis, lymphoma, or even adenocarcinoma. Awareness of this disorder helps prevent misdiagnosis and inappropriate treatment.
Kikuchi's disease is a very rare disease mainly seen in Japan. Isolated cases are reported in North America, Europe, Asia, England, and at least two cases in New Zealand. It is possible that the prevalence of KFD is greater than is reported given lymphadenopathy can be overlooked and the disease's self-limiting nature. That a definite identification of KFD can only be done via a biopsy of affected tissues further suggests that cases go unrecognized or undiagnosed. It is mainly a disease of young adults (20–30 years), with a slight bias towards females. The cause of this disease is not known, although infectious and autoimmune causes have been proposed. The course of the disease is generally benign and self-limiting. Lymph node enlargement usually resolves over several weeks to six months. The recurrence rate is about 3%. Death from Kikuchi disease is extremely rare and usually occurs due to liver, respiratory, or heart failure.
See also
Cutaneous lymphoid hyperplasia
References
External links
Histopathology
Lymphatic organ diseases
Lymphoid-related cutaneous conditions
Rare diseases | Kikuchi disease | [
"Chemistry"
] | 1,295 | [
"Histopathology",
"Microscopy"
] |
335,487 | https://en.wikipedia.org/wiki/Hydrographic%20office | A hydrographic office is an organization which is devoted to acquiring and publishing hydrographic information.
Historically, the main tasks of hydrographic offices were the conduction of hydrographic surveys and the publication of nautical charts. In many countries, various navigation-related services are now concentrated in large governmental organizations, sometimes termed "maritime administration" (however, the International Hydrographic Organization uses the term "hydrographic offices" for its member organizations).
Besides nautical charts, many hydrographic offices publish a body of books and periodicals that are collectively known as nautical publications. The most important of these are:
Sailing Directions (or pilots): detailed descriptions of areas of the sea, shipping routes, harbours, aids to navigation, regulations etc.
lists of lights: descriptions of lighthouses and light buoys
tide tables and tidal stream atlases
ephemerides and nautical almanacs for celestial navigation
Notice to Mariners: periodical (often weekly) updates and corrections for nautical charts and publications
Hydrographic organizations may also be involved in services such as:
pilotage
search and rescue
maintenance of lighthouses and other aids to navigation
ice breaking
weather observation and information
sea traffic information and surveillance
maritime research
regulatory affairs of ship safety
History
In the development of hydrographic services, shipping organizations played a part, but the major players were the naval powers. Recognizing that hydrographic information was a military advantage, these naval organizations, usually under the direction of a "Hydrographer," utilized the expertise of naval officers in collecting hydrographic data that was incorporated into the navy's collection. In order to distribute the processed information (charts, directions, notices, and such) these organizations often developed specialized printing capabilities.
Hydrographic organisations of some countries
Australia
Hydrographic tasks in Australian waters were performed by the United Kingdom's Royal Navy since the 19th century. In 1920 the Australian Hydrographic Service was formed as a part of the Royal Australian Navy.
Brazil
Hydrographic tasks in Brazilian waters have been performed by the Diretoria de Hidrografia e Navegação (DHN) since 2 February 1876.
Canada
Starting in 1883, the "Georgian Bay Survey" was responsible for hydrographic surveying of Georgian Bay and Lake Huron. Its geographic area of responsibility increased and in 1904 the name was changed to the "Hydrographic Survey of Canada." The current name Canadian Hydrographic Service (CHS) was adopted in 1928.
In 1951, Canada became a State Member of the International Hydrographic Organization (IHO) and the Dominion Hydrographer is Canada's representative.
Today, the mandate of CHS is found in the Canada Oceans Act, the Canada Shipping Act (Charts and Publications Regulations) and the Navigable Waters Protection Act.
With its headquarters office located in Ottawa, Ontario there are regional offices in Sidney (British Columbia), Burlington (Ontario), Mont-Joli (Quebec), Halifax (Nova Scotia), and a branch office in St. John's Newfoundland. CHS has 300 staff across the country.
The national chart folio consists of 950 paper charts, 541 S-57 vector Electronic Navigation Charts and 651 raster charts in the BSB format. CHS produces and maintains seven volumes of Tides and Water Levels books, 25 Sailing Directions books, and prints and distributes a number of publications such as the Annual Notices to Mariners and Radio Aids to Marine Navigation.
In addition to significant hydrographic data holdings (single & multibeam), CHS operates 78 permanent water level stations, a real time water level and forecast system in the St. Lawrence River, and participates in the operation of Atlantic & Pacific tsunami warning systems.
CHS is directly responsible for the sales and distribution of all its products, in paper and digital form. A network of 850 dealers (domestic and international) distributes CHS paper and digital products. Products and data are also made available to Value Added Resellers, under licence.
Chile
Since 1874, the Navy's Hydrographic and Oceanographic Service ("SHOA", as acronym of"Servicio Hidrográfico y Oceanográfico de la Armada") has been the Chilean official authority on drawing and publishing nautical charts of the South Pacific Ocean for Military and Civil navigation.
This institution is also the main authority on controlling the official hour of the country.
Denmark
In Denmark (including Greenland and the Faroe Islands), hydrographic surveying and charting is conducted by "Geodatastyrelsen" also known as, Danish Geodata Agency, a division of the Danish Ministry of Climate, Energy and Utilities.
France
In France, the first official organization, the French Dépôt des Cartes, Plans, Journaux et Mémoires Relatifs à la Navigation, was formed in 1720.
Today, the SHOM is the official French hydrographic office, it stands for 'Service Hydrographique et Océanographique de la Marine' and means Naval Hydrographic and Oceanographic Service.
Germany
The "Bundesamt für Seeschiffahrt und Hydrographie" (BSH) is the German federal hydrographic office. Its offices are located in Hamburg and Rostock. The BSH is responsible for a wide variety of services, among them hydrographic surveys, nautical publications, ship registration, testing and approval of technical equipment, oceanographic research, development of nautical information systems, and maritime pollution surveillance. The BSH runs six ships for survey and research purposes.
In 1945 the tasks of various predecessor organisations (among them the German Navy's hydrographic service, the Wilhelmshaven maritime observatory, and the "Deutsche Seewarte" under Georg von Neumayer) were concentrated in the newly created "Deutsches Hydrographisches Institut" (DHI) in Hamburg. In 1990 the DHI and the corresponding East German organisation, the "Seehydrographische Dienst der DDR" in Rostock were integrated to form the BSH in its present form.
Greece
The Hellenic Navy Hydrographic Service (HNHS), an independent service of the Hellenic Navy General Staff, is responsible for hydrographic surveying and production and sale of charts. The first naval hydrographic office was created in 1905 and its first mission was the hydrographic survey of Maliakos Gulf. Its first nautical chart was issued in 1909 and in 1919 the Hellenic Navy became a founding member of the International Hydrographic Organization (IHO). The hydrographic office evolved into the independent naval Hydrographic Service in 1921. Today the HNHS operates three naval hydrographic vessels: HS OS Nautilos (A-478), HS OS Pytheas (A-474) and HS Stravon (A-476).
Hong Kong
The Hong Kong Hydrographic Office is responsible for hydrographic surveying and production of nautical charts covering the waters of Hong Kong. It also produced electronic navigational charts and made available the prediction of tidal stream digitally on the internet.
Iceland
The Hydrographic department of the Icelandic Coast Guard is responsible for hydrographic surveying and production of nautical charts of Icelandic waters.
India
The Indian Naval Hydro-graphic Department (INHD) headed by Chief Hydrographer to the Government of India is an Indian Government agency responsible for hydro-graphic surveys and nautical charting in India.
New Zealand
Land Information New Zealand (LINZ) is responsible for hydrographic surveying, production of nautical charts, and provision of tidal information covering the waters of New Zealand through the New Zealand Hydrographic Authority (NZHA). Nautical charts can no longer be purchased directly from LINZ but must be purchased from Bluestar Group or from an authorised agent.
Norway
The Norwegian Hydrographic Service is responsible for hydrographic surveying and production of nautical charts covering the waters of Norway. Also operates the Primar ENC Service.
Republic of Ireland
The Republic of Ireland is actively undertaking the largest civilian seabed mapping programme in the world, as a joint venture by the Marine Institute and the Geological Survey of Ireland. Total mapping coverage of the INSS to the end of 2005 was 432,000 km2, and taken together with an earlier survey by the DCENR Petroleum Affairs Division, over 81% of the Irish designated seabed area had been mapped by the end of 2005. The INtegrated Mapping FOr the Sustainable Development of Ireland's MArine Resource (INFOMAR) programme is a successor to the Irish National Seabed Survey (INSS) and concentrates on creating a range of integrated mapping products of the physical, chemical and biological features of the seabed in the near-shore area. The programme is being funded by the Irish Government through the Department of Communications, Energy and Natural Resources as part of the National Development Plan, 2007–2013. Data are passed on to the United Kingdom Hydrographic Office (UKHO) for subsequent production of nautical charts.
Sweden
"Sjöfartsverket", Swedish Maritime Administration, includes the Swedish national hydrographic organisation. Established in 1956 and governed by the Ministry of Industry, Employment and Communications, Sjöfartsverket is responsible for most aspects of safe navigation in Sweden. This includes maintenance and marking of fairways, surveying and charting Swedish waters, pilotage, search-and-rescue (in cooperation with other organisations), ice-breaking, and safety inspections.
United Kingdom
The office of Hydrographer was created in 1795. Royal Navy charts and the related surveys reputedly had their official start as a result of the loss of Admiral Sir Cloudesley Shovell on an uncharted reef off the Scilly Isles in October 1707 (see main article Scilly naval disaster of 1707).
The United Kingdom Hydrographic Office (UKHO) is now a part of the Ministry of Defence rather than a naval department and is located in Taunton, Somerset, near Creechbarrow hill. It is best known for producing the well-known Admiralty chart series of nautical charts that covers almost every navigable stretch of water on Earth. The UKHO also calculates tide tables for the UK.
In contrast to the US government, all of whose creative work is placed into the public domain, British government policy requires agencies such as the UKHO and the Ordnance Survey to be self-funding through the sale of the information they create. The Hydrographic Office therefore actively protects the copyright of all of its data including paper charts, electronic charts, tidal data and other data and has been known to take measures to ensure that its copyrighted information is used appropriately.
In 2013 the UKHO added an important new service for users of its paper charts by allowing its authorized agents to Print on Demand most paper charts.
UKHO attracted worldwide attention in February 2005 when it published in-depth pictures of the ocean floor in the vicinity of the Indian Ocean tsunami disaster of December 26, 2004.
United States
In the United States, the Survey of the Coast (America's first scientific agency) was established through an 1807 Congressional resolution and signed into law by President Jefferson. It subsequently became the United States Coast Survey in 1836 and the United States Coast and Geodetic Survey in 1878, and in May 1917 incorporated a new uniformed service of the United States, the Coast and Geodetic Survey Corps, so that surveyors, having status as commissioned officers, could not be shot as spies if captured during time of war. The U.S. Coast and Geodetic Survey was abolished and its responsibilities, personnel, facilities, and fleet incorporated into the new National Oceanic and Atmospheric Administration (NOAA) when NOAA was established in 1970. As the successor to the Coast and Geodetic Survey, NOAA's Office of Coast Survey is the national hydrographic office of the United States.
Non-domestic hydrographic and bathymetric surveys are conducted by the United States Navy's Naval Oceanographic Office, which started with the establishment of the Depot of Charts and Instruments in 1830, which by 1854 was designated the United States Naval Observatory and Hydrographical Office. The hydrographic portion became the United States Hydrographic Office under the Hydrographer of the Navy, appointed from among uniformed U.S. Navy personnel from 1870 through 1961. With the popularization of oceanography in the early 1960s (partly due to President John F. Kennedy's interest), the name was changed to the U.S. Naval Oceanographic Office in 1962. That office, as a matter of historical and semantic interest, and the United States Naval Observatory are still part of the command overseen by the Oceanographer of the Navy, who replaced the Hydrographer of the Navy, with headquarters at the Naval Observatory in Washington, D.C. In 2001, the position of Hydrographer of the Navy was re-established.
Uruguay
Hydrographic tasks in Uruguayan waters have been performed by the SOHMA since 1916.
See also
Matthew Fontaine Maury
George W. Littlehales
References
Ehlers, P. (1999). Die Geschichte maritimer Dienste in Deutschland - Das BSH und seine Vorgänger. Retrieved Oct. 14, 2003 from http://www.bsh.de/de/Das%20BSH/Organisation/Geschichte/Geschichte.pdf
Swedish Maritime Administration (2003). Swedish Maritime Administration - Accessibility, Safety, Environment. Retrieved Oct 15, 2003 from http://www.sjofartsverket.se/tabla-a-eng/pdf/tabla-a-eng.pdf
External links
International Hydrographic Organization
Australian Hydrographic Office
Hong Kong Hydrographic Office
Indian Naval Hydrographic Department
Hydrographic and Oceanographic Department (Japan)
Bundesamt für Seeschiffahrt und Hydrographie (Germany)
United Kingdom Hydrographic Office
National Oceanic and Atmospheric Administration (United States)
Servicio Hidrográfico y Oceanográfico de la Armada (Chile)
Sjöfartsverket (Sweden)
Service Hydrographique et Océanographique de la Marine (France)
Canadian Hydrographic Service
Primar ENC Service
Hydrography division of Kort & Matrikelstyrelsen (Denmark)
Hellenic Navy Hydrographic Service
Navigation
Hydrography
Maritime safety | Hydrographic office | [
"Environmental_science"
] | 2,808 | [
"Hydrography",
"Hydrology"
] |
335,528 | https://en.wikipedia.org/wiki/Open%20Society%20Foundations | Open Society Foundations (OSF), formerly the Open Society Institute, is a US-based grantmaking network founded by business magnate George Soros. Open Society Foundations financially supports civil society groups around the world, with the stated aim of advancing justice, education, public health and independent media. The group's name was inspired by Karl Popper's 1945 book The Open Society and Its Enemies.
As of 2015, the OSF had branches in 37 countries, encompassing a group of country and regional foundations, such as the Open Society Initiative for West Africa, and the Open Society Initiative for Southern Africa. The organization’s headquarters is located at 224 West 57th Street in Midtown Manhattan, New York City. In 2018, OSF announced it was closing its European office in Budapest and moving to Berlin, in response to legislation passed by the Hungarian government targeting the foundation's activities. As of 2021, OSF has reported expenditures in excess of US$16 billion since its establishment in 1993, mostly in grants to non-governmental organizations (NGOs) aligned with the organization's mission.
History
On May 28, 1984, George Soros signed a contract between the Soros Foundation/New York City and the Hungarian Academy of Sciences, the founding document of the Soros Foundation/Budapest. This was followed by several foundations in the region to help countries move away from Soviet-style socialism in the Eastern Bloc.
In 1991, the foundation merged with the Fondation pour une entraide intellectuelle européenne ("Foundation for European Intellectual Mutual Aid"), an affiliate of the Congress for Cultural Freedom, created in 1966 to imbue 'non-conformist' Eastern European scientists with anti-totalitarian and capitalist ideas.
In 1993, the Open Society Institute was created in the United States to support the Soros foundations in Central and Eastern Europe and Russia.
In August 2010, it started using the name Open Society Foundations (OSF) to better reflect its role as a benefactor for civil society groups in countries around the world.
In 1995, Soros stated that he believed there can be no absolute answers to political questions because the same principle of reflexivity applies as in financial markets.
In 2012, Christopher Stone joined the OSF as the second president. He replaced Aryeh Neier, who served as president from 1993 to 2012. Stone announced in September 2017 that he was stepping down as president. In January 2018, Patrick Gaspard was appointed president of the Open Society Foundations. He announced in December 2020 that he was stepping down as president. In January 2021, Mark Malloch-Brown was appointed president of the Open Society Foundations. On March 11, 2024, OSF announced that Binaifer Nowrojee would start as the group's new president on June 1, 2024.
In 2016, the OSF was reportedly the target of a cyber security breach. Documents and information reportedly belonging to the OSF were published by a website. The cyber security breach has been described as sharing similarities with Russian-linked cyberattacks that targeted other institutions, such as the Democratic National Committee.
In 2017, Soros transferred $18 billion to the foundation.
In 2020, Soros announced that he was creating the Open Society University Network (OSUN), endowing the network with $1 billion.
In 2023, George Soros handed over the leadership of the foundation to his son Alexander Soros, who soon announced layoffs of 40 percent of staff and "significant changes" to the operating model.
Activities
The Library of Congress Soros Foundation Visiting Fellows Program was initiated in 1990.
Its $873 million budget in 2013 ranked as the second-largest private philanthropy budget in the United States, after the Bill and Melinda Gates Foundation budget of $3.9 billion. As of 2020, its budget increased to $1.2 billion.
In August 2013, the foundation partly sponsored an Aromanian cultural event in Malovište, North Macedonia.
The foundation reported granting at least $33 million to civil rights and social justice organizations in the United States. This funding included groups such as the Organization for Black Struggle and Missourians Organizing for Reform and Empowerment that supported protests in the wake of the killing of Trayvon Martin, the death of Eric Garner, the shooting of Tamir Rice and the shooting of Michael Brown. According to OpenSecrets, the OSF spends much of its resources on democratic causes around the world, and has also contributed to groups such as the Tides Foundation.
The OSF has been a major financial supporter of US immigration reform, including establishing a pathway to citizenship for undocumented immigrants.
OSF projects have included the National Security and Human Rights Campaign and the Lindesmith Center, which conducted research on drug reform.
The OSF became a partner of the National Democratic Institute, a charitable organization which partnered with pro-democracy groups like the Gov2U project run by Scytl.
On January 23, 2020, the OSF announced a contribution of $1 billion from George Soros for the new Open Society University Network (OSUN), which supports Western university faculty in providing university courses, programs, and research to serve neglected student populations worldwide at institutions needing international partners. The founding institutions were Bard College and Central European University.
In April 2022, OSF announced a grant of $20 million to the International Crisis Group in support of efforts to analyze global issues fuelling violence, climate injustice and economic inequality and providing recommendations to address them.
OSF has given grants to Jewish Voice for Peace.
Critical reception
In 2007, Nicolas Guilhot (a senior research associate at the French National Centre for Scientific Research) wrote in Critical Sociology that the Open Society Foundations is functionally conservative in supporting institutions that reinforce the existing social order, as the Ford Foundation and Rockefeller Foundation have done before them. Guilhot argues that control over the social sciences by moneyed interests, rather than by public officials, reinforced a neoliberal view of modernization.
An OSF effort in 2008 in the African Great Lakes region aimed at spreading human rights awareness among prostitutes in Uganda and other nations in the area was rejected by Ugandan authorities, who considered it an effort to legalize and legitimize prostitution.
Open Society Foundations has been criticized in the pro-Israel publications Tablet, Arutz Sheva and Jewish Press for funding the activist groups Adalah and I'lam, which they accuse of being anti-Israel and of supporting the Boycott, Divestment and Sanctions movement. Among the documents released in 2016 by DCleaks, an OSF report reads "For a variety of reasons, we wanted to construct a diversified portfolio of grants dealing with Israel and Palestine, funding both Israeli Jewish and PCI (Palestinian Citizens of Israel) groups as well as building a portfolio of Palestinian grants and in all cases to maintain a low profile and relative distance—particularly on the advocacy front."
In 2013, NGO Monitor, an Israeli NGO, reported that "Soros has been a frequent critic of Israeli government policy, and does not consider himself a Zionist, but there is no evidence that he or his family holds any special hostility or opposition to the existence of the state of Israel. This report will show that their support, and that of the Open Society Foundations, has nevertheless gone to organizations with such agendas." The report says its objective is to inform the OSF, claiming: "The evidence demonstrates that Open Society funding contributes significantly to anti-Israel campaigns in three important respects:
Active in the Durban strategy;
Funding aimed at weakening United States support for Israel by shifting public opinion regarding the Israeli-Palestinian conflict and Iran;
Funding for Israeli political opposition groups on the fringes of Israeli society, which use the rhetoric of human rights to advocate for marginal political goals."
The report concludes, "Yet, to what degree Soros, his family, and the Open Society Foundations are aware of the cumulative impact on Israel and of the political warfare conducted by many of their beneficiaries is an open question."
In November 2015, Russia banned the group on its territory, declaring "It was found that the activity of the Open Society Foundations and the Open Society Institute Assistance Foundation represents a threat to the foundations of the constitutional system of the Russian Federation and the security of the state".
In 2017, Open Society Foundations and other NGOs for open government and refugee assistance were targeted by authoritarian and populist governments emboldened by the first Trump Administration. Several right-leaning politicians in eastern Europe regard many of the NGO groups to be irritants if not threats, including Liviu Dragnea in Romania, Szilard Nemeth in Hungary, Nikola Gruevski in North Macedonia (who called for "de-Sorosization"), and Jarosław Kaczyński of Poland (who has said that Soros-funded groups want "societies without identity"). Some of the Soros-funded advocacy groups in the region said the harassment and intimidation became more open after the 2016 election of Donald Trump in the United States. Stefania Kapronczay of the Hungarian Civil Liberties Union, which received half of its funding from Soros-backed foundations, claimed that Hungarian officials were "testing the waters" in an effort to see "what they can get away with."
In 2017, the government of Pakistan ordered the Open Society Foundations to cease operations in the country.
In May 2018, Open Society Foundations announced they will move its office from Budapest to Berlin, amid Hungarian government interference.
In November 2018, Open Society Foundations announced they are ceasing operations in Turkey and closing their Istanbul and Ankara offices due to "false accusations and speculations beyond measure", amid pressure from the Turkish government including detention of liberal Turkish intellectuals and academics even tangentially associated with the foundation.
See also
Alliance for Open Society International
Blinken Open Society Archives
Budapest Open Access Initiative
Central European University
Colour revolution
Directory of Open Access Journals
Open society
Open Society Foundations–Armenia
Open Society Institute-Baltimore
Transparency International
Transparify
References
Further reading
Stone, Diana (2013) Knowledge Actors and Transnational Governance: The Private-Public Policy Nexus in the Global Agora. Palgrave Macmillan
External links
Open Society Foundations official website
Blinken Open Society Archives
Civic organizations
George Soros
Non-profit technology
Political and economic research foundations in the United States
Organizations established in 1993
Global policy organizations
Grants (money)
Organizations listed in Russia as undesirable | Open Society Foundations | [
"Technology"
] | 2,087 | [
"Information technology",
"Non-profit technology"
] |
335,612 | https://en.wikipedia.org/wiki/Nuclear%20Overhauser%20effect | The nuclear Overhauser effect (NOE) is the transfer of nuclear spin polarization from one population of spin-active nuclei (e.g. 1H, 13C, 15N etc.) to another via cross-relaxation. A phenomenological definition of the NOE in nuclear magnetic resonance spectroscopy (NMR) is the change in the integrated intensity (positive or negative) of one NMR resonance that occurs when another is saturated by irradiation with an RF field. The change in resonance intensity of a nucleus is a consequence of the nucleus being close in space to those directly affected by the RF perturbation.
The NOE is particularly important in the assignment of NMR resonances, and the elucidation and confirmation of the structures or configurations of organic and biological molecules. The 1H two-dimensional NOE spectroscopy (NOESY) experiment and its extensions are important tools to identify the stereochemistry of proteins and other biomolecules in solution, whereas for solids X-ray crystallography is typically used to identify stereochemistry. The heteronuclear NOE is particularly important in 13C NMR spectroscopy to identify carbons bonded to protons, to provide polarization enhancements to such carbons to increase signal-to-noise, and to ascertain the extent to which the relaxation of these carbons is controlled by the dipole-dipole relaxation mechanism.
History
The NOE developed from the theoretical work of American physicist Albert Overhauser who in 1953 proposed that nuclear spin polarization could be enhanced by the microwave irradiation of the conduction electrons in certain metals. The electron-nuclear enhancement predicted by Overhauser was experimentally demonstrated in 7Li metal by T. R. Carver and C. P. Slichter also in 1953. A general theoretical basis and experimental observation of an Overhauser effect involving only nuclear spins in the HF molecule was published by Ionel Solomon in 1955. Another early experimental observation of the NOE was used by Kaiser in 1963 to show how the NOE may be used to determine the relative signs of scalar coupling constants, and to assign spectral lines in NMR spectra to transitions between energy levels. In this study, the resonance of one population of protons (1H) in an organic molecule was enhanced when a second distinct population of protons in the same organic molecule was saturated by RF irradiation. The application of the NOE was used by Anet and Bourn in 1965 to confirm the assignments of the NMR resonances for β,β-dimethylacrylic acid and dimethyl formamide, thereby showing that conformation and configuration information about organic molecules in solution can be obtained. Bell and Saunders reported direct correlation between NOE enhancements and internuclear distances in 1970 while quantitative measurements of internuclear distances in molecules with three or more spins was reported by Schirmer et al.
Richard R. Ernst was awarded the 1991 Nobel Prize in Chemistry for developing Fourier transform and two-dimensional NMR spectroscopy, which was soon adapted to the measurement of the NOE, particularly in large biological molecules. In 2002, Kurt Wuthrich won the Nobel Prize in Chemistry for the development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution, demonstrating how the 2D NOE method (NOESY) can be used to constrain the three-dimensional structures of large biological macromolecules. Professor Anil Kumar was the first to apply the two-dimensional Nuclear Overhauser Effect (2D-NOE now known as NOESY) experiment to a biomolecule, which opened the field for the determination of three-dimensional structures of biomolecules in solution by NMR spectroscopy.
Relaxation
The NOE and nuclear spin-lattice relaxation are closely related phenomena. For a single spin-½ nucleus in a magnetic field there are two energy levels that are often labeled α and β, which correspond to the two possible spin quantum states, +½ and −½, respectively. At thermal equilibrium, the population of the two energy levels is determined by the Boltzmann distribution with spin populations given by Pα and Pβ. If the spin populations are perturbed by an appropriate RF field at the transition energy frequency, the spin populations return to thermal equilibrium by a process called spin-lattice relaxation. The rate of transitions from α to β is proportional to the population of state α, Pα, and is a first order process with rate constant W. The condition where the spin populations are equalized by continuous RF irradiation (Pα = Pβ) is called saturation and the resonance disappears since transition probabilities depend on the population difference between the energy levels.
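To put rough numbers on these equilibrium populations, the following Python sketch evaluates the Boltzmann population difference for a spin-½ nucleus; the field strength, temperature, and proton gyromagnetic ratio used are illustrative assumptions rather than values taken from the text above.

```python
import math

# Physical constants (SI units)
HBAR = 1.054_571_8e-34   # reduced Planck constant, J s
K_B = 1.380_649e-23      # Boltzmann constant, J/K

def population_difference(gamma, b0, temperature):
    """Fractional excess of alpha spins over beta spins for a spin-1/2 nucleus.

    The two Zeeman levels are split by dE = hbar * gamma * B0, and the
    Boltzmann distribution gives P_beta / P_alpha = exp(-dE / kT).
    """
    delta_e = HBAR * gamma * b0
    ratio = math.exp(-delta_e / (K_B * temperature))   # P_beta / P_alpha
    return (1 - ratio) / (1 + ratio)                   # (P_alpha - P_beta) / total

# Example (assumed values): protons at 11.7 T (a 500 MHz spectrometer) and 298 K
GAMMA_1H = 2.675e8  # rad s^-1 T^-1
print(f"Population excess: {population_difference(GAMMA_1H, 11.7, 298.0):.2e}")
# ~4e-5, i.e. only a few spins in 100,000 contribute to the net magnetization
```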
In the simplest case where the NOE is relevant, the resonances of two spin-½ nuclei, I and S, are chemically shifted but not J-coupled. The energy diagram for such a system has four energy levels that depend on the spin-states of I and S corresponding to αα, αβ, βα, and ββ, respectively. The Ws are the probabilities per unit time that a transition will occur between the four energy levels, or in other terms the rate at which the corresponding spin flips occur. There are two single quantum transitions, W1I, corresponding to αα ➞ βα and αβ ➞ ββ; W1S, corresponding to αα ➞ αβ and βα ➞ ββ; a zero quantum transition, W0, corresponding to βα ➞ αβ; and a double quantum transition, W2, corresponding to αα ➞ ββ.
While rf irradiation can only induce single-quantum transitions (due to so-called quantum mechanical selection rules) giving rise to observable spectral lines, dipolar relaxation may take place through any of the pathways. The dipolar mechanism is the only common relaxation mechanism that can cause transitions in which more than one spin flips. Specifically, the dipolar relaxation mechanism gives rise to transitions between the αα and ββ states (W2) and between the αβ and the βα states (W0).
Expressed in terms of their bulk NMR magnetizations, the experimentally observed steady-state NOE for nucleus I when the resonance of nucleus S is saturated is defined by the expression:

ηI(S) = (MzI − M0I) / M0I

where M0I is the magnetization (resonance intensity) of nucleus I at thermal equilibrium. An analytical expression for the NOE can be obtained by considering all the relaxation pathways and applying the Solomon equations to obtain

ηI(S) = (γS / γI) (σIS / ρIS)

where

ρIS = W0 + 2W1I + W2 and σIS = W2 − W0.

ρIS is the total longitudinal dipolar relaxation rate (1/T1) of spin I due to the presence of spin S, σIS is referred to as the cross-relaxation rate, and γI and γS are the magnetogyric ratios characteristic of the I and S nuclei, respectively.
Saturation of the degenerate W1S transitions disturbs the equilibrium populations so that Pαα = Pαβ and Pβα = Pββ. The system's relaxation pathways, however, remain active and act to re-establish an equilibrium, except that the W1S transitions are irrelevant because the population differences across these transitions are fixed by the RF irradiation, while the population difference between the W1I transitions does not change from their equilibrium values. This means that if only the single quantum transitions were active as relaxation pathways, saturating the S resonance would not affect the intensity of the I resonance. Therefore, to observe an NOE on the resonance intensity of I, the contribution of W2 and W0 must be important. These pathways, known as cross-relaxation pathways, only make a significant contribution to the spin-lattice relaxation when the relaxation is dominated by dipole-dipole or scalar coupling interactions, but the scalar interaction is rarely important and is assumed to be negligible. In the homonuclear case, if W2 is the dominant relaxation pathway, then saturating S increases the intensity of the I resonance and the NOE is positive, whereas if W0 is the dominant relaxation pathway, saturating S decreases the intensity of the I resonance and the NOE is negative.
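As a minimal numerical illustration of the expression above, the following Python sketch evaluates ηI(S) = (γS/γI)(W2 − W0)/(W0 + 2W1I + W2) for two hypothetical sets of transition rates; the rate values are invented purely to show how the sign of the NOE follows from whether W2 or W0 dominates.

```python
def steady_state_noe(w0, w1_i, w2, gamma_ratio=1.0):
    """Steady-state NOE eta_I(S) from the Solomon-equation result.

    gamma_ratio is gamma_S / gamma_I (1.0 for the homonuclear case).
    """
    sigma_is = w2 - w0            # cross-relaxation rate
    rho_is = w0 + 2 * w1_i + w2   # total dipolar longitudinal relaxation rate
    return gamma_ratio * sigma_is / rho_is

# Hypothetical rates (arbitrary units) where W2 dominates: the NOE is positive.
print(steady_state_noe(w0=1.0, w1_i=1.5, w2=6.0))   # -> +0.5

# Hypothetical rates where W0 dominates: the NOE is negative.
print(steady_state_noe(w0=1.0, w1_i=0.05, w2=0.02)) # -> about -0.9
```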
Molecular motion
Whether the NOE is positive or negative depends sensitively on the degree of rotational molecular motion. The three dipolar relaxation pathways contribute to differing extents to the spin-lattice relaxation depending on a number of factors. A key one is that the balance between W2, W1 and W0 depends crucially on the molecular rotational correlation time, τc, the time it takes a molecule to rotate one radian. NMR theory shows that the transition probabilities are related to τc and the Larmor precession frequencies, ω, by the relations:

W0 ∝ r−6 τc / (1 + (ωI − ωS)2τc2)

W1 ∝ r−6 τc / (1 + ωI2τc2)

W2 ∝ r−6 τc / (1 + (ωI + ωS)2τc2)

where r is the distance separating the two spin-½ nuclei.
For relaxation to occur, the frequency of molecular tumbling must match the Larmor frequency of the nucleus. In mobile solvents, molecular tumbling motion is much faster than the Larmor precession, the so-called extreme-narrowing limit where ω0τc ≪ 1. Under these conditions the double-quantum relaxation W2 is more effective than W1 or W0, because τc and 2ω0 match better than τc and ω1. When W2 is the dominant relaxation process, a positive NOE results.
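The dependence of the homonuclear NOE on the correlation time can be sketched numerically. The short Python example below uses the proportionalities quoted above together with the standard relative weights of the dipolar transition probabilities (W0 : W1 : W2 = 1 : 3/2 : 6 in the same units), an assumption of the idealized two-spin model rather than something stated in the text; it reproduces the familiar behaviour of a maximum NOE of +1/2 for fast tumbling, a sign change near ω0τc ≈ 1.12, and a limit of −1 for very slow tumbling.

```python
def homonuclear_noe(omega_tau):
    """Steady-state homonuclear NOE as a function of x = omega0 * tau_c.

    Reduced spectral density j(x) = 1 / (1 + x**2); tau_c and the common
    r^-6 dipolar prefactor are divided out since they cancel in the NOE ratio.
    Relative transition-probability weights: W0 : W1 : W2 = 1 : 3/2 : 6.
    """
    j0 = 1.0                                   # j(omega_I - omega_S) ~ j(0)
    j1 = 1.0 / (1.0 + omega_tau ** 2)          # j(omega_0)
    j2 = 1.0 / (1.0 + (2.0 * omega_tau) ** 2)  # j(2 * omega_0)
    w0, w1, w2 = 1.0 * j0, 1.5 * j1, 6.0 * j2
    return (w2 - w0) / (w0 + 2.0 * w1 + w2)

for x in (0.1, 1.0, 1.12, 2.0, 10.0):
    print(f"omega0*tau_c = {x:5.2f}  NOE = {homonuclear_noe(x):+.3f}")
# Small, fast-tumbling molecules (x << 1) give NOE -> +0.5;
# large, slowly tumbling molecules (x >> 1) give NOE -> -1.
```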
In the extreme-narrowing limit the maximum enhancement reduces to ηIS(max) = γS/2γI. This expression shows that for the homonuclear case where I = S, most notably for 1H NMR, the maximum NOE that can be observed is 1/2, irrespective of the proximity of the nuclei. In the heteronuclear case where I ≠ S, the maximum NOE is given by (1/2)(γS/γI), which, when observing heteronuclei under conditions of broadband proton decoupling, can produce major sensitivity improvements. The most important example in organic chemistry is observation of 13C while decoupling 1H, which also saturates the 1H resonances. The value of γS/γI is close to 4, which gives a maximum NOE enhancement of 200%, yielding resonances 3 times as strong as they would be without the NOE. In many cases, carbon atoms have an attached proton, which causes the relaxation to be dominated by dipolar relaxation and the NOE to be near maximum. For non-protonated carbon atoms the NOE enhancement is small, while for carbons that relax by mechanisms other than dipole-dipole interactions the NOE enhancement can be significantly reduced. This is one motivation for using deuteriated solvents (e.g. CDCl3) in 13C NMR. Since deuterium relaxes by the quadrupolar mechanism, there are no cross-relaxation pathways and the NOE is non-existent. Another important case is 15N, whose magnetogyric ratio is negative. Often 15N resonances are reduced, or the NOE may actually null out the resonance, when 1H nuclei are decoupled. It is usually advantageous to take such spectra with pulse techniques that involve polarization transfer from protons to 15N to minimize the negative NOE.
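The heteronuclear numbers quoted in this paragraph can be checked directly from ηmax = γS/2γI. The gyromagnetic ratios in the sketch below are standard literature values supplied for illustration; they are not given in the text above.

```python
# Gyromagnetic ratios in 10^7 rad s^-1 T^-1 (standard literature values)
GAMMA = {"1H": 26.752, "13C": 6.728, "15N": -2.713}

def max_noe(observed, saturated):
    """Maximum steady-state NOE when 'saturated' is irradiated and 'observed' is detected."""
    return GAMMA[saturated] / (2.0 * GAMMA[observed])

print(f"13C{{1H}}: eta_max = {max_noe('13C', '1H'):+.2f}")  # about +1.99 -> ~200% enhancement
print(f"15N{{1H}}: eta_max = {max_noe('15N', '1H'):+.2f}")  # about -4.93 -> signal may be nulled
```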
Structure elucidation
While the relationship of the steady-state NOE to internuclear distance is complex, depending on relaxation rates and molecular motion, in many instances for small rapidly tumbling molecules in the extreme-narrowing limit, the semiquantitative nature of positive NOE's is useful for many structural applications often in combination with the measurement of J-coupling constants. For example, NOE enhancements can be used to confirm NMR resonance assignments, distinguish between structural isomers, identify aromatic ring substitution patterns and aliphatic substituent configurations, and determine conformational preferences.
Nevertheless, the inter-atomic distances derived from the observed NOE can often help to confirm the three-dimensional structure of a molecule. In this application, the NOE differs from the application of J-coupling in that the NOE occurs through space, not through chemical bonds. Thus, atoms that are in close proximity to each other can give a NOE, whereas spin coupling is observed only when the atoms are connected by 2–3 chemical bonds. However, the relation ηIS(max) = γS/2γI obscures how the NOE is related to internuclear distances because it applies only for the idealized case where the relaxation is 100% dominated by dipole-dipole interactions between two nuclei I and S. In practice, the value of ρI contains contributions from other competing mechanisms, which serve only to reduce the influence of W0 and W2 by increasing W1. Sometimes, for example, relaxation due to electron-nuclear interactions with dissolved oxygen or paramagnetic metal ion impurities in the solvent can prohibit the observation of weak NOE enhancements. The observed NOE in the presence of other relaxation mechanisms is given by

ηI(S) = (γS / γI) σIS / (ρIS + ρ*)

where ρ* is the additional contribution to the total relaxation rate from relaxation mechanisms not involving cross relaxation. Using the same idealized two-spin model for dipolar relaxation in the extreme narrowing limit, σIS and ρIS are both proportional to τc/rIS6, with σIS = ρIS/2:

ρIS = K τc / rIS6

where K collects the dipolar constants. It is easy to show that

ηI(S) = (γS / 2γI) · 1 / (1 + ρ* rIS6 / (K τc))
Thus, the two-spin steady-state NOE depends on internuclear distance only when there is a contribution from external relaxation. Bell and Saunders showed that, under strict assumptions, ρ*/τc is nearly constant for similar molecules in the extreme narrowing limit. Therefore, taking ratios of steady-state NOE values can give relative values for the internuclear distance r. While the steady-state experiment is useful in many cases, it can only provide information on relative internuclear distances. On the other hand, the initial rate at which the NOE grows is proportional to rIS−6, which provides other more sophisticated alternatives for obtaining structural information via transient experiments such as 2D-NOESY.
Two-dimensional NMR
The motivations for using two-dimensional NMR for measuring NOE's are similar as for other 2-D methods. The maximum resolution is improved by spreading the affected resonances over two dimensions, therefore more peaks are resolved, larger molecules can be observed and more NOE's can be observed in a single measurement. More importantly, when the molecular motion is in the intermediate or slow motional regimes when the NOE is either zero or negative, the steady-state NOE experiment fails to give results that can be related to internuclear distances.
Nuclear Overhauser Effect Spectroscopy (NOESY) is a 2D NMR spectroscopic method used to identify nuclear spins undergoing cross-relaxation and to measure their cross-relaxation rates. Since 1H dipole-dipole couplings provide the primary means of cross-relaxation for organic molecules in solution, spins undergoing cross-relaxation are those close to one another in space. Therefore, the cross peaks of a NOESY spectrum indicate which protons are close to each other in space. In this respect, the NOESY experiment differs from the COSY experiment that relies on J-coupling to provide spin-spin correlation, and whose cross peaks indicate which 1H's are close to which other 1H's through the chemical bonds of the molecule.
The basic NOESY sequence consists of three 90° pulses. The first pulse creates transverse spin magnetization. The spins precess during the evolution time t1, which is incremented during the course of the 2D experiment. The second pulse produces longitudinal magnetization equal to the transverse magnetization component orthogonal to the pulse direction. Thus, the idea is to produce an initial condition for the mixing period τm. During the NOE mixing time, magnetization transfer via cross-relaxation can take place. For the basic NOESY experiment, τm is kept constant throughout the 2D experiment, but chosen for the optimum cross-relaxation rate and build-up of the NOE. The third pulse creates transverse magnetization from the remaining longitudinal magnetization. Data acquisition begins immediately following the third pulse and the transverse magnetization is observed as a function of the pulse delay time t2. The NOESY spectrum is generated by a 2D Fourier transform with respect to t1 and t2. A series of experiments are carried out with increasing mixing times, and the increase in NOE enhancement is followed. The closest protons show the most rapid build-up rates of the NOE.
Inter-proton distances can be determined from unambiguously assigned, well-resolved, high signal-to-noise NOESY spectra by analysis of cross peak intensities. These may be obtained by volume integration and can be converted into estimates of interproton distances. The distance rij between two atoms i and j can be calculated from the cross-peak volume Vij and a scaling constant c:

rij = c Vij−1/6

where c can be determined from measurements of known fixed distances. The range of distances can be reported based on known distances and volumes in the spectrum, which gives a mean scaling constant and its standard deviation, a measurement of multiple regions in the NOESY spectrum showing no peaks (i.e. noise), and a measurement error. The scaling constant is chosen so that all known distances fall within the error bounds; this yields lower and upper bounds on the cross-peak volume, and hence on the distance estimate, for each proton pair.
Such fixed distances depend on the system studied. For example, locked nucleic acids (LNAs) have many atoms in the sugar whose separations vary very little, which allows estimation of the glycosidic torsion angles and has allowed NMR to benchmark LNA molecular dynamics predictions. RNAs, however, have sugars that are much more conformationally flexible and require wider lower and upper bounds.
In protein structural characterization, NOEs are used to create constraints on intramolecular distances. In this method, each proton pair is considered in isolation and NOESY cross peak intensities are compared with a reference cross peak from a proton pair of fixed distance, such as a geminal methylene proton pair or aromatic ring protons. This simple approach is reasonably insensitive to the effects of spin diffusion or non-uniform correlation times and can usually lead to definition of the global fold of the protein, provided a sufficiently large number of NOEs have been identified. NOESY cross peaks can be classified as strong, medium or weak and can be translated into upper distance restraints of around 2.5, 3.5 and 5.0 Å, respectively. Such constraints can then be used in molecular mechanics optimizations to provide a picture of the solution state conformation of the protein. Full structure determination relies on a variety of NMR experiments and optimization methods utilizing both chemical shift and NOESY constraints.
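A simplified sketch of such a calibration in Python (all volumes and peak labels below are invented for illustration, and the isolated spin-pair approximation is assumed): each unknown distance is obtained from the ratio of its cross-peak volume to that of a reference pair of known separation, and the result is binned into the strong/medium/weak restraint classes quoted above.

# Isolated spin-pair calibration: r = r_ref * (V_ref / V) ** (1/6)
r_ref = 1.78   # Å, e.g. a geminal methylene proton pair with a fixed, known distance
V_ref = 1.0    # cross-peak volume of the reference pair (arbitrary units)

cross_peaks = {"HA-HB": 0.85, "HA-HN": 0.12, "HB-HG": 0.03}   # invented example volumes

def restraint_class(r):
    # Bin a distance estimate into the upper-bound classes mentioned in the text.
    if r <= 2.5:
        return "strong (upper bound 2.5 Å)"
    if r <= 3.5:
        return "medium (upper bound 3.5 Å)"
    return "weak (upper bound 5.0 Å)"

for pair, volume in cross_peaks.items():
    r = r_ref * (V_ref / volume) ** (1.0 / 6.0)
    print(f"{pair}: r ≈ {r:.2f} Å -> {restraint_class(r)}")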
Heteronuclear NOE
Some experimental methods
Some examples of one and two-dimensional NMR experimental techniques exploiting the NOE include:
NOESY, Nuclear Overhauser effect Spectroscopy
HOESY, Heteronuclear Overhauser effect spectroscopy
ROESY, Rotational frame nuclear Overhauser effect spectroscopy
TRNOE, Transferred nuclear Overhauser effect
DPFGSE-NOE, Double pulsed field gradient spin echo NOE experiment
NOESY is used to determine the relative orientations of atoms in a molecule, for example a protein or other large biological molecule, yielding a three-dimensional structure. HOESY provides NOESY-type cross-correlation between atoms of different elements. ROESY involves spin-locking the magnetization to prevent it from going to zero, and is applied to molecules for which regular NOESY is not applicable. TRNOE measures the NOE between two different molecules interacting in the same solution, as in a ligand binding to a protein. DPFGSE-NOE is a transient experiment that allows suppression of strong signals and thus detection of very small NOEs.
Examples of nuclear Overhauser effect
The figure (top) displays how Nuclear Overhauser Effect Spectroscopy can elucidate the structure of a switchable compound. In this example, the proton designated {H} shows two different sets of NOEs depending on the isomerization state (cis or trans) of the switchable azo groups. In the trans state, proton {H} is far from the phenyl group and shows the NOEs coloured blue; the cis state holds proton {H} in the vicinity of the phenyl group, resulting in the emergence of new NOEs (shown in red).
Another example (bottom) of an application where the NOE is useful for assigning resonances and determining configuration is polysaccharides. For instance, complex glucans possess a multitude of overlapping signals, especially in a proton spectrum. It is therefore advantageous to use 2D NMR experiments, including NOESY, for the assignment of signals. See, for example, NOE of carbohydrates.
See also
Dynamic nuclear polarization
Magnetization
Nuclear magnetic resonance
Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance spectroscopy of proteins
Proton nuclear magnetic resonance
Spin polarization
Two-dimensional nuclear magnetic resonance spectroscopy
References
External links
Hans J. Reich: The Nuclear Overhauser Effect
Eugene E. Kwan: Lecture12: The Nuclear Overhauser Effect
Williams, Martin and Rovnyak Vol 2: R. R. Gil and A. Navarro-Vázquez: Chapter 1 Application of the Nuclear Overhauser Effect to the Structural Elucidation of Natural Products
James Keeler: 8 Relaxation
YouTube: James Keeler, Lecture 10, Relaxation II. 2013 Cambridge lecture on NOE
Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance
Chemical physics | Nuclear Overhauser effect | [
"Physics",
"Chemistry"
] | 4,326 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance spectroscopy",
"nan",
"Nuclear physics",
"Spectroscopy",
"Chemical physics"
] |
335,634 | https://en.wikipedia.org/wiki/Turret%20%28architecture%29 | In architecture, a turret is a small circular tower, usually notably smaller than the main structure, that projects outwards from a wall or corner of that structure. Turret also refers to the small towers built atop larger tower structures.
Etymology
The word turret originated in around the year 1300 from touret which meant “small tower rising from a city wall, castle, or other larger building.” Touret came from the Old French term torete which is the diminutive form of tour, meaning “tower.” Tour dates back to the Latin word turris which also means “tower.”
There is a record from 1862 of turret being used to mean “low, flat gun tower on a warship.” Around this time, the word split into two separate definitions, with this definition being the one that goes on to describe gun turrets, a separate idea from the architectural element.
Uses
Turrets initially arose on castles out of a defensive need for greater visibility. Since they project outwards from the main structure, turrets gave garrisons a better line of sight to spot possible attackers. They thus also provided a better position from which defending forces could fight. Turrets constructed above the rest of a structure further improve visibility, providing 360-degree views of the surrounding land and allowing enemies to be spotted from further away. This provided more time for a fortress’s defenders to prepare for an attack. Turrets offered greater resilience to attacks and were less vulnerable than free-standing watch towers.
As their defensive necessity lessened, turrets began to be used as ornamental elements instead. Turrets were sometimes used to house staircases, and towards the end of the thirteenth century they became important in this fashion. They allowed for the staircases to occupy smaller spaces without affecting the layout of the structure to which they were attached. Since turrets project outward from a structure, they directed attention, and more ornamentation was focused on them than the rest of the facade.
Structure
Turrets could vary in size, although they all shared the appearance of small towers, either built into walls or atop larger towers. They projected outward from the structure they were incorporated into, greatly contributing to the characteristics discussed in the "Uses" section. Turrets do not extend down to the ground like full-sized towers. When built into walls, turrets are generally found at the corner of structures where two walls meet. Sometimes, however, they are found in the middle of a wall. Since turrets projected outward from a structure, they had to be supported either by weight-bearing corbels or be cantilevered. This put a restriction on how large a turret could be constructed. Turrets were expensive to build, as hoisting stones high above the ground to construct them was highly laborious. It is thought that many were timber-framed and clad in stone, which would have reduced the weight to be supported by corbels or cantilevers and reduced the cost of construction. Turrets were traditionally supported by a corbel. The top of a turret could be finished with a pointed roof or another type of apex or might have had crenellations, such as in the image above.
Turrets on homes
In the modern day, turrets are most commonly found on homes. These turrets are still towers that project outwardly from the main structure, not extending down to the ground. Residential turrets were greatly popularized in the Queen Anne residential style, and can often be found on a variety of Victorian and Queen Anne home designs today. Some residential turrets are designed to be open-air balconies as well. Turrets can help to bring in more natural light and are often used to create more space in a home. These elements make a property more interesting to prospective buyers, and homes with a turret generally appraise higher than those without one. On the other hand, turrets usually increase the construction costs of a home, as they are more difficult to frame and support than more common elements.
Gallery
See also
Bartizan, an overhanging, wall-mounted turret found particularly on French and Spanish fortifications between the early 14th and the 16th century. They returned to prominence in the 19th century with their popularity in Scottish baronial style.
Bay window
Oriel window
Turret (Hadrian's Wall)
References
Architectural elements
Fortification (architectural elements)
Castle architecture | Turret (architecture) | [
"Technology",
"Engineering"
] | 855 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
335,736 | https://en.wikipedia.org/wiki/Ferdinand%20von%20Lindemann | Carl Louis Ferdinand von Lindemann (12 April 1852 – 6 March 1939) was a German mathematician, noted for his proof, published in 1882, that π (pi) is a transcendental number, meaning it is not a root of any polynomial with rational coefficients.
Life and education
Lindemann was born in Hanover, the capital of the Kingdom of Hanover. His father, Ferdinand Lindemann, taught modern languages at a Gymnasium in Hanover. His mother, Emilie Crusius, was the daughter of the Gymnasium's headmaster. The family later moved to Schwerin, where young Ferdinand attended school.
He studied mathematics at Göttingen, Erlangen, and Munich. At Erlangen he received a doctorate, supervised by Felix Klein, on non-Euclidean geometry. Lindemann subsequently taught in Würzburg and at the University of Freiburg. During his time in Freiburg, Lindemann devised his proof that π is a transcendental number (see Lindemann–Weierstrass theorem). After his time in Freiburg, Lindemann transferred to the University of Königsberg. While a professor in Königsberg, Lindemann acted as supervisor for the doctoral theses of the mathematicians David Hilbert, Hermann Minkowski, and Arnold Sommerfeld.
Transcendence proof
In 1882, Lindemann published the result for which he is best known, the transcendence of π. His methods were similar to those used nine years earlier by Charles Hermite to show that e, the base of natural logarithms, is transcendental. Before the publication of Lindemann's proof, it was known that π was irrational, as Johann Heinrich Lambert had proved in the 1760s.
References
External links
Lindemann, F. "Über die Zahl π", Mathematische Annalen 20 (1882): pp. 213–225.
1852 births
1939 deaths
19th-century German mathematicians
20th-century German mathematicians
Squaring the circle
German number theorists
Scientists from Hanover
People from the Kingdom of Hanover
University of Göttingen alumni
University of Erlangen-Nuremberg alumni
Ludwig Maximilian University of Munich alumni
Academic staff of the University of Königsberg | Ferdinand von Lindemann | [
"Mathematics"
] | 440 | [
"Geometry problems",
"Squaring the circle",
"Euclidean plane geometry",
"Planes (geometry)",
"Mathematical problems",
"Pi"
] |
335,773 | https://en.wikipedia.org/wiki/Christian%20Friedrich%20Sch%C3%B6nbein | Christian Friedrich Schönbein HFRSE (18 October 1799 – 29 August 1868) was a German-Swiss chemist who is best known for inventing the fuel cell (1838) at the same time as William Robert Grove, and for his discoveries of guncotton and ozone. He also created the concept of geochemistry in 1838.
Life
Schönbein (Schoenbein), related to Michael Schoenbein, was born at Metzingen in the Duchy of Württemberg. Around the age of 13 he was apprenticed to a chemical and pharmaceutical firm at Böblingen. Through his own efforts, he acquired sufficient scientific skills and knowledge to ask for, and receive, an examination by the professor of chemistry at Tübingen. Schönbein passed the exam and, after a series of moves and university studies, eventually acquired a position at the University of Basel in 1828, becoming a full professor in 1835. He remained there until his death in 1868, and was buried in Basel.
Fuel cell
In 1839, Schönbein published the principle of the fuel cell in the "Philosophical Magazine".
Ozone
While doing experiments on the electrolysis of water at the University of Basel, Schönbein first began to notice a distinctive odor in his laboratory. This smell gave Schönbein the clue to the presence of a new product from his experiments. Because of the pronounced smell, Schönbein coined the term "ozone" for the new gas, from the Greek word "ozein", meaning "to smell". Schönbein described his discoveries in publications in 1840. He later found that the smell of ozone was similar to that produced by the slow oxidation of white phosphorus.
The ozone smell Schönbein detected is the same as that occurring in the vicinity of lightning storms, an odor that indicates the presence of ozone in the atmosphere.
Explosives
Although his wife had forbidden him to do so, Schönbein occasionally experimented at home in the kitchen. One day in 1845, when his wife was away, he spilled a mixture of nitric acid and sulfuric acid. After using his wife's cotton apron to mop it up, he hung the apron over the stove to dry, only to find that the cloth spontaneously ignited and burned so quickly that it seemed to disappear. Schönbein, in fact, had converted the cellulose of the apron into nitrocellulose, with the nitro groups (added from the nitric acid) serving as an internal source of oxygen; when heated, the cellulose was completely and suddenly oxidized.
Schönbein recognized the possibilities of the new compound. Ordinary black gunpowder, which had reigned supreme on the battlefield for the past 500 years, exploded into thick smoke, blackening the gunners, fouling cannons and small arms, and obscuring the battlefield. Nitrocellulose was perceived as a possible "smokeless powder" and a propellant for artillery shells, and it thus received the name guncotton.
Attempts to manufacture guncotton for military use failed at first because the factories were prone to explode and, above all else, the burning speed of straight guncotton was always too high. It was not until 1884 that Paul Vieille tamed guncotton into a successful progressive smokeless gunpowder called Poudre B. Later on, in 1891, James Dewar and Frederick Augustus Abel also managed to transform gelatinized guncotton into a safe mixture, called cordite because it could be extruded into long thin cords before being dried.
Legacy
In 1990 an asteroid was named after him.
Selected writings
The Letters of Faraday and Schoenbein 1836-1862 London: Williams & Norgate 1899.
The Letters of Jöns Jakob Berzelius and Christian Friedrich Schönbein, 1830-1847 London: Williams & Norgate 1900.
See also
Timeline of hydrogen technologies
References
Further reading
Brown G. I. The Big Bang: A History of Explosives, Sutton Publishing; 1998 ()
External links
1799 births
1868 deaths
People from Metzingen
People from the Duchy of Württemberg
19th-century German chemists
Members of the French Academy of Sciences
Fellows of the Royal Society of Edinburgh
19th-century German inventors
19th-century Swiss inventors
Immigrants to Switzerland
Geochemists
Electrochemists
Academic staff of the University of Basel | Christian Friedrich Schönbein | [
"Chemistry"
] | 871 | [
"Geochemists",
"Electrochemistry",
"Electrochemists"
] |
335,864 | https://en.wikipedia.org/wiki/Restriction%20modification%20system | The restriction modification system (RM system) is found in bacteria and archaea, and provides a defense against foreign DNA, such as that borne by bacteriophages.
Bacteria have restriction enzymes, also called restriction endonucleases, which cleave double-stranded DNA at specific points into fragments, which are then degraded further by other endonucleases. This prevents infection by effectively destroying the foreign DNA introduced by an infectious agent (such as a bacteriophage). Approximately one-quarter of known bacteria possess RM systems and of those about one-half have more than one type of system.
As the sequences recognized by the restriction enzymes are very short, the bacterium itself will almost certainly contain some within its genome. In order to prevent destruction of its own DNA by the restriction enzymes, methyl groups are added. These modifications must not interfere with the DNA base-pairing, and therefore, usually only a few specific bases are modified on each strand.
Endonucleases cleave internal/non-terminal phosphodiester bonds. They do so only after recognising specific sequences in DNA which are usually 4–6 base pairs long, and often palindromic.
History
The RM system was first discovered by Salvatore Luria and Mary Human in 1952 and 1953. They found that a bacteriophage growing within an infected bacterium could be modified, so that upon its release and re-infection of a related bacterium, the bacteriophage's growth is restricted (inhibited; also described by Luria in his autobiography on pages 45 and 99 in 1984). In 1953, Jean Weigle and Giuseppe Bertani reported similar examples of host-controlled modification using a different bacteriophage system. Later work by Daisy Roulland-Dussoix and Werner Arber in 1962 and many other subsequent workers led to the understanding that restriction was due to attack and breakdown of the modified bacteriophage's DNA by specific enzymes of the recipient bacteria. Further work by Hamilton O. Smith isolated HinDII, the first of the class of enzymes now known as restriction enzymes, while Daniel Nathans showed that it can be used for restriction mapping. When these enzymes were isolated in the laboratory they could be used for controlled manipulation of DNA, thus providing the foundation for the development of genetic engineering. Werner Arber, Daniel Nathans, and Hamilton Smith were awarded the Nobel Prize in Physiology or Medicine in 1978 for their work on restriction-modification.
Types
There are four categories of restriction modification systems: type I, type II, type III and type IV. All have restriction enzyme activity and a methylase activity (except for type IV that has no methylase activity). They were named in the order of discovery, although the type II system is the most common.
Type I systems are the most complex, consisting of three polypeptides: R (restriction), M (modification), and S (specificity). The resulting complex can both cleave and methylate DNA. Both reactions require ATP, and cleavage often occurs a considerable distance from the recognition site. The S subunit determines the specificity of both restriction and methylation. Cleavage occurs at variable distances from the recognition sequence, so discrete bands are not easily visualized by gel electrophoresis.
Type II systems are the simplest and the most prevalent. Instead of working as a complex, the methyltransferase and endonuclease are encoded as two separate proteins and act independently (there is no specificity protein). Both proteins recognize the same recognition site, and therefore compete for activity. The methyltransferase acts as a monomer, methylating the duplex one strand at a time. The endonuclease acts as a homodimer, which facilitates the cleavage of both strands. Cleavage occurs at a defined position close to or within the recognition sequence, thus producing discrete fragments during gel electrophoresis. For this reason, Type II systems are used in labs for DNA analysis and gene cloning.
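As an illustration of why Type II enzymes lend themselves to laboratory DNA analysis, the following Python sketch (not from the original article) simulates a digest with EcoRI, a Type II enzyme that recognizes GAATTC and cuts between the G and the first A; the input sequence is invented and only one strand is represented.

def digest(sequence, site="GAATTC", cut_offset=1):
    # Return the fragments produced by cutting at every occurrence of the
    # recognition site, cut_offset bases into the site (EcoRI cuts G^AATTC).
    fragments, start = [], 0
    pos = sequence.find(site)
    while pos != -1:
        cut = pos + cut_offset
        fragments.append(sequence[start:cut])
        start = cut
        pos = sequence.find(site, pos + 1)
    fragments.append(sequence[start:])
    return fragments

dna = "ATCGGAATTCGGTTACGAATTCAA"   # invented sequence containing two EcoRI sites
for fragment in digest(dna):
    print(len(fragment), fragment)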
Type III systems have R (res) and M (mod) proteins that form a complex of modification and cleavage. The M protein, however, can methylate on its own. Methylation also occurs on only one strand of the DNA, unlike in most other known mechanisms. The heterodimer formed by the R and M proteins competes with itself by modifying and restricting the same substrate, which results in incomplete digestion.
Type IV systems are not true RM systems because they only contain a restriction enzyme and not a methylase. Unlike the other types, type IV restriction enzymes recognize and cut only modified DNA.
Function
Neisseria meningitidis has multiple type II restriction endonuclease systems that are employed in natural genetic transformation. Natural genetic transformation is a process by which a recipient bacterial cell can take up DNA from a neighboring donor bacterial cell and integrate this DNA into its genome by recombination. Although early work on restriction modification systems focused on the benefit to bacteria of protecting themselves against invading bacteriophage DNA or other foreign DNA, it is now known that these systems can also be used to restrict DNA introduced by natural transformation from other members of the same, or related species.
In the pathogenic bacterium Neisseria meningitidis (meningococci), competence for transformation is a highly evolved and complex process where multiple proteins at the bacterial surface, in the membranes and in the cytoplasm interact with the incoming transforming DNA. Restriction-modification systems are abundant in the genus Neisseria. N. meningitidis has multiple type II restriction endonuclease systems. The restriction modification systems in N. meningitidis vary in specificity between different clades. This specificity provides an efficient barrier against DNA exchange between clades. Luria, on page 99 of his autobiography, referred to such a restriction behavior as "an extreme instance of unfriendliness." Restriction-modification appears to be a major driver of sexual isolation and speciation in the meningococci. Caugant and Maiden suggested that restriction-modification systems in meningococci may act to allow genetic exchange among very close relatives while reducing (but not completely preventing) genetic exchange among meningococci belonging to different clonal complexes and related species.
RM systems can also act as selfish genetic elements, forcing their maintenance on the cell through postsegregational cell killing.
Some viruses have evolved ways of subverting the restriction modification system, usually by modifying their own DNA, by adding methyl or glycosyl groups to it, thus blocking the restriction enzymes. Other viruses, such as bacteriophages T3 and T7, encode proteins that inhibit the restriction enzymes.
To counteract these viruses, some bacteria have evolved restriction systems which only recognize and cleave modified DNA, but do not act upon the host's unmodified DNA. Some prokaryotes have developed multiple types of restriction modification systems.
R-M systems are more abundant in promiscuous species, wherein they establish preferential paths of genetic exchange within and between lineages with cognate R-M systems. Because the repertoire and/or specificity of R-M systems in bacterial lineages vary quickly, the preferential fluxes of genetic transfer within species are expected to constantly change, producing time-dependent networks of gene transfer.
Applications
Molecular biology
(a) Cloning: RM systems can be cloned into plasmids and selected because of the resistance provided by the methylation enzyme. Once the plasmid begins to replicate, the methylation enzyme will be produced and methylate the plasmid DNA, protecting it from a specific restriction enzyme.
(b) Restriction fragment length polymorphisms: Restriction enzymes are also used to analyse the composition of DNA with regard to the presence or absence of mutations that affect the REase cleavage specificity. When wild-type and mutant DNA are analysed by digestion with different REases, the gel-electrophoretic products vary in length, largely because mutant genes are not cleaved in the same pattern as the wild-type when mutations remove or alter the sequences recognized by the REases.
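A hypothetical illustration of this idea in Python (the sequences and the point mutation are invented): a mutation that destroys a recognition site merges two restriction fragments, so the wild-type and mutant length patterns, i.e. the gel band patterns, differ.

def cut_positions(seq, site="GAATTC"):
    # Start positions of every occurrence of the recognition site.
    positions, i = [], seq.find(site)
    while i != -1:
        positions.append(i)
        i = seq.find(site, i + 1)
    return positions

def band_pattern(seq, site="GAATTC"):
    # Fragment lengths after cutting once at the start of each site; the exact
    # cut position within the site is ignored, since only the pattern matters here.
    cuts = [0] + cut_positions(seq, site) + [len(seq)]
    return [b - a for a, b in zip(cuts, cuts[1:]) if b > a]

wild_type = "CCGAATTCTTAAGGGAATTCACGT"                  # two sites -> three fragments
mutant = wild_type.replace("GGGAATTCA", "GGTAATTCA")    # point mutation destroys the second site

print("wild type:", band_pattern(wild_type))
print("mutant:   ", band_pattern(mutant))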
Gene therapy
The bacterial R-M system has been proposed as a model for devising human anti-viral gene or genomic vaccines and therapies, since the R-M system serves an innate defense role in bacteria by restricting the tropism of bacteriophages. Research is ongoing on REases and ZFNs that can cleave the DNA of various human viruses, including HSV-2, high-risk HPVs and HIV-1, with the ultimate goal of inducing targeted mutagenesis and aberrations in human-infecting viruses. The human genome already contains remnants of retroviral genomes that have been inactivated and harnessed for self-gain. Indeed, the mechanisms for silencing active L1 genomic retroelements by the three prime repair exonuclease 1 (TREX1) and excision repair cross complementing 1 (ERCC) appear to mimic the action of RM-systems in bacteria, and the non-homologous end-joining (NHEJ) that follows the use of ZFN without a repair template.
A major advance is the creation of artificial restriction enzymes created by linking the FokI DNA cleavage domain with an array of DNA binding proteins or zinc finger arrays, denoted now as zinc finger nucleases (ZFNs). ZFNs are a powerful tool for host genome editing due to their enhanced sequence specificity. ZFNs work in pairs, their dimerization being mediated in situ through the FokI domain. Each zinc finger array (ZFA) is capable of recognizing 9–12 base pairs, making for 18–24 for the pair. A 5–7 bp spacer between the cleavage sites further enhances the specificity of ZFNs, making them a safe and more precise tool that can be applied in humans. A recent Phase I clinical trial of ZFNs for the targeted abolition of the CCR5 co-receptor for HIV-1 has been undertaken.
Relation with mobile genetic elements
R-M systems are major players in the co-evolutionary interaction between mobile genetic elements (MGEs) and their hosts. Genes encoding R-M systems have been reported to move between prokaryotic genomes within MGEs such as plasmids, prophages, insertion sequences/transposons, integrative conjugative elements (ICEs) and integrons. However, it was recently found that there are relatively few R-M systems in plasmids, some in prophages, and practically none in phages. On the other hand, all these MGEs encode a large number of solitary R-M genes, notably MTases. In light of this, it is likely that R-M mobility may be less dependent on MGEs and more dependent, for example, on the existence of small genomic integration hotspots. It is also possible that R-M systems frequently exploit other mechanisms such as natural transformation, vesicles, nanotubes, gene transfer agents or generalized transduction in order to move between genomes.
See also
Methylation
Restriction enzyme
References
Bacteriophages
Molecular biology
Immune system | Restriction modification system | [
"Chemistry",
"Biology"
] | 2,324 | [
"Biochemistry",
"Organ systems",
"Immune system",
"Molecular biology"
] |
336,050 | https://en.wikipedia.org/wiki/Paul%20Lauterbur | Paul Christian Lauterbur (May 6, 1929 – March 27, 2007) was an American chemist who shared the Nobel Prize in Physiology or Medicine in 2003 with Peter Mansfield for his work which made the development of magnetic resonance imaging (MRI) possible.
Lauterbur was a professor at Stony Brook University from 1963 until 1985, where he conducted his research for the development of the MRI. In 1985 he moved, along with his wife Joan, to the University of Illinois at Urbana-Champaign, where he was a professor for 22 years until his death in Urbana. He never stopped working with undergraduates on research, and he served as a professor of chemistry, with appointments in bioengineering, biophysics, the College of Medicine at Urbana-Champaign and computational biology at the Center for Advanced Study.
Early life
Lauterbur was of Luxembourgish ancestry. Born and raised in Sidney, Ohio, Lauterbur graduated from Sidney High School, where a new Chemistry, Physics, and Biology wing was dedicated in his honor. As a teenager, he built his own laboratory in the basement of his parents' house. His chemistry teacher at school understood that he enjoyed experimenting on his own, so the teacher allowed him to do his own experiments at the back of class.
When he was drafted into the United States Army in the 1950s, his superiors allowed him to spend his time working on an early nuclear magnetic resonance (NMR) machine; he had published four scientific papers by the time he left the Army. Paul became an atheist later on.
Education and career
Lauterbur received a BS in chemistry from the Case Institute of Technology, now part of Case Western Reserve University in Cleveland, Ohio, where he became a Brother of the Alpha Delta chapter of Phi Kappa Tau fraternity. He then went to work at the Mellon Institute laboratories of the Dow Corning Corporation, with a 2-year break to serve at the Army Chemical Center in Edgewood, Maryland. While working at the Mellon Institute he pursued graduate studies in chemistry at the University of Pittsburgh. Earning his PhD in 1962, the following year Lauterbur accepted a position as associate professor at Stony Brook University. As a visiting faculty member in chemistry at Stanford University during the 1969–1970 academic year, he undertook NMR-related research with the help of local businesses Syntex and Varian Associates. Lauterbur returned to Stony Brook, continuing there until 1985 when he moved to the University of Illinois.
The development of the MRI
Lauterbur credits the idea of the MRI to a brainstorm one day at a suburban Pittsburgh Eat'n Park Big Boy Restaurant, with the MRI's first model scribbled on a table napkin while he was a student and researcher at both the University of Pittsburgh and the Mellon Institute of Industrial Research. The further research that led to the Nobel Prize was performed at Stony Brook University in the 1970s.
The Nobel Prize in Physics in 1952, which went to Felix Bloch and Edward Purcell, was for the development of nuclear magnetic resonance (NMR), the scientific principle behind MRI. However, for decades magnetic resonance was used mainly for studying the chemical structure of substances. It wasn't until the 1970s with Lauterbur's and Mansfield's developments that NMR could be used to produce images of the body.
Lauterbur used the idea of Robert Gabillard (developed in his 1952 doctoral thesis) of introducing gradients in the magnetic field, which allow the origin of the radio waves emitted by the nuclei of the object of study to be determined. This spatial information allows two-dimensional pictures to be produced.
While Lauterbur conducted his work at Stony Brook, the best NMR machine on campus belonged to the chemistry department; he had to visit it at night to use it for experimentation and would carefully change the settings so that they would return to those of the chemists' as he left. The original MRI machine is located at the Chemistry building on the campus of Stony Brook University in Stony Brook, New York.
Some of the first images taken by Lauterbur included those of a 4-mm-diameter clam his daughter had collected on the beach at the Long Island Sound, green peppers and two test tubes of heavy water within a beaker of ordinary water; no other imaging technique in existence at that time could distinguish between two different kinds of water. This last achievement is particularly important as the human body consists mostly of water.
When Lauterbur first submitted his paper with his discoveries to Nature, the paper was rejected by the editors of the journal. Lauterbur persisted and requested them to review it again, upon which time it was published and is now acknowledged as a classic Nature paper. The Nature editors pointed out that the pictures accompanying the paper were too fuzzy, although they were the first images to show the difference between heavy water and ordinary water. Lauterbur said of the initial rejection: "You could write the entire history of science in the last 50 years in terms of papers rejected by Science or Nature."
Peter Mansfield of the University of Nottingham in the United Kingdom took Lauterbur's initial work another step further, replacing the slow (and artefact-prone) projection-reconstruction method used by Lauterbur's original technique with a method that used frequency and phase encoding by spatial gradients of the magnetic field. Owing to Larmor precession, a mathematical technique called a Fourier transformation could then be used to recover the desired image, greatly speeding up the imaging process.
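To make the Fourier-reconstruction step concrete, here is a purely illustrative NumPy sketch (not the historical method): a small test image stands in for the object, its two-dimensional Fourier transform plays the role of the frequency- and phase-encoded MR signal ("k-space"), and an inverse transform recovers the image.

import numpy as np

# Invented test object: a small rectangular "phantom" in a 64 x 64 field of view
obj = np.zeros((64, 64))
obj[24:40, 28:36] = 1.0

# Frequency and phase encoding effectively sample the object's 2D Fourier transform (k-space)
k_space = np.fft.fft2(obj)

# Image reconstruction: inverse 2D Fourier transform of the sampled k-space data
reconstruction = np.abs(np.fft.ifft2(k_space))

print("maximum reconstruction error:", np.max(np.abs(reconstruction - obj)))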
Lauterbur unsuccessfully attempted to file patents related to his work to commercialize the discovery. The State University of New York chose not to pursue patents, with the rationale that the expense would not pay off in the end. "The company that was in charge of such applications decided that it would not repay the expense of getting a patent. That turned out not to be a spectacularly good decision," Lauterbur said in 2003. He attempted to get the federal government to pay for an early prototype of the MRI machine for years in the 1970s, and the process took a decade. The University of Nottingham did file patents which later made Mansfield wealthy.
Nobel Prize
Lauterbur was awarded the Nobel Prize along with Mansfield in the fall of 2003. Controversy occurred when Raymond Damadian took out full-page ads in The New York Times, The Washington Post and The Los Angeles Times headlined "The Shameful Wrong That Must Be Righted" saying that the Nobel committee had not included him as a Prize winner alongside Lauterbur and Mansfield for his early work on the MRI. Damadian claimed that he discovered MRI and the two Nobel-winning scientists refined his technology.
The New York Times published an editorial saying that while scientists credit Damadian for holding an early patent in MRI technology, Lauterbur and Mansfield expanded upon Herman Carr's technique in order to produce first 2D and then 3D MR images. The editorial deems this to be worthy of a Nobel prize even though it states clearly in Alfred Nobel's will that prizes are not to be given out solely on the basis of improving an existing technology for commercial use. The newspaper then points out a few cases in which precursor discoveries had been awarded a Nobel, along with a few deserving cases in which they had not, such as those of Rosalind Franklin and Oswald Avery.
Death
Lauterbur died aged 77 in March 2007 of kidney disease at his home in Urbana, Illinois. University of Illinois Chancellor Richard Herman said, "Paul's influence is felt around the world every day, every time an MRI saves the life of a daughter or a son, a mother or a father."
Other awards and honors
Albert Lasker Award for Clinical Medical Research, 1984
General Motors Cancer Research Foundation Kettering Prize, 1985
Gairdner Foundation International Award, 1985
The Harvey Prize, 1986
National Medal of Science, 1987
National Medal of Technology, 1988, (with Raymond Damadian)
Bower Award, Franklin Institute of Philadelphia, 1990 (first recipient)
Carnegie Mellon Dickson Prize in Science in 1993.
NAS Award for Chemistry in Service to Society of the National Academy of Sciences, 2001
Charter member, Phi Kappa Tau Hall of Fame in 2006.
National Inventors Hall of Fame, class of 2007
Asteroid 255598 Paullauterbur, discovered by Italian amateur astronomer Silvano Casulli in 2006, was named in his honor. The official naming citation was published by the Minor Planet Center on 12 January 2017.
Honorary Degrees
Carnegie-Mellon University in Pittsburgh
University of Liège in Belgium
Nicolaus Copernicus University Medical School in Kraków, Poland
See also
Nobel Prize controversies
Luxembourg American
References
Further reading
Dawson, M. Joan. Paul Lauterbur and the Invention of MRI, Boston: MIT Press, 2013.
"Paul C. Lauterbur - Biographical". Nobelprize.org. Nobel Media AB.
External links
Paul C. Lauterbur, Genesis of the MRI (Magnetic Resonance Imaging) notebook, September 1971 (all pages freely available for download in variety of formats from Science History Institute Digital Collections at digital.sciencehistory.org)
Nobel Prize 2003 Press Release
University of Pittsburgh Medical School article on alumnus Lauterbur
Paul C. Lauterbur Patents
National Academy of Sciences Biographical Memoir
Nuclear magnetic resonance
1929 births
2007 deaths
American atheists
American biophysicists
20th-century American chemists
American Nobel laureates
American people of Luxembourgian descent
IEEE Medal of Honor recipients
Winners of the Heineken Prize
Members of the United States National Academy of Sciences
National Medal of Science laureates
National Medal of Technology recipients
Nobel laureates in Physiology or Medicine
People from Sidney, Ohio
University of Illinois Urbana-Champaign faculty
University of Pittsburgh alumni
United States Army personnel
Howard N. Potts Medal recipients
Recipients of the Lasker–DeBakey Clinical Medical Research Award
Kyoto laureates in Advanced Technology | Paul Lauterbur | [
"Physics",
"Chemistry"
] | 1,989 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
336,052 | https://en.wikipedia.org/wiki/Peter%20Mansfield | Sir Peter Mansfield (9 October 1933 – 8 February 2017) was an English physicist who was awarded the 2003 Nobel Prize in Physiology or Medicine, shared with Paul Lauterbur, for discoveries concerning Magnetic Resonance Imaging (MRI). Mansfield was a professor at the University of Nottingham.
Early life
Mansfield was born in Lambeth, London, on 9 October 1933, to Sidney George Mansfield (b. 1904, d. 1966) and Lillian Rose Mansfield (b. 1905, d. 1984; née Turner). Mansfield was the youngest of three sons; his elder brothers were Conrad (b. 1925) and Sidney (b. 1927).
Mansfield grew up in Camberwell. During World War II he was evacuated from London, initially to Sevenoaks and then twice to Torquay, Devon, where he was able to stay with the same family on both occasions. On returning to London after the war he was told by a school master to take the 11+ exam. Having never heard of the exam before, and having no time to prepare, Mansfield failed to gain a place at the local Grammar school. His mark was, however, high enough for him to go to a Central School in Peckham. At the age of 15 he was told by a careers teacher that science wasn't for him. He left school shortly afterwards to work as a printer's assistant.
At the age of 18, having developed an interest in rocketry, Mansfield took up a job with the Rocket Propulsion Department of the Ministry of Supply in Westcott, Buckinghamshire. Eighteen months later he was called up for National Service.
Education
After serving in the army for two years, Mansfield returned to Westcott and started studying for A-levels at night school. Two years later he was admitted to study physics at Queen Mary College, University of London.
Mansfield graduated with a BSc from Queen Mary in 1959. His final-year project, supervised by Jack Powles, was to construct a portable, transistor-based spectrometer to measure the Earth's magnetic field. Towards the end of this project Powles offered Mansfield a position in his NMR (Nuclear Magnetic Resonance) research group. Powles' interest was in studying molecular motion, mainly liquids. Mansfield's project was to build a pulsed NMR spectrometer to study solid polymer systems. He received his PhD in 1962; his thesis was titled Proton magnetic resonance relaxation in solids by transient methods.
Career
Following his PhD, Mansfield was invited to postdoctoral research with Charlie Slichter at the University of Illinois at Urbana–Champaign, where he carried out an NMR study of doped metals.
In 1964, Mansfield returned to England to take up a place as a lecturer at Nottingham University where he could continue his studies in multiple-pulse NMR. He was successively appointed Senior Lecturer in 1968 and Reader in 1970. During this period his team developed the MRI equipment with the help of grants from the Medical Research Council. It was not until the 1970s with Paul Lauterbur's and Mansfield's developments that NMR could be used to produce images of the body. In 1979 Mansfield was appointed Professor of the Department of Physics until his retirement in 1994.
1962: Research Associate, Department of Physics, University of Illinois
1964: Lecturer, Department of Physics, University of Nottingham
1968: Senior Lecturer, Department of Physics, University of Nottingham
1970: Reader, Department of Physics, University of Nottingham
1972–73: Senior Visitor, Max Planck Institute for Medical Research, Heidelberg
1979: Professor, Department of Physics, University of Nottingham
Mansfield is credited with inventing 'slice selection' for MRI - i.e. the method by which a localised axial slice of a subject can be selectively imaged, rather than the entire subject - and understanding how the radio signals from MRI can be mathematically analysed, making interpretation of the signals into a useful image a possibility. He is also credited with discovering how fast imaging could be possible by developing the MRI protocol called echo-planar imaging. Echo-planar imaging allows T2* weighted images to be collected many times faster than previously possible. It also has made functional magnetic resonance imaging (fMRI) feasible.
Whilst working at Nottingham University, Mansfield tested the first full-body prototype, installed just before Christmas 1978. Mansfield was so keen that he volunteered to test it himself and produced the first scan of a live patient. The prototype machine is now an exhibit in the Medical Section of the Science Museum.
Awards and honours
1983 Gold Medal of the Society of Magnetic Resonance in Medicine
1984 Joint award of the Royal Society Wellcome Foundation Gold Medal and Prize
1986 Elected Fellow of Queen Mary College (now Queen Mary University of London)
1987 Elected Fellow of the Royal Society (FRS)
1987 Elected President of the Society of Magnetic Resonance in Medicine
1988 Awarded Duddell Medal and Prize by the Institute of Physics
1988 Awarded Silvanus Thompson Medal by the British Institute of Radiology
1989 Antoine Béclère Medal from the International Society of Radiology and the Antoine Béclère Institute in Paris
1990 Royal Society Mullard Award (joint with John Mallard & Jim Hutchinson)
1992 International Society of Magnetic Resonance (ISMAR) prize (joint with P. Lauterbur)
1993 Knighted
1993 Silver Plaque of the European Society of Magnetic Resonance in Medicine and Biology
1993 Elected Honorary Fellow of the Royal College of Radiology and Honorary Member of the British Institute of Radiology
1994 Elected Honorary Member of the Society of Magnetic Resonance Imaging and Fellow of the Society of Magnetic Resonance
1995 Garmisch-Partenkirchen Prize for MRI
1995 Gold Medal of the European Congress of Radiology and the European Association of Radiology
1997 Honorary Fellow of the Institute of Physics
2003 Nobel Prize in Physiology or Medicine, shared with Paul Lauterbur
2009 Presented with the Lifetime Achievement Award by the Prime Minister, Gordon Brown, in a ceremony broadcast on ITV's Pride of Britain Awards
2016 Asteroid 262972 Petermansfield, discovered by astronomer Vincenzo Silvano Casulli in 2007, was named in his honour. The official naming citation was published by the Minor Planet Center on 22 April 2016.
Private life
Mansfield married Jean Margaret Kibble (b. 1935) on 1 September 1962. He had two daughters.
Mansfield died in Nottingham on 8 February 2017, aged 83.
References
1933 births
2017 deaths
Academics of the University of Nottingham
Alumni of Queen Mary University of London
English biophysicists
English inventors
English Nobel laureates
British Nobel laureates
English physicists
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Knights Bachelor
Nobel laureates in Physiology or Medicine
People from Peckham
Nuclear magnetic resonance
People from Camberwell | Peter Mansfield | [
"Physics",
"Chemistry"
] | 1,333 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
336,076 | https://en.wikipedia.org/wiki/Look%20and%20feel | In software design, the look and feel of a graphical user interface comprises aspects of its design, including elements such as colors, shapes, layout, and typefaces (the "look"), as well as the behavior of dynamic elements such as buttons, boxes, and menus (the "feel"). The term can also refer to aspects of a non-graphical user interface (such as a command-line interface), as well as to aspects of an API – mostly to parts of an API that are not related to its functional properties. The term is used in reference to both software and websites.
Look and feel applies to other products. In documentation, for example, it refers to the graphical layout (document size, color, font, etc.) and the writing style. In the context of equipment, it refers to consistency in controls and displays across a product line.
Look and feel in operating system user interfaces serves two general purposes. First, it provides branding, helping to identify a set of products from one company. Second, it increases ease of use, since users will become familiar with how one product functions (looks, reads, etc.) and can translate their experience to other products with the same look and feel.
In widget toolkits
Contrary to operating system user interfaces, for which look and feel is a part of the product identification, widget toolkits often allow users to specialize their application's look and feel, by deriving it from the default look and feel of the toolkit or by completely defining their own. This specialization can range from skinning (which deals only with the look, or visual appearance, of the graphical control elements) to completely specializing the way the user interacts with the software (that is, the feel).
The definition of the look and feel to associate with the application is often done at initialization, but some widget toolkits, such as the Swing widget toolkit that is part of the Java API, allow users to change the look and feel at runtime (see Pluggable look and feel).
Some examples of widget toolkits that support setting a specialized look and feel are:
XUL (XML User Interface Language): The look and feel of the user interface can be specialized in a CSS file associated with the XUL definition files. Properties that can be specialized from the default are, for example, background or foreground colors of widgets, fonts, size of widgets, and so on.
Swing supports specializing the look and feel of widgets by deriving from the default, another existing one, creating one from scratch, or, beginning with J2SE 5.0, in an XML property file called synth (skinnable look and feel).
Lawsuits
Some companies try to assert copyright of trade dress over their look and feel.
The Broderbund v. Unison (1986) case was an early software copyright case that attempted to apply U.S. copyright law to the look and feel presented by a software product.
In 1987 Lotus sued Paperback Software and Mosaic for copyright infringement, false and misleading advertising, and unfair competition over their low-cost clones of 1-2-3, VP Planner and Twin, and sued Borland over its Quattro spreadsheet.
In December 1989, Xerox sued Apple over the Macintosh copyright.
Apple Computer was notable for its use of the term look and feel in reference to their Mac OS operating system. The firm tried, with some success, to block other software developers from creating software that had a similar look and feel. Apple argued that they had a copyright claim on the look and feel of their software, and even went so far as to sue Microsoft, alleging that the Windows operating system was illegally copying their look and feel.
Although provoking a vehement reaction from some in the software community, and causing Richard Stallman to form the League for Programming Freedom, the expected landmark ruling never happened, as most of the issues were resolved based on a license that Apple had granted Microsoft for Windows 1.0. See: Apple v. Microsoft. The First Circuit Court of Appeals rejected a copyright claim on the feel of a user interface in Lotus v. Borland.
More recent reactions
In 2012 and 2014, Apple Inc. filed lawsuits against competing manufacturers of smartphones and tablet computers, claiming that those manufacturers copied the look and feel of Apple's popular iPhone and iPad products.
In APIs
An API, which is an interface to software providing some kind of functionality, can also have a certain look and feel. Different parts of an API (e.g. different classes or packages) are often linked by common syntactic and semantic conventions (e.g. by the same asynchronous execution model, or by the same way object attributes are accessed). These elements are rendered either explicitly (i.e. as part of the syntax of the API) or implicitly (i.e. as part of the semantics of the API).
See also
Design language
Lotus "look and feel" lawsuit
Skeuomorph
Structure, sequence and organization
Trade dress
References
External links
Java Look and Feel collection
The Java Tutorials: Modifying the Look and Feel
Usability
User interfaces
Graphical user interfaces
Graphical user interface elements
Legal research | Look and feel | [
"Technology"
] | 1,071 | [
"User interfaces",
"Interfaces",
"Components",
"Graphical user interface elements"
] |
336,103 | https://en.wikipedia.org/wiki/Messier%205 | Messier 5 or M5 (also designated NGC 5904) is a globular cluster in the constellation Serpens. It was discovered by Gottfried Kirch in 1702.
Discovery and visibility
M5 is, under extremely good conditions, just visible to the naked eye as a faint "star" 0.37 of a degree (22 arcminutes) north-west of the star 5 Serpentis. Binoculars and/or small telescopes resolve the object as non-stellar; larger telescopes will show some individual stars, some of which are as bright as apparent magnitude 10.6.
M5 was discovered by German astronomer Gottfried Kirch in 1702 when he was observing a comet. Charles Messier noted it in 1764 and, as a searcher for comets, catalogued it as one of his nebulae. William Herschel was the first to resolve individual stars in the cluster in 1791, counting roughly 200. Messier 5 is receding from the Solar System at a speed of over 50 km/s.
Notable features
Within M5, there are 105 known variable stars, 97 of them belonging to the RR Lyrae type. RR Lyrae stars, sometimes referred to as "Cluster Variables", are somewhat similar to Cepheid type variables and as such can be used as a tool to measure distances in outer space, since the relation between their luminosities and periods is well known. The brightest and most easily observed variable in M5 varies from magnitude 10.6 to 12.1 in a period of just under 26.5 days.
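As a rough, purely illustrative calculation (the magnitudes below are assumed, not taken from the article): because RR Lyrae stars have a nearly constant absolute magnitude, a distance follows from the distance modulus m − M = 5·log10(d) − 5, with d in parsecs.

m_apparent = 15.1   # assumed mean apparent magnitude of an RR Lyrae variable in the cluster
M_absolute = 0.6    # assumed absolute magnitude typical of RR Lyrae stars

# Distance modulus: m - M = 5*log10(d) - 5, solved for the distance d in parsecs
d_parsec = 10 ** ((m_apparent - M_absolute + 5) / 5)
print(f"distance ≈ {d_parsec:,.0f} pc ≈ {d_parsec * 3.26 / 1000:.1f} thousand light-years")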
The cluster contains two millisecond pulsars, one of which is in a binary, allowing the proper motion of the cluster to be measured. The binary could help our understanding of neutron degenerate matter; the current median mass, if confirmed, would exclude any "soft" equation of state for such matter. The cluster has been used to test for magnetic dipole moments in neutrinos, which could shed light on some hypothetical particles such as the axion.
A dwarf nova has also been observed in this cluster.
See also
List of Messier objects
References
External links
SIMBAD: M5
M5, SEDS Messier pages
M5, Galactic Globular Clusters Database page
Historic observations of M5
Image of M5 by Waid Observatory
| Messier 5 | [
"Astronomy"
] | 485 | [
"Constellations",
"Serpens"
] |
336,112 | https://en.wikipedia.org/wiki/Physikalisch-Technische%20Bundesanstalt | The Physikalisch-Technische Bundesanstalt (PTB) is the national metrology institute of the Federal Republic of Germany, with scientific and technical service tasks. It is a higher federal authority and a public-law institution directly under federal government control, without legal capacity, under the auspices of the Federal Ministry for Economic Affairs and Climate Action.
Tasks
Together with NIST in the USA and the NPL in Great Britain, PTB ranks among the leading metrology institutes in the world. As the National Metrology Institute of Germany, PTB is Germany's highest and only authority for correct and reliable measurements. The Units and Time Act (Bundesgesetzblatt, Federal Law Gazette, volume 2008, part I, No. 28, p. 1185 ff., 11 July 2008) assigns to PTB all tasks related to the realization and dissemination of the units. All legally relevant aspects regarding the units, as well as PTB's responsibilities, have been combined in this Act. Previously, all questions regarding the units as well as the role of PTB had been distributed among three laws: the Units Act, the Time Act, and the Verification Act.
PTB consists of nine technical-scientific divisions (two of them in Berlin), which are subdivided into approx. 60 departments. These again are subdivided into more than 200 working groups. PTB's tasks are as follows: the determination of fundamental and natural constants; the realization, maintenance and dissemination of the legal units of the SI; and safety technology. This spectrum of tasks is supplemented by services such as the German Calibration Service (Deutscher Kalibrierdienst, DKD) and by metrology for the area regulated by law, metrology for industry, and metrology for technology transfer. As the basis for its tasks, PTB conducts fundamental research and development in the field of metrology in close cooperation with universities, other research institutions, and industry. PTB employs approximately 1900 staff members and has a total budget of approx. €183 million; in 2012, an additional approx. €15 million was raised as third-party funds for research projects.
The Units and Time Act entrusts PTB also especially with the dissemination of legal time in Germany. To have a time basis for this, PTB operates several atomic clocks (currently two cesium clocks and, since 1999 and 2009, respectively, two cesium fountain clocks). By order of PTB, the synchronization of clocks via radio is performed via the time signal transmitter DCF77 operated by Media Broadcast. Computers which are connected to the Internet can obtain the time also via the three public NTP time servers operated by PTB.
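As an illustration of obtaining time over the Internet from Python, the sketch below uses the third-party ntplib package; the host name ptbtime1.ptb.de is the commonly cited first PTB time server but should be treated here as an assumption, as should the rest of the snippet, which is not part of the original text.

import ntplib                               # third-party package, e.g. installed via pip
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request("ptbtime1.ptb.de", version=3)   # assumed public PTB NTP server

server_time = datetime.fromtimestamp(response.tx_time, tz=timezone.utc)
print("PTB server time (UTC):", server_time.isoformat())
print("offset to local clock:", f"{response.offset:+.3f} s")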
In Berlin-Adlershof, PTB operates the MLS (Metrology Light Source) electron storage ring for calibrations in the field from the infrared (THz) to the extreme ultraviolet (EUV).
Department Q.5 "Technical Cooperation" carries out projects of German and international development cooperation in the field of quality infrastructure. These activities promote competitiveness as well as environmental protection and consumer protection in developing countries and in countries in transition. One of the tasks of PTB's "Metrological Information Technology" Department, in accordance with the German Gambling Ordinance (§ 11 ff. SpielV), is to grant type approvals for gaming machines which offer the possibility of making winnings. Also, according to the Federal Ordinance on Voting Machines, PTB is in charge of the type approval of voting computers. This is, however, moot, as in a judgment of 3 March 2009 the Federal Constitutional Court declared the use of such voting machines to be inadmissible.
Weapons which may be carried with the Minor Firearms Certificate, i.e. weapons for shooting blanks or irritants and weapons used as signaling devices, require a PTB test mark for their approval. Occasionally, these weapons are also jointly referred to as "PTB weapons" and bear the PTA or PTB proof mark F (see also: Act on the Proof Testing of Arms and Ammunition).
Sites and structure
The main site of PTB is in Braunschweig (Lehndorf-Watenbüttel). Other sites are in Berlin-Charlottenburg and Berlin-Adlershof. Divisions 1 to 6 as well as Division Q are located in Braunschweig. In Berlin-Charlottenburg Divisions 7 and 8 are located, and in Berlin-Adlershof the two electron storage rings BESSY II and the Metrology Light Source (MLS); the latter is located in the Willy Wien Laboratory.
PTB is headed by the Presidential Board in Braunschweig, which is composed of the President, the Vice-President and a further member. Another executive committee is the Directors' Conference, with the Presidential Board and the Heads of the Divisions as members. PTB is advised by its Advisory Board, which is composed of representatives from science, the economy and politics.
PTB is composed of the following nine divisions:
Division 1: Mechanics and Acoustics (site: Braunschweig) with the following departments: Mass, Solid Mechanics, Velocity, Gas Flow, Liquid Flow, Sound, Acoustics and Dynamics
Division 2: Electricity (site: Braunschweig) with the following departments: Direct Current and Low Frequency, High Frequency and Electromagnetic Fields, Electrical Energy Measuring Techniques, Quantum Electronics, Semiconductor Physics and Magnetism, Quantum Electrical Metrology
Division 3: Chemical Physics and Explosion Protection (site: Braunschweig) with the following departments: Metrology in Chemistry, Analytics and Thermodynamic State Behavior of Gases, Thermophysical Quantities, Physical Chemistry, Explosion Protection in Energy Technology, Explosion Protection in Sensor Technology and Instrumentation, Fundamentals of Explosion Protection
Division 4: Optics (site: Braunschweig) with the following departments: Photometry and Applied Radiometry, Imaging and Wave Optics, Quantum Optics and Unit of Length, Time and Frequency
Division 5: Precision Engineering (site: Braunschweig) with the following departments: Surface Metrology, Dimensional Nanometrology, Coordinate Metrology, Interferometry on Material Measures, Scientific Instrumentation Department
Division 6: Ionizing Radiation (site: Braunschweig) with the following departments: Radioactivity, Dosimetry for Radiation Therapy and Diagnostic Radiology, Radiation Protection Dosimetry, Ion and Neutron Radiation, Fundamentals of Dosimetry, Operational Radiation Protection
Division 7: Temperature and Synchrotron Radiation (site: Berlin-Charlottenburg and Adlershof) with the following departments: Radiometry with Synchrotron Radiation, Cryophysics and Spectrometry, Detector Radiometry and Radiation Thermometry, Temperature, Heat and Vacuum
Division 8: Medical Physics and Metrological Information Technology (site: Berlin-Charlottenburg) with the following departments: Medical Metrology, Biosignals, Biomedical Optics, Mathematical Modeling and Data Analysis, Metrological Information Technology
The Presidential Staff Office and the Press and Information Office as well as the Divisions Z (Administrative Services) and Q (Scientific-technical Cross-sectional Tasks) report directly to the Presidential Board. Division Q comprises, among other things, the Academic Library, the Legal Metrology and Technology Transfer Departments, the Technical Services, and the Technical Cooperation Department.
History
Two essential factors led to the founding of the Physikalisch-Technische Reichsanstalt (Imperial Physical Technical Institute – PTR): the definition of internationally valid, uniform measures in the Meter Convention of 1875 and the dynamic industrial development in Germany in the 19th century. As early as the Franco-Prussian War (1870/71), the stagnation of scientific mechanics and instrument making in Germany had become evident. Increasingly precise metrology was required for industrial production. The initiative to found a state institute for metrology, intended to promote the national interests of the economy, of trade and of the military, was driven in particular by the emerging electrical industry under the direction of the inventor and industrialist Werner von Siemens. In contrast to the units of length and weight, no recognized methods and standards existed at that time in the field of electrical metrology. The lack of reliable and verifiable measurement methods for the realization of electrical (and other) measurement units was a pressing scientific and economic problem.
In 1872, a number of Prussian natural scientists joined forces and called for the establishment of a state institute to solve this problem, arguing that such a task was scientifically too ambitious for industrial laboratories – and, in addition, not profitable for them – and that classical training institutes were not suited for it either. Among the supporters of the "Schellbach Memorandum" (named after its author Karl Heinrich Schellbach) were, among others, Hermann von Helmholtz and the mathematician and physicist Wilhelm Foerster. Prussia, however, initially rejected their demands.
Not until some years later were Werner von Siemens and Hermann von Helmholtz, the "founding fathers" of the PTR, able to realize their vision of a research institute that would link scientific, technical and industrial interests in an optimal way. Finally, on 28 March 1887, the Imperial Diet approved the first annual budget of the PTR – the founding of the first state-financed, university-external, major research institution in Germany, which combined free fundamental research with services for industry. Werner von Siemens ceded private land in Berlin-Charlottenburg to the Reichsanstalt. Hermann von Helmholtz became its first president. At that time, 65 persons were employed at the PTR – among them more than a dozen physicists – with a budget of 263,000 marks at their disposal. In its first decades, the PTR succeeded in attracting important scientists as employees and as members of the Kuratorium, among them Wilhelm Wien, Friedrich Kohlrausch, Walther Nernst, Emil Warburg, Walther Bothe, Albert Einstein and Max Planck.
Birth of quantum physics
The first outstanding scientific achievement at the PTR was also closely connected with Max Planck. To decide whether electricity or gas would be more economical for street lighting in Berlin, the PTR was to develop a more precise standard for luminous intensity. For this purpose, in 1895, Otto Lummer and Wilhelm Wien developed the first cavity radiator for the practical generation of thermal radiation. Their measurements of the spectrum of black-body radiation were so precise that they contradicted Wien's radiation law at long wavelengths, causing one of the cornerstones of the classical physics of the time to totter. The measurements gave Max Planck the decisive impulse to divide thermal radiation – in an "act of despair", as he later put it – into discrete portions. This was the birth of quantum physics.
New structure and new physics
In 1914, the PTR President Emil Warburg discontinued the subdivision into a physical and a technical division and restructured the PTR into divisions for optics, electricity and heat, each with sub-divisions of a purely scientific and a technical nature. Under Warburg's successor Walther Nernst, the Reichsanstalt für Maß und Gewicht (Imperial Institute for Weights and Measures – RMG) was, in addition, integrated into the PTR. A newly established division took over from the RMG extensive tasks relating to the verification system as well as the measurements of length, weight and volume associated with it. The profile of tasks was thus similar to that of PTB today: through its own research and development, and through services building on this, the PTR was to ensure the uniformity of metrology and its continuous further development. In terms of content, the PTR was dedicated at that time to the so-called New Physics. This included, among other things, research on the newly discovered X-rays, new atomic models, Einstein's special theory of relativity, quantum physics (based on the already mentioned work on the black-body radiator), and the investigation of the properties of the electron. Scientists like Hans Geiger, who established the first radioactivity laboratory of the PTR, were involved in this research work. Walther Meißner succeeded in liquefying helium, which led him to the discovery of the superconductivity of a series of metals. In this connection, he recognized some years later – together with his colleague Robert Ochsenfeld – that superconductors expel an externally applied magnetic field from their interior – the Meißner–Ochsenfeld effect.
Nazi Germany
With the appointment of Johannes Stark as president on 1 May 1933, the ideology of National Socialism found its way into the PTR. A convinced advocate of "German Physics", Stark terminated various research projects on topics of modern physics which he referred to as "Jewish", among them, in particular, work on quantum physics and on the theory of relativity. Stark also tried to enforce the "Führer principle" (Führerprinzip) at the PTR: in 1935, he dissolved the Kuratorium and took over its competences himself. Jewish employees and critics of the NSDAP (such as Max von Laue) were dismissed. After World War II, von Laue participated in the re-founding of PTB. Albert Einstein, who had been expelled from the Kuratorium even before its dissolution, broke off ties with the PTR/PTB.
Under Stark and – from 1939 – under his successor Abraham Esau, the PTR devoted itself strongly to armament research. A newly founded laboratory for acoustics was to investigate not only general but above all military fields of application, including the acoustic location of artillery, the military use of ultrasound and the development of decoding procedures. In addition, PTR researchers developed acoustic mines and a guidance system for torpedoes that homed in on the sound field of moving ships. Owing to its classical metrological tasks, the PTR was also closely tied to the armament industry of the Third Reich: since exact measures are a basic requirement for the manufacture of military equipment, the PTR gained a key role in armament production and defense. The extent to which the PTR was also involved in the German nuclear weapons project is controversial. It is, however, known that – prior to his time as PTR president – Abraham Esau led a group of researchers working on nuclear fission until 1939. Later, he took over the specialist area "nuclear fission" in the Reich Research Council, which from spring 1942 supervised the German uranium project. Shortly afterwards, Hermann Göring placed the working group led by the former PTR physicist Kurt Diebner under Division V for atomic physics at the PTR. Esau received the title "Authorized Representative of the Reichsmarschall for Nuclear Physics", a post which he ceded to Walther Gerlach at the end of 1943.
To escape the bombing raids of the allies, the PTR was, in 1943, relocated at the initiative of the president and Thuringian privy councillor Abraham Esau to different places in Germany (for example to Weida and Ronneburg in Thuringia and to Bad Warmbrunn in Lower Silesia). During the attacks on Berlin, the buildings of the PTR were heavily damaged. In 1945, the Reichsanstalt was virtually destroyed and the few departments which still existed were scattered all over the country.
Re-founding of PTB in Braunschweig and other PTR successors
From about 1947 onwards, successor institutes developed alongside the PTR in Berlin-Charlottenburg: one in East Berlin for the Soviet Occupation Zone, and one in the Bizone (later Trizone). With the benevolent support of the British Military Government, parts of the old Reichsanstalt were established in Braunschweig. The idea for this re-founding had been developed by the former PTR adviser for theoretical physics, Max von Laue, during his internment in Farm Hall. In 1947, he succeeded in convincing the British authorities to make the former Luftfahrtforschungsanstalt (Aeronautical Research Institute) in Völkenrode near Braunschweig available to the PTR successor. In 1948, Wilhelm Kösters, who had been the director of Division 1 in Berlin for many years, became its first president. Many former PTR employees from Berlin, Weida and Heidelberg followed him to Braunschweig. The new institute was named Physikalisch-Technische Anstalt (PTA) and, from 1 April 1950, Physikalisch-Technische Bundesanstalt. In 1953, the West Berlin PTR was integrated into this institute as the "Berlin Institute", respecting the four-power status of Berlin.
In the German Democratic Republic (GDR), the Deutsches Amt für Maß und Gewicht (DAMG) had established itself, with its principal seat in Berlin. After several renamings, this institute was designated Amt für Standardisierung, Meßwesen und Warenprüfung (Office for Standardization, Metrology and Quality Control – ASMW) during the last GDR years; the name already indicates that this office had more extensive tasks than PTB in the Federal Republic of Germany (FRG), namely additional tasks in the field of standardization and quality assurance and in the area of activity of the Bundesanstalt für Materialforschung und -prüfung (BAM).
Growth and reunification
The young PTB grew rapidly in the years after its founding – both in terms of staff and in terms of financial resources. Not only was its scientific metrological profile extended, but also its range of services to industry, in particular calibrations of measuring instruments. In the 1970s, this led to the founding of the Deutscher Kalibrierdienst (German Calibration Service), which delegated service tasks to accredited, privately run laboratories and allowed PTB to concentrate on more demanding measurement tasks.
From 1967 to 1995, PTB operated the Experimental and Research Reactor Braunschweig. This reactor served in particular as a neutron source for fundamental research, not for the investigation of nuclear energy. PTB dealt with the latter, controversial subject from 1977 to 1989, above all because the task of "long-term management and disposal of radioactive waste" had been assigned to it. This field of work later passed to the Bundesamt für Strahlenschutz (Federal Office for Radiation Protection) after the latter had been newly established. Today, PTB's Division 6 deals with ionizing radiation in general. This also includes a highly sensitive trace survey station for radionuclides which has been measuring radioactive substances in ground-level air for more than 50 years.
The "Wende" ("political change") in Germany in 1990 also led to a "reunification in metrology". PTB took over parts of the ASMW (Office for Standardization, Metrology and Quality Control of the former German Democratic Republic), among them 400 employees, and the site Berlin-Friedrichshagen as additional field office (this has since been given up again). Other parts of the ASMW were integrated into the BAM. Despite a phase of staff reductions – after the strong expansion following reunification – PTB ranks today among the largest national metrology institutes in the world. As such, it is in charge of the realization and dissemination of the physical units and promotes the worldwide uniformity of metrology.
Journals
The PTB magazine maßstäbe, which is published approximately once a year, can be subscribed to free of charge or downloaded from PTB's website. It contains articles about the quantities of physics, intended to be generally understandable and informative for the general public.
In addition, PTB publishes the scientific information bulletin PTB-news three times a year. On four pages, it contains news from the fields of work "Fundamentals of Metrology", "Applied Metrology for Industry", "Medicine and Environmental Protection", "Metrology for Society" and "International Affairs". The PTB-news are published in German and in English.
PTB-Mitteilungen is the metrological specialist journal and the official information bulletin of PTB. It is published four times a year and contains original scientific articles as well as overview articles on metrological subjects from PTB's fields of activity. Each volume focuses on a main topic. As an official information bulletin, the journal stands in a long tradition which goes back to the beginnings of the Physikalisch-Technische Reichsanstalt (Imperial Technical Physical Institute - PTR, founded in 1887). Until 2014, "PTB-Mitteilungen" was also the official bulletin in which the type approvals granted by PTB as well as the tests and conformity assessments carried out by PTB were published in a section of its own [named "Amtliche Bekanntmachungen" ("Official Notes")]. With the new Measures and Verification Act which has been in force since 1 January 2015 and with the new Measures and Verification Ordinance, there is no longer a legal basis for these notices. From 2015 onwards, "PTB-Mitteilungen" is, therefore, a purely metrological specialist journal and does not publish any "Official Notes" any more.
Presidents
Presidents of PTB and of the Physikalisch-Technische Reichsanstalt Berlin-Charlottenburg:
1888–1894: Hermann von Helmholtz, founding president
1895–1905: Friedrich Kohlrausch
1905–1922: Emil Warburg
1922–1924: Walther Nernst
1924–1933: Friedrich Paschen
1933–1939: Johannes Stark
1939–1945: Abraham Esau
1945: (for a short time, until the dissolution of the PTR)
1947: (temporary director of the re-founded PTB in Braunschweig)
1948–1950:
1951–1961:
1961–1969:
1970–1975:
1975–1995:
1995–2011:
2012-2022:
2022–present: Cornelia Denz
Employees
Employees of PTR and PTB were, among others: Udo Adelsberger, Walther Bothe, Kurt Diebner, Gerhard Wilhelm Becker, Ernst Engelhard, Abraham Esau, Ernst Gehrcke, Hans Geiger, Werner Gitt, Eugen Goldstein, Ernst Carl Adolph Gumlich, Hermann von Helmholtz, Fritz Hennin, Friedrich Georg Houtermans, Max Jakob, Hellmut Keiter, Dieter Kind, Hans Otto Kneser, Friedrich Wilhelm Kohlrausch, Wilhelm Kösters, Bernhard Anton Ernst Kramer, Johannes Kramer, August Kundt, Max von Laue, Carl von Linde, Leopold Loewenherz, Otto Lummer, Walter Meidinger, Walther Meißner, Franz Mylius, Walther Hermann Nernst, Robert Ochsenfeld, Friedrich Paschen, Matthias Scheffler, Adolf Scheibe, Harald Schering, Reinhard Scherm, Johannes Stark, Ulrich Stille, Ida Tacke, Gotthold Richard Vieweg, Richard Wachsmuth, Emil Warburg, Wilhelm Wien.
Similar organisations
Eidgenössisches Institut für Metrologie (Switzerland)
Bundesamt für Eich- und Vermessungswesen (Austria)
National Measurement Institute, Australia (Australia)
National Physical Laboratory (NPL) (UK)
National Institute of Standards and Technology (formerly: "National Bureau of Standards") (USA)
International Bureau of Weights and Measures, Paris (Bureau International des Poids et Mesures, BIPM)
References
Literature
Hermann von Helmholtz: Zählen und Messen, erkenntnistheoretisch betrachtet. Original publication in: Philosophische Aufsätze, Eduard Zeller zu seinem fünfzigjährigen Doctorjubiläum gewidmet (dedicated to Eduard Zeller on the occasion of the 50th anniversary of his doctorate). Leipzig 1887. Fues' Verlag. pp. 17–52. Digital edition: Heidelberg University Library, Heidelberg, 2010.
Johannes Stark (editor): Forschung und Prüfung. 50 Jahre Physikalisch-Technische Reichsanstalt. S. Hirzel, Leipzig 1937.
H. Moser (editor): Forschung und Prüfung. 75 Jahre Physikalisch-Technische Bundesanstalt/Reichsanstalt. Vieweg, Braunschweig 1962.
Jürgen Bortfeld, W. Hauser, Helmut Rechenberg (Ed.): 100 Jahre Physikalisch-Technische Reichsanstalt/Bundesanstalt 1887–1987. (= Forschen – Messen – Prüfen. Vol. 1) Braunschweig 1987, .
David Cahan: Meister der Messung. Die Physikalisch-Technische Reichsanstalt im Deutschen Kaiserreich. Wirtschaftsverlag NW, Bremerhaven 2011, .
Ulrich Kern: Forschung und Präzisionsmessung. Die Physikalisch-Technische Reichsanstalt zwischen 1918 und 1948. Wirtschaftsverlag NW, Bremerhaven 2011, .
Dieter Kind: Herausforderung Metrologie. Die Physikalisch-Technische Bundesanstalt und die Entwicklung seit 1945. in: Forschen – Messen – Prüfen. Wirtschaftsverlag, Bremerhaven 2002, .
Rudolf Huebener, Heinz Lübbig: A Focus of Discoveries. World Scientific, Singapur 2008, .
Rudolf Huebener, Heinz Lübbig: Die Physikalisch-Technische Reichsanstalt. Ihre Bedeutung beim Aufbau der modernen Physik. Vieweg+Teubner, Wiesbaden 2011, .
Brigitte Jacob, Wolfgang Schäche, Norbert Szymanski: Bauten für die Wissenschaft – 125 Jahre Physikalisch-Technische Reichsanstalt/Bundesanstalt in Berlin-Charlottenburg 1887–2012. JOVIS Verlag, Berlin 2012, .
Imke Frischmuth, Jens Simon (Eds.): A Metrological Textbook. The Art of Measuring at PTB – in the Past, Present and Future. Wirtschaftsverlag NW, Bremerhaven 2012, .
External links
Indication of the atomic time of the Physikalisch-Technische Bundesanstalt
'maßstäbe', the popular science magazine of PTB
German federal agencies
Research institutes in Lower Saxony
Standards organisations in Germany
Organisations based in Braunschweig
1887 establishments in Germany
Radiation protection organizations | Physikalisch-Technische Bundesanstalt | [
"Engineering"
] | 5,556 | [
"Nuclear organizations",
"Radiation protection organizations"
] |
336,123 | https://en.wikipedia.org/wiki/Dumbbell%20Nebula | The Dumbbell Nebula (also known as the Apple Core Nebula, Messier 27, and NGC 6853) is a planetary nebula (nebulosity surrounding a white dwarf) in the constellation Vulpecula, at a distance of about 1360 light-years. It was the first such nebula to be discovered, by Charles Messier in 1764. At its brightness of visual magnitude 7.5 and diameter of about 8 arcminutes, it is easily visible in binoculars and is a popular observing target in amateur telescopes.
The Dumbbell Nebula appears shaped like a prolate spheroid and is viewed from our perspective along the plane of its equator. In 1992, Moreno-Corral et al. computed that its angular rate of expansion, viewed from our distance, was no more than (″) per century. From this, an upper limit to the age of 14,600 years may be determined. In 1970, Bohuski, Smith, and Weedman found an expansion velocity of . Given its semi-minor axis radius of , this implies that the kinematic age of the nebula is 9,800 years.
Like many nearby planetary nebulae, the Dumbbell contains knots. Its central region is marked by a pattern of dark and bright cusped knots and their associated dark tails (see picture). The knots vary in appearance from symmetric objects with tails to rather irregular tail-less objects. Similarly to the Helix Nebula and the Eskimo Nebula, the heads of the knots have bright cusps which are local photoionization fronts.
The central star, a white dwarf progenitor, is estimated to have a radius of (0.13 light seconds), which makes it larger than most other known white dwarfs. Its mass was estimated in 1999 by Napiwotzki to be .
Gallery
The Dumbbell nebula can be easily seen in binoculars in a dark sky, just above the small constellation of Sagitta.
See also
Messier object
List of Messier objects
List of planetary nebulae
New General Catalogue
Notes
Radius = distance × sin(angular size / 2) = * sin(8′.0 / 2) = ly
Semi minor axis = distance × sin(minor axis size / 2) = × sin(5′.6 / 2) = ly
Kinematic age = semi-minor axis / expansion rate = ly / 31 km/s = / 31 km/s = s = yr
7.5 apparent magnitude - 5 × (log10( distance) - 1) = absolute magnitude
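The calculations outlined in the notes above can be sketched numerically. The following Python fragment is an illustrative reconstruction that uses the figures quoted in this article (distance of about 1360 light-years, angular diameter 8.0′, minor axis 5.6′, expansion velocity 31 km/s, apparent magnitude 7.5); the exact outputs therefore depend on which distance estimate is adopted:

import math

LY_IN_KM = 9.4607e12              # kilometres per light-year
SECONDS_PER_YEAR = 3.156e7

distance_ly = 1360.0              # adopted distance
ang_diameter_arcmin = 8.0         # apparent angular diameter
minor_axis_arcmin = 5.6           # apparent minor axis
expansion_km_s = 31.0             # expansion velocity
apparent_mag = 7.5

def arcmin_to_rad(a):
    return math.radians(a / 60.0)

# Physical size: r = d * sin(theta / 2)
radius_ly = distance_ly * math.sin(arcmin_to_rad(ang_diameter_arcmin) / 2)
semi_minor_ly = distance_ly * math.sin(arcmin_to_rad(minor_axis_arcmin) / 2)

# Kinematic age: semi-minor axis divided by the expansion velocity
age_years = semi_minor_ly * LY_IN_KM / expansion_km_s / SECONDS_PER_YEAR

# Absolute magnitude: M = m - 5 * (log10(d_pc) - 1)
distance_pc = distance_ly / 3.2616
absolute_mag = apparent_mag - 5 * (math.log10(distance_pc) - 1)

print(f"radius ≈ {radius_ly:.2f} ly, semi-minor axis ≈ {semi_minor_ly:.2f} ly")
print(f"kinematic age ≈ {age_years:,.0f} yr, absolute magnitude ≈ {absolute_mag:.1f}")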
References
External links
SEDS: Messier Object 27
M27 on astro-pics.com
M27
Planetary nebulae
Vulpecula
Messier objects
NGC objects
Orion–Cygnus Arm
17640712
Discoveries by Charles Messier | Dumbbell Nebula | [
"Astronomy"
] | 569 | [
"Vulpecula",
"Constellations"
] |
336,128 | https://en.wikipedia.org/wiki/Messier%20107 | Messier 107 or M107, also known as NGC 6171 or the Crucifix Cluster, is a very loose globular cluster in a very mildly southern part of the sky close to the equator in Ophiuchus, and is the last such object in the Messier Catalogue.
Observational history, namings and guide
It was discovered by Pierre Méchain in April 1782, then independently by William Herschel in 1793. Herschel's son, John, in his 1864 General Catalogue, described it as a "globular cluster of stars, large, very rich, very much compressed, round, well resolved, clearly consisting of stars". It was not until 1947 that Helen Sawyer Hogg added it and three other objects found by Méchain to the modern Catalogue, the latter having contributed several of the suggested observation objects which Messier had verified and added. The cluster is to be found 2.5° south and slightly west of the star Zeta Ophiuchi.
Properties
M107 lies close to the galactic plane, about 20,900 light-years from Earth and from the Galactic Center. Its orbit takes it partly out into the galactic halo; its distance from the Galactic Center varies between a lower and an upper figure, and at the lower figure, the "perigalactic distance", it enters and leaves the galactic bar.
It is an Oosterhoff type I cluster with a metallicity of −0.95 and it conforms with the bulk of the halo population. There are 22 known RR Lyrae variable stars in this cluster and a probable SX Phoenicis variable.
Gallery
See also
List of Messier objects
References and footnotes
External links
SEDS: Globular Cluster M107
Messier 107, Galactic Globular Clusters Database page
Globular clusters
Ophiuchus
107
Messier 107
Astronomical objects discovered in 1782
Discoveries by Pierre Méchain | Messier 107 | [
"Astronomy"
] | 385 | [
"Ophiuchus",
"Constellations"
] |
336,138 | https://en.wikipedia.org/wiki/Amphidromic%20point | An amphidromic point, also called a tidal node, is a geographical location where there is little or no difference in sea height between high tide and low tide; it has zero tidal amplitude for one harmonic constituent of the tide. The tidal range (the peak-to-peak amplitude, or the height difference between high tide and low tide) for that harmonic constituent increases with distance from this point, though not uniformly. As such, the concept of amphidromic points is crucial to understanding tidal behaviour. The term derives from the Greek words amphi ("around") and dromos ("running"), referring to the rotary tides which circulate around amphidromic points. It was first discovered by William Whewell, who extrapolated the cotidal lines from the coast of the North Sea and found that the lines must meet at some point.
Amphidromic points occur because interference within oceanic basins, seas and bays, combined with the Coriolis effect, creates a wave pattern — called an amphidromic system — which rotates around the amphidromic point. At the amphidromic points of the dominant tidal constituent, there is almost no vertical change in sea level from tidal action; that is, there is little or no difference between high tide and low tide at these locations. There can still be tidal currents since the water levels on either side of the amphidromic point are not the same. A separate amphidromic system is created by each periodic tidal component.
In most locations the "principal lunar semi-diurnal" constituent, known as M2, is the largest tidal constituent. Cotidal lines connect points which reach high tide at the same time and low tide at the same time. In Figure 1, high and low tide on each cotidal line lag or lead those on its neighboring lines by 1 hr 2 min. Where the lines meet are amphidromes, and the tide rotates around them; for example, along the Chilean coast, and from southern Mexico to Peru, the tide propagates southward, while from Baja California to Alaska the tide propagates northward.
Formation of amphidromic points
Tides are generated as a result of the gravitational attraction of the Sun and Moon. This gravitational attraction results in a tidal force that acts on the ocean. The ocean reacts to this external forcing by generating waves; of particular relevance for describing tidal behaviour are Kelvin waves and Poincaré waves (also known as Sverdrup waves). These tidal waves can be considered wide relative to the Rossby radius of deformation (~3000 km in the open ocean) and shallow, as the water depth in the ocean (D, on average ~4 km) is much smaller than the wavelength (λ), which is of the order of thousands of kilometres (i.e. D/λ < 1/20).
In real oceans, the tides cannot endlessly propagate as progressive waves. The waves reflect due to changes in water depth (for example when entering shelf seas) and at coastal boundaries. The result is a reflected wave that propagates in the opposite direction to the incident wave. The combination of the reflected wave and the incident wave is the total wave. Due to resonance between the reflected and the incident wave, the amplitude of the total wave can either be suppressed or amplified. The points at which the two waves amplify each other are known as antinodes and the points at which the two waves cancel each other out are known as nodes. Figure 2 shows a ¼λ resonator. The first node is located at ¼λ of the total wave, with the next node recurring ½λ farther along, at ¾λ.
A long, progressive wave travelling in a channel on a rotating Earth behaves differently from a wave travelling along a non-rotating channel. Due to the Coriolis force, the water in the ocean is deflected towards the right in the northern hemisphere and towards the left in the southern hemisphere. This sideways component of the flow due to the Coriolis force causes a build-up of water that results in a pressure gradient. The resulting slope develops until it is in equilibrium with the Coriolis force, resulting in geostrophic balance. As a result of this geostrophic balance, Kelvin waves (originally described by Lord Kelvin) and Poincaré waves are generated. The amplitude of a Kelvin wave is highest near the coast and, for a wave on the northern hemisphere, decreases further away from its right-hand coastal boundary. Kelvin waves always propagate alongshore, and their amplitude falls off away from the coast over the Rossby radius of deformation. In contrast, Poincaré waves are able to propagate both alongshore as a free wave with a propagating wave pattern and cross-shore as a trapped wave with a standing wave pattern.
Infinitely long channel
In an infinitely long channel, which can be viewed upon as a simplified approximation of the Atlantic Ocean and Pacific Ocean, the tide propagates as an incident and a reflective Kelvin wave. The amplitude of the waves decreases further away from the coast and at certain points in the middle of the basin, the amplitude of the total wave becomes zero. Moreover, the phase of the tide seems to rotate around these points of zero amplitude. These points are called amphidromic points. The sense of rotation of the wave around the amphidromic point is in the direction of the Coriolis force; anticlockwise in the northern hemisphere and clockwise in the southern hemisphere.
Semi-enclosed basin
In a semi-enclosed basin, such as the North Sea, Kelvin waves, though being the dominant tidal wave propagating in alongshore direction, are not able to propagate cross shore as they rely on the presence of lateral boundaries or the equator. As such, the tidal waves observed cross-shore are predominantly Poincaré waves. The tides observed in a semi-enclosed basin are therefore chiefly the summation of the incident Kelvin wave, reflected Kelvin wave and cross-shore standing Poincaré wave. An animation of the tidal amplitude, tidal currents and its amphidromic behaviour is shown in Animation 2.
Position of amphidromic points
Figure 2 shows that the first node of the total wave is located at ¼λ, with further nodes recurring at intervals of ½λ. In an idealized situation, amphidromic points are found at the positions of these nodes of the total tidal wave. When friction is neglected, the amphidromic points would lie in the middle of the basin, since the initial amplitude and the amplitude decay of the incident wave and the reflected wave are equal; this can be seen in Animations 1 and 2. However, tidal waves in the ocean are subject to friction from the seabed and from interaction with coastal boundaries. Moreover, variation in water depth influences the spacing between amphidromic points.
Firstly, the distance between amphidromic points depends on the water depth. For a shallow-water wave, the wavelength is λ = T√(gD), where g is the gravitational acceleration, D is the water depth and T is the period of the wave; successive nodes – and hence amphidromic points – are therefore spaced ½λ apart.
Locations with shallower water have their amphidromic points closer together, as the node interval (½λ) decreases. Secondly, energy losses due to friction in shallow seas and at coastal boundaries result in additional adjustments of the tidal pattern. Tidal waves are not perfectly reflected, and the resulting energy loss makes the reflected wave smaller than the incoming wave. Consequently, on the northern hemisphere, the amphidromic point is displaced from the centre line of the channel towards the left of the direction of the incident wave.
The degree of displacement of the first amphidrome from the centre of the channel on the northern hemisphere is given by γ = (√(gD) / (2f)) ln α, where γ is the displacement of the amphidrome from the centre of the channel (γ = 0), g is the gravitational acceleration, D is the water depth, f is the Coriolis frequency and α is the ratio between the amplitudes of the reflected wave and the incident wave. Because the reflected wave is smaller than the incident wave, α will be smaller than 1 and ln α will be negative. Hence the amphidromic displacement γ is to the left of the incident wave on the northern hemisphere.
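As a rough numerical illustration of these relations (the water depth, latitude and amplitude ratio below are assumptions chosen for the example, not values for a specific sea), the following Python sketch evaluates the node spacing and the frictional displacement of the first amphidrome:

import math

g = 9.81                          # gravitational acceleration, m/s^2
D = 50.0                          # assumed water depth of a shelf sea, m
T = 12.42 * 3600                  # period of the M2 tide, s
latitude_deg = 54.0               # assumed latitude
alpha = 0.7                       # assumed ratio of reflected to incident amplitude

c = math.sqrt(g * D)              # shallow-water wave speed, m/s
wavelength = T * c                # lambda = T * sqrt(gD)
node_spacing = wavelength / 2     # amphidromes recur every half wavelength

omega_earth = 7.2921e-5           # Earth's rotation rate, rad/s
f = 2 * omega_earth * math.sin(math.radians(latitude_deg))   # Coriolis frequency

# Displacement of the first amphidrome from the channel centre;
# the negative sign indicates a shift to the left of the incident wave.
gamma = (c / (2 * f)) * math.log(alpha)

print(f"node spacing ≈ {node_spacing / 1e3:.0f} km")
print(f"amphidrome displacement ≈ {gamma / 1e3:.0f} km")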
Furthermore, a study has shown that there is a pattern of amphidrome movement related to spring–neap cycles in the Irish Sea. The maximum displacement of the amphidrome from the centre coincides with spring tides, whereas the minimum occurs at neaps. During spring tides, more energy is absorbed from the tidal wave than during neap tides. As a result, the reflection coefficient α is smaller and the displacement of the amphidromic point from the centre is larger. Similar amphidrome movement is expected in other seas where energy dissipation due to friction is high.
It can occur that the amphidromic point moves inland of the coastal boundary. In this case, the amplitude and the phase of the tidal wave will still rotate around an inland point, which is called a virtual or degenerate amphidrome.
Amphidromic points and sea level rise
The position of amphidromic points and their movement predominantly depend on the wavelength of the tidal wave and on friction. As a result of enhanced greenhouse gas emissions, the world's oceans are subject to sea-level rise. As the water depth increases, the wavelength of the tidal wave will increase. Consequently, the amphidromic points located at ¼λ in semi-enclosed systems will move further away from the cross-shore coastal boundary, and amphidromic points will move further apart as the ½λ interval increases. This effect will be more pronounced in shallow seas and coastal regions, where the relative increase in water depth due to sea-level rise is larger than in the open ocean. Moreover, the amount of sea-level rise differs per region: some regions will be subject to a higher rate of sea-level rise than others, and nearby amphidromic points will be more susceptible to a change of location. Lastly, sea-level rise results in less bottom friction and therefore less energy dissipation, which causes the amphidromic points to move further away from the coastal boundaries and towards the centre of their channel or basin.
In the M2 tidal constituent
Based on Figure 1, there are the following clockwise and anticlockwise amphidromic points:
Clockwise amphidromic points
north of the Seychelles
near Enderby Land
off Perth
east of New Guinea
south of Easter Island
west of the Galapagos Islands
north of Queen Maud Land
Counterclockwise amphidromic points
near Sri Lanka
north of New Guinea
at Tahiti
between Mexico and Hawaii
near the Leeward Islands
east of Newfoundland
midway between Rio de Janeiro and Angola
east of Iceland
Outside Eigersund in southwestern Norway
The islands of Madagascar and New Zealand are amphidromic points in the sense that the tide goes around them in about 12 and a half hours, but the amplitude of the tides on their coasts is in some places large.
See also
Kelvin wave
Tides
Theory of tides
References and notes
Wave mechanics
Tides | Amphidromic point | [
"Physics"
] | 2,269 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
336,175 | https://en.wikipedia.org/wiki/Jute | Jute ( ) is a long, rough, shiny bast fibre that can be spun into coarse, strong threads. It is produced from flowering plants in the genus Corchorus, of the mallow family Malvaceae. The primary source of the fiber is Corchorus olitorius, but such fiber is considered inferior to that derived from Corchorus capsularis.
Jute fibers, composed primarily of cellulose and lignin, are collected from bast (the phloem of the plant, sometimes called the "skin") of plants like kenaf, industrial hemp, flax (linen), and ramie. The industrial term for jute fiber is raw jute. The fibers are off-white to brown and range from long. In Bangladesh, jute is called the "golden fiber" for its color and monetary value.
The bulk of the jute trade is centered in South Asia, with India and Bangladesh as the primary producers. The majority of jute is used for durable and sustainable packaging, such as burlap sacks. Its production and usage declined as disposable plastic packaging became common, but this trend has begun to reverse as merchants and even nations phase out or ban single-use plastics.
Cultivation
The jute plant needs plain alluvial soil and standing water. During the monsoon season, the monsoon climate offers a warm and wet environment which is suitable for growing jute. Temperatures from and relative humidity of 70%–80% are favorable for successful cultivation. Jute requires of rainfall weekly, and more during the sowing time. Soft water is necessary for jute production.
White jute (Corchorus capsularis)
Historical documents (including Ain-e-Akbari by Abu'l-Fazl ibn Mubarak in 1590) state that the poor villagers of India used to wear clothing made of jute. The weavers used simple hand-spinning wheels and hand looms, which they also used to spin cotton yarns. History also suggests that Indians, especially Bengalis, used ropes and twines made of white jute from ancient times for household and other uses. Jute is highly functional for carrying grains or other agricultural products.
Tossa jute (Corchorus olitorius)
Tossa jute (Corchorus olitorius) is a variety thought to be native to South Asia. It is grown for both fiber and culinary purposes. People use the leaves as an ingredient in a mucilaginous potherb called "molokhiya" (, of uncertain etymology), which is mainly used in some Arab countries such as Egypt, Jordan, and Syria as a soup-based dish, sometimes with meat over rice or lentils. The King James translation of the Book of Job (chapter 30, verse 4) in the Hebrew Bible mistranslates the word maluaḥ, which means Atriplex, as "mallow", which in turn has led some to identify this jute species as what was meant by the translators and led it to be called 'Jew's mallow' in English. It is high in protein, vitamin C, beta-carotene, calcium, and iron.
Bangladesh, other countries in Southeast Asia, and the South Pacific mainly use jute for its fiber. Tossa jute fiber is softer, silkier, and stronger than white jute. This variety is well suited to the Ganges Delta climate. Along with white jute, tossa jute has also been cultivated in the soil of Bengal, where it has been known as paat since the start of the 19th century. Coremantel, Bangladesh, is the largest global producer of the tossa jute variety. In India, West Bengal is the largest producer of jute.
History
Jute has been used for making textiles in the Indus valley civilization since the 3rd millennium BC.
For centuries, jute has been a part of the culture of Bangladesh and some parts of West Bengal and Assam. The British started trading in jute during the seventeenth century. During the reign of the British Empire, jute was also used in the military. British jute barons grew rich by processing jute and selling manufactured products made from it. Dundee Jute Barons and the British East India Company set up many jute mills in Bengal, and by 1895 jute industries in Bengal overtook the Scottish jute trade. Many Scots emigrated to Bengal to set up jute factories. More than a billion jute sandbags were exported from Bengal to the trenches of World War I, and to the American South for bagging cotton. It was used in multiple industries, including the fishing, construction, art, and arms industries.
Due to its coarse and tough texture, jute could initially only be processed by hand, until it was discovered in Dundee that treating it with whale oil made it machine-processable. The industry boomed throughout the eighteenth and nineteenth centuries ("jute weaver" was a recognized trade occupation in the 1901 UK census), but this trade had largely ceased by about 1970, displaced by synthetic fibres. In the 21st century, jute has become a large export again, mainly in Bangladesh.
Production
The jute fiber comes from the stem and ribbon (outer skin) of the jute plant. The fibers are first extracted by retting, a process in which jute stems are bundled together and immersed in slow running water. There are two types of retting: stem and ribbon. After the retting process, stripping begins. In the stripping process, workers scrape off non-fibrous matter, then dig in and grab the fibers from within the jute stem.
Jute is a rain-fed crop with little need for fertilizer or pesticides, in contrast to cotton's heavy requirements. Production in India is concentrated mostly in West Bengal. India is the world's largest producer of jute, but imported approximately 162,000 tonnes of raw fiber and 175,000 tonnes of jute products in 2011. India, Pakistan, and China import significant quantities of jute fiber and products from Bangladesh, as do the United Kingdom, Japan, United States, France, Spain, Ivory Coast, Germany and Brazil. Jute and jute products formerly held the top position among Bangladesh's most exported goods, although now they stand second after ready-made apparel. Annually, Bangladesh produces 7 to 8 million bales of raw jute, out of which 0.6 to 0.8 million bales are exported to international markets. China, India, and Pakistan are the primary importers of Bangladeshi raw jute.
Genome
In 2002, Bangladesh commissioned a consortium of researchers from University of Dhaka, Bangladesh Jute Research Institute (BJRI) and private software firm DataSoft Systems Bangladesh Ltd., in collaboration with the Centre for Chemical Biology, University of Science Malaysia and University of Hawaii, to research different fibers and hybrid fibers of jute. The draft genome of jute (Corchorus olitorius) was completed.
Uses
Jute is a relatively cheap and versatile fiber with a wide variety of uses in cordage and cloth. It is commonly used to make burlap sacks.
The jute plant also has some culinary uses, which are generally focused on the leaves.
Due to its durability and biodegradability, jute matting is used as a temporary solution to prevent flood erosion.
Researchers have also investigated the possibility of using jute and glucose to build aeroplane panels.
Fibers
Individual jute fibers can range from very fine to very coarse, and the varied fibers are suited for a variety of uses.
The coarser fibers, which are called jute butts, are used alone or combined with other fibers to make many products:
Hessian cloth
Sacking
Agricultural wrapping cloth, most notably wrapping for bales of raw cotton
Sandbags
Cloth backing for flooring, such as linoleum or carpet
Cordage, such as twine or rope
Pulp (for paper production)
Finer jute fibers can be processed for use in:
Shoes, such as espadrilles
Sweaters and cardigans
Imitation silk
Curtains
Chair coverings
Carpets
Rugs
Jute was historically used in traditional textile machinery because jute fibers contain cellulose (vegetable fiber) and lignin (wood fiber). Later, several industries, such as the automotive, pulp and paper, furniture, and bedding industries, started to use jute and its allied fibers with their non-woven and composite technology to manufacture nonwoven fabric, technical textiles, and composites.
Jute is used in the manufacture of fabrics such as Hessian cloth, sacking, scrim, carpet backing cloth (CBC), and canvas. Hessian is lighter than sacking and is used for bags, wrappers, wall coverings, upholstery, and home furnishings. Sacking, a fabric made of heavy jute fibers, is named after its principal use. CBC made of jute comes in two types: primary and secondary. Primary CBC provides a tufting surface, while secondary CBC is bonded onto the primary backing for an overlay. Jute packaging is sometimes used as an environmentally friendly substitute for plastic.
Other jute consumer products include floor coverings, high performance technical textiles, geotextiles, and composites. Jute has been used as a home textile due to its anti-static and color- and light-fast properties, as well as its strength, durability, UV protection, sound and heat insulation, and low thermal conductivity.
Culinary uses
Corchorus olitorius leaves are used to make mulukhiya, which is sometimes considered the Egyptian national dish, and is also consumed in Cyprus and other Middle Eastern countries. These leaves are an ingredient in stews, typically cooked with lamb or chicken.
In India (West Bengal) and Bangladesh, in the Bengali cuisine, the fresh leaves are stir fried and eaten as path saak bhaja (পাঠ শাক ভাজা) along with a mustard sauce called kasundi (কাসুন্দি). The leaves are also eaten by making pakoras (পাঠ পাতার বড়া) with rice flour or Gram flour batter.
In Nigeria, leaves of Corchorus olitorius are prepared in a sticky soup called ewedu together with ingredients such as sweet potato, dried small fish, or shrimp. The leaves are rubbed until foamy or sticky before they are added to the soup. Among the Yoruba people of Nigeria, the leaves are called ewedu, and in Hausa-speaking northern Nigeria, the leaves are called turgunuwa or lallo. The cook shreds the jute leaves and adds them to the soup, which generally also contains meat or fish, onions, pepper, and other spices. The Lugbara of northwestern Uganda also eat jute leaves, in a soup called pala bi. Jute is also a totem for Ayivu, one of the Lugbara clans.
In the Philippines, especially in Ilocano-dominated areas, this vegetable, which is locally known as saluyot, can be mixed with bitter gourd, bamboo shoots, loofah, or a combination of these ingredients, which have a slimy and slippery texture.
Vietnamese cuisine also uses edible jute, known as rau đay. It is usually used in canh, a soup cooked with crab and loofah.
In Haiti, a dish called "Lalo" is made with jute leaves and other ingredients. One version of Lalo includes lalo with crab and meat (such as pork or beef) served on a bed of rice.
Environmental impact
Fabrics made of jute fibers are carbon neutral and biodegradable, which make jute a candidate material for high performance technical textiles.
As global concern over forest destruction increases, jute may begin to replace wood as a primary pulp ingredient.
Cultural significance
See also
Cash crop
Economy of Bangladesh
International Jute Study Group
International Year of Natural Fibres
Kenaf
Ministry of Textiles and Jute
Spinning (textiles)
References
Further reading
Basu, G., A. K. Sinha, and S. N. Chattopadhyay. "Properties of Jute Based Ternary Blended Bulked Yarns". Man-Made Textiles in India. Vol. 48, no. 9 (Sep. 2005): 350–353. (AN 18605324)
Chattopadhyay, S. N., N. C. Pan, and A. Day. "A Novel Process of Dyeing of Jute Fabric Using Reactive Dye". Textile Industry of India. Vol. 42, no. 9 (Sep. 2004): 15–22. (AN 17093709)
Doraiswamy, I., A. Basu, and K. P. Chellamani. "Development of Fine Quality Jute Fibers". Colourage. Nov. 6–8, 1998, 2p. (AN TDH0624047199903296)
Kozlowski, R., and S. Manys. "Green Fibers". The Textile Institute. Textile Industry: Winning Strategies for the New Millennium—Papers Presented at the World Conference. Feb. 10–13, 1999: 29 (13p). (AN TDH0646343200106392)
Madhu, T. "Bio-Composites—An Overview". Textile Magazine. Vol. 43, no. 8 (Jun. 2002): 49 (2 pp). (AN TDH0656367200206816)
Maulik, S. R. "Chemical Modification of Jute". Asian Textile Journal. Vol. 10, no. 7 (Jul. 2001): 99 (8 pp). (AN TDH0648424200108473)
Moses, J. Jeyakodi, and M. Ramasamy. "Quality Improvement on Jute and Jute Cotton Materials Using Enzyme Treatment and Natural Dyeing". Man-Made Textiles in India. Vol. 47, no. 7 (Jul. 2004): 252–255. (AN 14075527)
Pan, N. C., S. N. Chattopadhyay, and A. Day. "Dyeing of Jute Fabric with Natural Dye Extracted from Marigold Flower". Asian Textile Journal. Vol. 13, no. 7 (Jul. 2004): 80–82. (AN 15081016)
Pan, N. C., A. Day, and K. K. Mahalanabis. "Properties of Jute". Indian Textile Journal. Vol. 110, no. 5 (Feb. 2000): 16. (AN TDH0635236200004885)
Roy, T. K. G., S. K. Chatterjee, and B. D. Gupta. "Comparative Studies on Bleaching and Dyeing of Jute after Processing with Mineral Oil in Water Emulsion vis-a-vis Self-Emulsifiable Castor Oil". Colourage. Vol. 49, no. 8 (Aug. 2002): 27 (5 pp). (AN TDH0657901200208350)
Shenai, V. A. "Enzyme Treatment". Indian Textile Journal. Vol. 114, no. 2 (Nov. 2003): 112–113. (AN 13153355)
Srinivasan, J., A. Venkatachalam, and P. Radhakrishnan. "Small-Scale Jute Spinning: An Analysis". Textile Magazine. Vol. 40, no. 4 (Feb. 1999): 29. (ANTDH0624005199903254)
Tomlinson, Jim. Carlo Morelli and Valerie Wright. The Decline of Jute: Managing Industrial Decline (London: Pickering and Chatto, 2011) 219 pp. . focus on Dundee, Scotland
Vijayakumar, K. A., and P. R. Raajendraa. "A New Method to Determine the Proportion of Jute in a Jute/Cotton Blend". Asian Textile Journal, Vol. 14, no. 5 (May 2005): 70–72. (AN 18137355)
External links
Jute Genome Project
Bangladesh Jute Research Institute
International Jute Study Group (IJSG) Resources about jute, kenaf, and roselle plants. jute.org
Department of Horticulture & Landscape Architecture, Purdue University Some chemistry and medicinal information on tossa jute. purdue.edu
National Library of Scotland: SCOTTISH SCREEN ARCHIVE (selection of archive films about the jute industry in Dundee)
Biodegradable materials
Packaging materials
Leaf vegetables | Jute | [
"Physics",
"Chemistry"
] | 3,450 | [
"Biodegradation",
"Biodegradable materials",
"Materials",
"Matter"
] |
336,254 | https://en.wikipedia.org/wiki/De%20Sitter%20universe | A de Sitter universe is a cosmological solution to the Einstein field equations of general relativity, named after Willem de Sitter. It models the universe as spatially flat and neglects ordinary matter, so the dynamics of the universe are dominated by the cosmological constant, thought to correspond to dark energy in our universe or the inflaton field in the early universe. According to the models of inflation and current observations of the accelerating universe, the concordance models of physical cosmology are converging on a consistent model where our universe was best described as a de Sitter universe at about a time after the fiducial Big Bang singularity, and far into the future.
Mathematical expression
A de Sitter universe has no ordinary matter content but has a positive cosmological constant (Λ) that sets the expansion rate, H. A larger cosmological constant leads to a larger expansion rate, H ∝ √Λ, where the constant of proportionality depends on conventions.
It is common to describe a patch of this solution as an expanding universe of the FLRW form, where the scale factor is given by a(t) = exp(Ht), where the constant H is the Hubble expansion rate and t is time. As in all FLRW spaces, a(t), the scale factor, describes the expansion of physical spatial distances.
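For completeness, the following is a minimal LaTeX sketch of the standard textbook step (not specific to this article) showing how the exponential scale factor follows from the spatially flat Friedmann equation when only a cosmological constant is present:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a spatially flat universe containing only a cosmological constant $\Lambda$,
the Friedmann equation reduces to
\begin{equation}
  \left(\frac{\dot{a}}{a}\right)^{2} = \frac{\Lambda c^{2}}{3} \equiv H^{2},
\end{equation}
so $\dot{a} = H a$ with $H$ constant, and integrating gives the exponential scale factor
\begin{equation}
  a(t) = a(t_{0})\, e^{H\,(t - t_{0})}.
\end{equation}
\end{document}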
Unique among universes described by the FLRW metric, a de Sitter universe has a Hubble law that is not only consistent through all space, but also through all time (since the deceleration parameter is q = −1), thus satisfying the perfect cosmological principle, which assumes isotropy and homogeneity throughout space and time. There are ways to cast de Sitter space with static coordinates (see de Sitter space), so unlike other FLRW models, de Sitter space can be thought of as a static solution to Einstein's equations, even though the geodesics followed by observers necessarily diverge as expected from the expansion of physical spatial dimensions. As a model for the universe, de Sitter's solution was not considered viable for the observed universe until models for inflation and dark energy were developed. Before then, it was assumed that the Big Bang implied only an acceptance of the weaker cosmological principle, which holds that isotropy and homogeneity apply spatially but not temporally.
Relative expansion
The exponential expansion of the scale factor means that the physical distance between any two non-accelerating observers will eventually be growing faster than the speed of light. At this point those two observers will no longer be able to make contact. Therefore, any observer in a de Sitter universe would have cosmological horizons beyond which that observer can never see nor learn any information. If our universe is approaching a de Sitter universe then eventually we will not be able to observe any galaxies other than our own Milky Way (and any others in the gravitationally bound Local Group, assuming they were to somehow survive to that time without merging).
Role in the Benchmark Model
The Benchmark Model is a model consisting of a universe made of three components – radiation, ordinary matter, and dark energy – that fit current data about the history of the universe. These components make different contributions to the expansion of the universe as time elapses. Specifically, when the universe is radiation dominated, the scale factor grows as a(t) ∝ t^(1/2), and when the universe is matter dominated, as a(t) ∝ t^(2/3). Since both of these grow more slowly than the exponential, in the future the scale factor will be dominated by the exponential factor representing the pure de Sitter universe. The point at which this starts to occur is known as the matter–lambda equivalence point, and the modern-day universe is believed to be relatively close to this point.
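A minimal numerical sketch (illustrative only; the value of the Hubble constant below is an assumption) compares the logarithmic growth rates of the three regimes and locates the time after which the constant de Sitter rate dominates the decaying power-law rates:

import numpy as np

H0 = 70 / 3.086e19                # assumed Hubble constant: 70 km/s/Mpc converted to 1/s
GYR = 3.156e16                    # seconds per gigayear

# Logarithmic growth rates d(ln a)/dt of the three regimes:
#   radiation-dominated: a ∝ t^(1/2) -> rate = 1/(2t)
#   matter-dominated:    a ∝ t^(2/3) -> rate = 2/(3t)
#   de Sitter (Lambda):  a ∝ exp(Ht) -> rate = H (constant)
t = np.linspace(1, 60, 600) * GYR
rate_matter = 2.0 / (3.0 * t)
rate_lambda = np.full_like(t, H0)

idx = np.argmax(rate_lambda > rate_matter)   # first grid point where the constant rate wins
print(f"exponential rate exceeds the matter-dominated rate near t ≈ {t[idx] / GYR:.1f} Gyr "
      f"(analytically 2/(3*H0) ≈ {2 / (3 * H0) / GYR:.1f} Gyr)")
print(f"against the radiation-dominated rate the crossover is at "
      f"1/(2*H0) ≈ {1 / (2 * H0) / GYR:.1f} Gyr")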
See also
Cosmic inflation
De Sitter space – for more mathematical properties
Deceleration parameter
Causal patch
Lambda-CDM model
References
Physical cosmology
Exact solutions in general relativity
Inflation (cosmology)
"Physics",
"Astronomy",
"Mathematics"
] | 793 | [
"Exact solutions in general relativity",
"Theoretical physics",
"Mathematical objects",
"Astrophysics",
"Equations",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
336,271 | https://en.wikipedia.org/wiki/Approximation | An approximation is anything that is intentionally similar but not exactly equal to something else.
Etymology and usage
The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning. It is often found abbreviated as approx.
The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock).
Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.
In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.
The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.
Mathematics
Approximation theory is a branch of mathematics, and a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers.
Approximation usually occurs when an exact form or an exact numerical value is unknown or difficult to obtain. However, some known form may exist and be able to represent the real form, so that no significant deviation is found. For example, 1.5 × 10⁶ means that the true value of something being measured is 1,500,000 to the nearest hundred thousand (so the actual value is somewhere between 1,450,000 and 1,550,000); this is in contrast to the notation 1.500 × 10⁶, which means that the true value is 1,500,000 to the nearest thousand (implying that the true value is somewhere between 1,499,500 and 1,500,500).
Numerical approximations sometimes result from using a small number of significant digits. Calculations are likely to involve rounding errors and other approximation errors. Log tables, slide rules and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation expressed in a limited number of significant digits, although they can be programmed to produce more precise results. Approximation can occur when a decimal number cannot be expressed in a finite number of binary digits.
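A small Python sketch (illustrative only) shows both effects mentioned above: a result limited to a fixed number of significant digits, and a decimal number that has no finite binary representation:

# Rounding to a limited number of significant digits
value = 1_499_987.0
print(f"{value:.2g}")        # prints '1.5e+06' – two significant digits, an approximation

# 0.1 has no finite binary expansion, so it is stored approximately
print(0.1 + 0.2)             # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False

# Displaying more digits exposes the stored approximation of 0.1
print(f"{0.1:.20f}")         # 0.10000000000000000555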
Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum is asymptotically equal to k. No consistent notation is used throughout mathematics and some texts use ≈ to mean approximately equal and ~ to mean asymptotically equal whereas other texts use the symbols the other way around.
Typography
The approximately equals sign, ≈, was introduced by British mathematician Alfred Greenhill in 1892, in his book Applications of Elliptic Functions.
LaTeX symbols
Symbols used in LaTeX markup.
(\approx), usually to indicate approximation between numbers, like .
(\not\approx), usually to indicate that numbers are not approximately equal ().
(\simeq), usually to indicate asymptotic equivalence between functions, like .
So writing would be wrong under this definition, despite wide use.
(\sim), usually to indicate proportionality between functions; the example from the line above can equally be written using this symbol.
(\cong), usually to indicate congruence between figures, like .
(\eqsim), usually to indicate that two quantities are equal up to constants.
(\lessapprox) and (\gtrapprox), usually to indicate that either the inequality holds or the two values are approximately equal.
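For readers unfamiliar with these commands, the fragment below shows one way they might be used. It is an illustrative sketch only; the particular numbers, functions, and figures are placeholder examples rather than content from the article.

```latex
% Minimal illustration of the symbols listed above (the example relations
% are placeholders chosen for illustration only).
\documentclass{article}
\usepackage{amssymb}
\begin{document}
\( \pi \approx 3.14159 \)                % approximately equal
\( \pi \not\approx 3 \)                  % not approximately equal
\( \sinh x \simeq \tfrac{e^{x}}{2} \)    % asymptotic equivalence for large x
\( f(x) \sim x^{2} \)                    % proportional / same order
\( \triangle ABC \cong \triangle DEF \)  % congruent figures
\( a_n \eqsim b_n \)                     % equal up to constants
\( x \lessapprox y \)                    % less than or approximately equal
\end{document}
```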
Unicode
Symbols used to denote items that are approximately equal are wavy or dotted equals signs.
Science
Approximation arises naturally in scientific experiments. The predictions of a scientific theory can differ from actual measurements. This can be because there are factors in the real situation that are not included in the theory. For example, simple calculations may not include the effect of air resistance. Under these circumstances, the theory is an approximation to reality. Differences may also arise because of limitations in the measuring technique. In this case, the measurement is an approximation to the actual value.
The history of science shows that earlier theories and laws can be approximations to some deeper set of laws. Under the correspondence principle, a new scientific theory should reproduce the results of older, well-established, theories in those domains where the old theories work. The old theory becomes an approximation to the new theory.
Some problems in physics are too complex to solve by direct analysis, or progress could be limited by available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. Physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical characteristics (e.g., gravity) are much easier to calculate for a sphere than for other shapes.
Approximation is also used to analyze the motion of several planets orbiting a star. This is extremely difficult due to the complex interactions of the planets' gravitational effects on each other. An approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained.
The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yields more accurate solutions.
The most common versions of philosophy of science accept that empirical measurements are always approximations — they do not perfectly represent what is being measured.
Law
Within the European Union (EU), "approximation" refers to a process through which EU legislation is implemented and incorporated within Member States' national laws, despite variations in the existing legal framework in each country. Approximation is required as part of the pre-accession process for new member states, and as a continuing process when required by an EU Directive. Approximation is a key word generally employed within the title of a directive, for example the Trade Marks Directive of 16 December 2015 serves "to approximate the laws of the Member States relating to trade marks". The European Commission describes approximation of law as "a unique obligation of membership in the European Union".
See also
Double tilde (disambiguation) – various meanings of ~~ or ≈
References
External links
Numerical analysis
Equivalence (mathematics)
Comparison (mathematical) | Approximation | [
"Mathematics"
] | 1,444 | [
"Computational mathematics",
"Arithmetic",
"Mathematical relations",
"Comparison (mathematical)",
"Numerical analysis",
"Approximations"
] |
336,349 | https://en.wikipedia.org/wiki/Egyptian%20fraction | An Egyptian fraction is a finite sum of distinct unit fractions, such as
That is, each fraction in the expression has a numerator equal to 1 and a denominator that is a positive integer, and all the denominators differ from each other. The value of an expression of this type is a positive rational number; for instance the Egyptian fraction above sums to . Every positive rational number can be represented by an Egyptian fraction. Sums of this type, and similar sums also including 2/3 and 3/4 as summands, were used as a serious notation for rational numbers by the ancient Egyptians, and continued to be used by other civilizations into medieval times. In modern mathematical notation, Egyptian fractions have been superseded by vulgar fractions and decimal notation. However, Egyptian fractions continue to be an object of study in modern number theory and recreational mathematics, as well as in modern historical studies of ancient mathematics.
Applications
Beyond their historical use, Egyptian fractions have some practical advantages over other representations of fractional numbers.
For instance, Egyptian fractions can help in dividing food or other objects into equal shares. For example, if one wants to divide 5 pizzas equally among 8 diners, the Egyptian fraction 5/8 = 1/2 + 1/8
means that each diner gets half a pizza plus another eighth of a pizza, for example by splitting 4 pizzas into 8 halves, and the remaining pizza into 8 eighths. Exercises in performing this sort of fair division of food are a standard classroom example in teaching students to work with unit fractions.
Egyptian fractions can provide a solution to rope-burning puzzles, in which a given duration is to be measured by igniting non-uniform ropes which burn out after a unit time. Any rational fraction of a unit of time can be measured by expanding the fraction into a sum of unit fractions and then, for each unit fraction 1/x, burning a rope so that it always has x simultaneously lit points where it is burning. For this application, it is not necessary for the unit fractions to be distinct from each other. However, this solution may need an infinite number of re-lighting steps.
Early history
Egyptian fraction notation was developed in the Middle Kingdom of Egypt. Five early texts in which Egyptian fractions appear were the Egyptian Mathematical Leather Roll, the Moscow Mathematical Papyrus, the Reisner Papyrus, the Kahun Papyrus and the Akhmim Wooden Tablet. A later text, the Rhind Mathematical Papyrus, introduced improved ways of writing Egyptian fractions. The Rhind papyrus was written by Ahmes and dates from the Second Intermediate Period; it includes a table of Egyptian fraction expansions for rational numbers of the form 2/n, as well as 84 word problems. Solutions to each problem were written out in scribal shorthand, with the final answers of all 84 problems being expressed in Egyptian fraction notation. Tables of expansions for 2/n similar to the one on the Rhind papyrus also appear on some of the other texts. However, as the Kahun Papyrus shows, vulgar fractions were also used by scribes within their calculations.
Notation
To write the unit fractions used in their Egyptian fraction notation, in hieroglyph script, the Egyptians placed the hieroglyph:
(er, "[one] among" or possibly re, mouth) above a number to represent the reciprocal of that number. Similarly in hieratic script they drew a line over the letter representing the number. For example:
The Egyptians had special symbols for 1/2, 2/3, and 3/4 that were used to reduce the size of numbers greater than 1/2 when such numbers were converted to an Egyptian fraction series. The remaining number after subtracting one of these special fractions was written as a sum of distinct unit fractions according to the usual Egyptian fraction notation.
The Egyptians also used an alternative notation modified from the Old Kingdom to denote a special set of fractions of the form (for ) and sums of these numbers, which are necessarily dyadic rational numbers. These have been called "Horus-Eye fractions" after a theory (now discredited) that they were based on the parts of the Eye of Horus symbol.
They were used in the Middle Kingdom in conjunction with the later notation for Egyptian fractions to subdivide a hekat, the primary ancient Egyptian volume measure for grain, bread, and other small quantities of volume, as described in the Akhmim Wooden Tablet. If any remainder was left after expressing a quantity in Eye of Horus fractions of a hekat, the remainder was written using the usual Egyptian fraction notation as multiples of a ro, a unit equal to of a hekat.
Calculation methods
Modern historians of mathematics have studied the Rhind papyrus and other ancient sources in an attempt to discover the methods the Egyptians used in calculating with Egyptian fractions. In particular, study in this area has concentrated on understanding the tables of expansions for numbers of the form 2/n in the Rhind papyrus. Although these expansions can generally be described as algebraic identities, the methods used by the Egyptians may not correspond directly to these identities. Additionally, the expansions in the table do not match any single identity; rather, different identities match the expansions for prime and for composite denominators, and more than one identity fits the numbers of each type:
For small odd prime denominators p, the expansion 2/p = 2/(p + 1) + 2/(p(p + 1)) was used.
For larger prime denominators, an expansion of the form was used, where is a number with many divisors (such as a practical number) between and . The remaining term was expanded by representing the number as a sum of divisors of and forming a fraction for each such divisor in this sum. As an example, Ahmes' expansion fits this pattern with and , as and . There may be many different expansions of this type for a given ; however, as K. S. Brown observed, the expansion chosen by the Egyptians was often the one that caused the largest denominator to be as small as possible, among all expansions fitting this pattern.
For some composite denominators, factored as , the expansion for has the form of an expansion for with each denominator multiplied by . This method appears to have been used for many of the composite numbers in the Rhind papyrus, but there are exceptions, notably , , and .
One can also expand For instance, Ahmes expands . Later scribes used a more general form of this expansion, which works when is a multiple of .
The final (prime) expansion in the Rhind papyrus, 2/101, does not fit any of these forms, but instead uses an expansion that may be applied regardless of the value of the denominator. That is, 2/n = 1/n + 1/(2n) + 1/(3n) + 1/(6n). A related expansion was also used in the Egyptian Mathematical Leather Roll for several cases.
Later usage
Egyptian fraction notation continued to be used in Greek times and into the Middle Ages, despite complaints as early as Ptolemy's Almagest about the clumsiness of the notation compared to alternatives such as the Babylonian base-60 notation. Related problems of decomposition into unit fractions were also studied in 9th-century India by Jain mathematician Mahāvīra. An important text of medieval European mathematics, the Liber Abaci (1202) of Leonardo of Pisa (more commonly known as Fibonacci), provides some insight into the uses of Egyptian fractions in the Middle Ages, and introduces topics that continue to be important in modern mathematical study of these series.
The primary subject of the Liber Abaci is calculations involving decimal and vulgar fraction notation, which eventually replaced Egyptian fractions. Fibonacci himself used a complex notation for fractions involving a combination of a mixed radix notation with sums of fractions. Many of the calculations throughout Fibonacci's book involve numbers represented as Egyptian fractions, and one section of this book provides a list of methods for conversion of vulgar fractions to Egyptian fractions. If the number is not already a unit fraction, the first method in this list is to attempt to split the numerator into a sum of divisors of the denominator; this is possible whenever the denominator is a practical number, and Liber Abaci includes tables of expansions of this type for the practical numbers 6, 8, 12, 20, 24, 60, and 100.
The next several methods involve algebraic identities such as
For instance, Fibonacci represents the fraction by splitting the numerator into a sum of two numbers, each of which divides one plus the denominator: . Fibonacci applies the algebraic identity above to each these two parts, producing the expansion . Fibonacci describes similar methods for denominators that are two or three less than a number with many factors.
In the rare case that these other methods all fail, Fibonacci suggests a "greedy" algorithm for computing Egyptian fractions, in which one repeatedly chooses the unit fraction with the smallest denominator that is no larger than the remaining fraction to be expanded: that is, in more modern notation, we replace a fraction x/y by the expansion
x/y = 1/⌈y/x⌉ + ((−y) mod x)/(y⌈y/x⌉),
where ⌈y/x⌉ represents the ceiling of y/x; since (−y) mod x < x, the numerator strictly decreases at each step and this method yields a finite expansion.
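The following short Python sketch (an illustration of this greedy rule, not Fibonacci's own procedure; the function name is mine) expands a proper fraction in this way using exact rational arithmetic:

```python
from fractions import Fraction

def greedy_egyptian(x, y):
    """Expand a proper fraction x/y (0 < x < y) as a sum of distinct unit
    fractions with the Fibonacci-Sylvester greedy method: at each step take
    the largest unit fraction not exceeding what remains."""
    remainder = Fraction(x, y)
    denominators = []
    while remainder > 0:
        # ceiling division done in exact integer arithmetic
        d = -(-remainder.denominator // remainder.numerator)
        denominators.append(d)
        remainder -= Fraction(1, d)
    return denominators

print(greedy_egyptian(5, 8))    # [2, 8]                 5/8  = 1/2 + 1/8
print(greedy_egyptian(7, 15))   # [3, 8, 120]            7/15 = 1/3 + 1/8 + 1/120
print(greedy_egyptian(4, 17))   # [5, 29, 1233, 3039345] denominators grow quickly
```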
Fibonacci suggests switching to another method after the first such expansion, but he also gives examples in which this greedy expansion was iterated until a complete Egyptian fraction expansion was constructed: and .
Compared to ancient Egyptian expansions or to more modern methods, this method may produce expansions that are quite long, with large denominators, and Fibonacci himself noted the awkwardness of the expansions produced by this method. For instance, the greedy method expands
while other methods lead to the shorter expansion
Sylvester's sequence 2, 3, 7, 43, 1807, ... can be viewed as generated by an infinite greedy expansion of this type for the number 1, where at each step we choose the denominator ⌊y/x⌋ + 1 instead of ⌈y/x⌉, and sometimes Fibonacci's greedy algorithm is attributed to James Joseph Sylvester.
After his description of the greedy algorithm, Fibonacci suggests yet another method, expanding a fraction by searching for a number c having many divisors, with , replacing by , and expanding ac as a sum of divisors of bc, similar to the method proposed by Hultsch and Bruins to explain some of the expansions in the Rhind papyrus.
Modern number theory
Although Egyptian fractions are no longer used in most practical applications of mathematics, modern number theorists have continued to study many different problems related to them. These include problems of bounding the length or maximum denominator in Egyptian fraction representations, finding expansions of certain special forms or in which the denominators are all of some special type, the termination of various methods for Egyptian fraction expansion, and showing that expansions exist for any sufficiently dense set of sufficiently smooth numbers.
One of the earliest publications of Paul Erdős proved that it is not possible for a harmonic progression to form an Egyptian fraction representation of an integer. The reason is that, necessarily, at least one denominator of the progression will be divisible by a prime number that does not divide any other denominator. The latest publication of Erdős, nearly 20 years after his death, proves that every integer has a representation in which all denominators are products of three primes.
The Erdős–Graham conjecture in combinatorial number theory states that, if the integers greater than 1 are partitioned into finitely many subsets, then one of the subsets has a finite subset of itself whose reciprocals sum to one. That is, for every r, and every r-coloring of the integers greater than one, there is a finite monochromatic subset S of these integers such that the reciprocals of the members of S sum to one. The conjecture was proven in 2003 by Ernest S. Croot III.
Znám's problem and primary pseudoperfect numbers are closely related to the existence of Egyptian fractions of the form 1/x1 + 1/x2 + ... + 1/xk + 1/(x1x2...xk) = 1. For instance, the primary pseudoperfect number 1806 is the product of the prime numbers 2, 3, 7, and 43, and gives rise to the Egyptian fraction 1 = 1/2 + 1/3 + 1/7 + 1/43 + 1/1806.
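This identity is easy to confirm with exact arithmetic; the following throwaway check (not part of the article) verifies it:

```python
from fractions import Fraction

# Check that 1/2 + 1/3 + 1/7 + 1/43 + 1/1806 sums exactly to 1, and that
# the last denominator is the product of the others (2 * 3 * 7 * 43).
denominators = [2, 3, 7, 43, 1806]
print(sum(Fraction(1, d) for d in denominators))   # 1
print(2 * 3 * 7 * 43)                               # 1806
```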
Egyptian fractions are normally defined as requiring all denominators to be distinct, but this requirement can be relaxed to allow repeated denominators. However, this relaxed form of Egyptian fractions does not allow for any number to be represented using fewer fractions, as any expansion with repeated fractions can be converted to an Egyptian fraction of equal or smaller length by repeated application of the replacement 1/k + 1/k = 2/(k + 1) + 2/(k(k + 1)) if k is odd, or simply by replacing 1/k + 1/k by 2/k if k is even. This result was first proven by .
Graham and Jewett proved that it is similarly possible to convert expansions with repeated denominators to (longer) Egyptian fractions, via the replacement This method can lead to long expansions with large denominators, such as had originally used this replacement technique to show that any rational number has Egyptian fraction representations with arbitrarily large minimum denominators.
Any fraction has an Egyptian fraction representation in which the maximum denominator is bounded by and a representation with at most terms. The number of terms must sometimes be at least proportional to ; for instance this is true for the fractions in the sequence , , , , , ... whose denominators form Sylvester's sequence. It has been conjectured that terms are always enough. It is also possible to find representations in which both the maximum denominator and the number of terms are small.
characterized the numbers that can be represented by Egyptian fractions in which all denominators are nth powers. In particular, a rational number q can be represented as an Egyptian fraction with square denominators if and only if q lies in one of the two half-open intervals
showed that any rational number has very dense expansions, using a constant fraction of the denominators up to N for any sufficiently large N.
Engel expansion, sometimes called an Egyptian product, is a form of Egyptian fraction expansion in which each denominator is a multiple of the previous one: x = 1/a1 + 1/(a1a2) + 1/(a1a2a3) + ⋯. In addition, the sequence of multipliers ai is required to be nondecreasing. Every rational number has a finite Engel expansion, while irrational numbers have an infinite Engel expansion.
study numbers that have multiple distinct Egyptian fraction representations with the same number of terms and the same product of denominators; for instance, one of the examples they supply is Unlike the ancient Egyptians, they allow denominators to be repeated in these expansions. They apply their results for this problem to the characterization of free products of Abelian groups by a small number of numerical parameters: the rank of the commutator subgroup, the number of terms in the free product, and the product of the orders of the factors.
The number of different n-term Egyptian fraction representations of the number one is bounded above and below by double exponential functions of n.
Open problems
Some notable problems remain unsolved with regard to Egyptian fractions, despite considerable effort by mathematicians.
The Erdős–Straus conjecture concerns the length of the shortest expansion for a fraction of the form 4/n. Does an expansion of 4/n into three unit fractions exist for every n? It is known to be true for all n up to very large bounds checked by computer, and for all but a vanishingly small fraction of possible values of n, but the general truth of the conjecture remains unknown.
It is unknown whether an odd greedy expansion exists for every fraction with an odd denominator. If Fibonacci's greedy method is modified so that it always chooses the smallest possible odd denominator, under what conditions does this modified algorithm produce a finite expansion? An obvious necessary condition is that the starting fraction have an odd denominator y, and it is conjectured but not known that this is also a sufficient condition. It is known that every fraction x/y with odd y has an expansion into distinct odd unit fractions, constructed using a different method than the greedy algorithm.
It is possible to use brute-force search algorithms to find the Egyptian fraction representation of a given number with the fewest possible terms or minimizing the largest denominator; however, such algorithms can be quite inefficient. The existence of polynomial time algorithms for these problems, or more generally the computational complexity of such problems, remains unknown.
describes these problems in more detail and lists numerous additional open problems.
See also
List of sums of reciprocals
17-animal inheritance puzzle
Notes
References
External links
and , The Wolfram Demonstrations Project, based on programs by David Eppstein.
Recreational mathematics
Number theory | Egyptian fraction | [
"Mathematics"
] | 3,330 | [
"Recreational mathematics",
"Discrete mathematics",
"Number theory"
] |
336,451 | https://en.wikipedia.org/wiki/Fraser%20spiral%20illusion | The Fraser spiral illusion is an optical illusion that was first described by the British psychologist Sir James Fraser (1863–1936) in 1908.
The illusion is also known as the false spiral, or by its original name, the twisted cord illusion. The overlapping black arc segments appear to form a spiral; however, the arcs are a series of concentric circles.
The visual distortion is produced by combining a regular line pattern (the circles) with misaligned parts (the differently colored strands). Zöllner's illusion and the café wall illusion are based on a similar principle, like many other visual effects, in which a sequence of tilted elements causes the eye to perceive phantom twists and deviations.
The illusion is augmented by the spiral components in the checkered background. It is a unique illusion, where the observer can verify the concentric strands manually. When the strands are highlighted in a different colour, it becomes obvious to the observer that no spiral is present.
See also
Op-art
Mathematics and art
References
External links
Fraser's Spiral from MathWorld
An interactive Fraser Spiral
Optical illusions | Fraser spiral illusion | [
"Physics"
] | 219 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
336,555 | https://en.wikipedia.org/wiki/Jughead%20%28search%20engine%29 | Jughead is a search engine system for the Gopher protocol. It is distinct from Veronica in that it searches a single server at a time.
Jughead was developed by Rhett Jones in 1993 at the University of Utah.
The name "Jughead" was originally chosen to match the Archie search engine, as Jughead Jones is Archie Andrews' best friend in Archie Comics. Later a backronym was developed: Jonzy's Universal Gopher Hierarchy Excavation And Display.
It was released by the original author under the GNU General Public License in 2006, and its source code has been modernized to better run on current POSIX systems.
Due to trademark issues, the modified version was called Jugtail, and has been made available for download on GNU Savannah.
References
External links
Jughead Source
Jugtail Project
Internet protocols
Internet Standards
Unix Internet software
Gopher (protocol)
Internet search engines | Jughead (search engine) | [
"Technology"
] | 178 | [
"Computing stubs",
"World Wide Web stubs"
] |
336,557 | https://en.wikipedia.org/wiki/Blood%20test | A blood test is a laboratory analysis performed on a blood sample that is usually extracted from a vein in the arm using a hypodermic needle, or via fingerprick. Multiple tests for specific blood components, such as a glucose test or a cholesterol test, are often grouped together into one test panel called a blood panel or blood work. Blood tests are often used in health care to determine physiological and biochemical states, such as disease, mineral content, pharmaceutical drug effectiveness, and organ function. Typical clinical blood panels include a basic metabolic panel or a complete blood count. Blood tests are also used in drug tests to detect drug abuse.
Extraction
A venipuncture is useful as it is a minimally invasive way to obtain cells and extracellular fluid (plasma) from the body for analysis. Blood flows throughout the body, acting as a medium that provides oxygen and nutrients to tissues and carries waste products back to the excretory systems for disposal. Consequently, the state of the bloodstream affects or is affected by, many medical conditions. For these reasons, blood tests are the most commonly performed medical tests.
If only a few drops of blood are needed, a fingerstick is performed instead of a venipuncture.
Indwelling arterial, central venous and peripheral venous lines can also be used to draw blood.
Phlebotomists, laboratory practitioners and nurses are those in charge of extracting blood from a patient. However, in special circumstances, and/or emergency situations, paramedics and physicians extract the blood. Also, respiratory therapists are trained to extract arterial blood to examine arterial blood gases.
Types of tests
Biochemical analysis
A basic metabolic panel measures sodium, potassium, chloride, bicarbonate, blood urea nitrogen (BUN), magnesium, creatinine, glucose, and sometimes calcium. Tests that focus on cholesterol levels can determine LDL and HDL cholesterol levels, as well as triglyceride levels.
Some tests, such as those that measure glucose or a lipid profile, require fasting (or no food consumption) eight to twelve hours prior to the drawing of the blood sample.
For the majority of tests, blood is usually obtained from the patient's vein. Other specialized tests, such as the arterial blood gas test, require blood extracted from an artery. Blood gas analysis of arterial blood is primarily used to monitor carbon dioxide and oxygen levels related to pulmonary function, but is also used to measure blood pH and bicarbonate levels for certain metabolic conditions.
While the regular glucose test is taken at a certain point in time, the glucose tolerance test involves repeated testing to determine the rate at which glucose is processed by the body.
Blood tests are also used to identify autoimmune diseases and Immunoglobulin E-mediated food allergies (see also Radioallergosorbent test).
Normal ranges
Blood tests results should always be interpreted using the ranges provided by the laboratory that performed the test. Example ranges are shown below.
Common abbreviations
Upon completion of a blood test analysis, patients may receive a report with blood test abbreviations. Examples of common blood test abbreviations are shown below.
Molecular profiles
Protein electrophoresis (general technique—not a specific test)
Western blot (general technique—not a specific test)
Liver function tests
Polymerase chain reaction (DNA). DNA profiling is today possible with even very small quantities of blood: this is commonly used in forensic science, but is now also part of the diagnostic process of many disorders.
Northern blot (RNA)
Sexually transmitted diseases
Cellular evaluation
Full blood count (or "Complete Blood Count")
Hematocrit
MCV ("Mean Corpuscular Volume")
Mean corpuscular hemoglobin concentration (MCHC)
Erythrocyte sedimentation rate (ESR)
Cross-matching. Determination of blood type for blood transfusion or transplants
Blood cultures are commonly taken if infection is suspected. Positive cultures and resulting sensitivity results are often useful in guiding medical treatment.
Future alternatives
Saliva tests
In 2008, scientists announced that the more cost effective saliva testing could eventually replace some blood tests, as saliva contains 20% of the proteins found in blood. Saliva testing may not be appropriate or available for all markers. For example, lipid levels cannot be measured with saliva testing.
Microemulsion
In February 2011, Canadian researchers at the University of Calgary's Schulich School of Engineering announced a microchip for blood tests. Dubbed a microemulsion, it captures a droplet of blood inside a layer of another substance and can control the exact size and spacing of the droplets. The new test could improve the efficiency, accuracy, and speed of laboratory tests while also keeping costs low.
SIMBAS
In March 2011, a team of researchers from UC Berkeley, DCU and University of Valparaíso have developed lab-on-a-chip that can diagnose diseases within 10 minutes without the use of external tubing and extra components. It is called Self-powered Integrated Microfluidic Blood Analysis System (SIMBAS). It uses tiny trenches to separate blood cells from plasma (99 percent of blood cells were captured during experiments). Researchers used plastic components, to reduce manufacturing costs.
See also
Barbro Hjalmarsson
Biomarker (medicine), a protein or other biomolecule measured in a blood test
Blood film, a way to look at blood cells under a microscope
Blood gas test
Blood lead level
Hematology, the study of blood
Luminol, a visual test for blood left at crime scenes.
Reference ranges for blood tests
Schumm test, a common test for blood mismatch
:Category:Blood tests
List of medical tests
References | Blood test | [
"Chemistry"
] | 1,165 | [
"Blood tests",
"Chemical pathology"
] |
336,568 | https://en.wikipedia.org/wiki/Classical%20orthogonal%20polynomials | In mathematics, the classical orthogonal polynomials are the most widely used orthogonal polynomials: the Hermite polynomials, Laguerre polynomials, Jacobi polynomials (including as a special case the Gegenbauer polynomials, Chebyshev polynomials, and Legendre polynomials).
They have many important applications in such areas as mathematical physics (in particular, the theory of random matrices), approximation theory, numerical analysis, and many others.
Classical orthogonal polynomials appeared in the early 19th century in the works of Adrien-Marie Legendre, who introduced the Legendre polynomials. In the late 19th century, the study of continued fractions to solve the moment problem by P. L. Chebyshev and then A.A. Markov and T.J. Stieltjes led to the general notion of orthogonal polynomials.
For given polynomials Q and L, the classical orthogonal polynomials are characterized by being solutions of the differential equation
Q(x) f'' + L(x) f' + λf = 0
with constants λ to be determined.
There are several more general definitions of orthogonal classical polynomials; for example, use the term for all polynomials in the Askey scheme.
Definition
In general, the orthogonal polynomials with respect to a weight satisfy
The relations above define up to multiplication by a number. Various normalisations are used to fix the constant, e.g.
The classical orthogonal polynomials correspond to the following three families of weights:
The standard normalisation (also called standardization) is detailed below.
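As a concrete numerical illustration (not part of the article), the following Python sketch checks the orthogonality relation for the first few Legendre polynomials, whose weight is W(x) = 1 on [−1, 1]; the quadrature order used here is an arbitrary choice.

```python
import numpy as np
from numpy.polynomial import legendre

# Check orthogonality of the first few Legendre polynomials P_m, P_n under
# the weight W(x) = 1 on [-1, 1], using Gauss-Legendre quadrature (exact
# here, since the integrands are low-degree polynomials).
nodes, weights = legendre.leggauss(20)

def P(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return legendre.legval(x, coeffs)

for m in range(4):
    row = []
    for n in range(4):
        inner = np.sum(weights * P(m, nodes) * P(n, nodes))
        row.append(f"{inner:+.4f}")
    print("  ".join(row))
# Off-diagonal inner products vanish; the diagonal entries are 2/(2n + 1).
```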
Jacobi polynomials
For the Jacobi polynomials are given by the formula
They are normalised (standardized) by
and satisfy the orthogonality condition
The Jacobi polynomials are solutions to the differential equation
Important special cases
The Jacobi polynomials with are called the Gegenbauer polynomials (with parameter )
For , these are called the Legendre polynomials (for which the interval of orthogonality is [−1, 1] and the weight function is simply 1):
For , one obtains the Chebyshev polynomials (of the second and first kind, respectively).
Hermite polynomials
The Hermite polynomials are defined by
They satisfy the orthogonality condition
and the differential equation
Laguerre polynomials
The generalised Laguerre polynomials are defined by
(the classical Laguerre polynomials correspond to .)
They satisfy the orthogonality relation
and the differential equation
Differential equation
The classical orthogonal polynomials arise from a differential equation of the form
Q(x) f'' + L(x) f' + λf = 0,
where Q is a given quadratic (at most) polynomial, and L is a given linear polynomial. The function f, and the constant λ, are to be found.
(Note that it makes sense for such an equation to have a polynomial solution.
Each term in the equation is a polynomial, and the degrees are consistent.)
This is a Sturm–Liouville type of equation. Such equations generally have singularities in their solution functions f except for particular values of λ. They can be thought of as eigenvector/eigenvalue problems: Letting D be the differential operator, D(f) = Q f'' + L f', and changing the sign of λ, the problem is to find the eigenvectors (eigenfunctions) f, and the
corresponding eigenvalues λ, such that f does not have singularities and D(f) = λf.
The solutions of this differential equation have singularities unless λ takes on
specific values. There is a series of numbers λ0, λ1, λ2, ... that lead to a series of polynomial solutions P0, P1, P2, ... if one of the following sets of conditions is met:
Q is actually quadratic, L is linear, Q has two distinct real roots, the root of L lies strictly between the roots of Q, and the leading terms of Q and L have the same sign.
Q is not actually quadratic, but is linear, L is linear, the roots of Q and L are different, and the leading terms of Q and L have the same sign if the root of L is less than the root of Q, or vice versa.
Q is just a nonzero constant, L is linear, and the leading term of L has the opposite sign of Q.
These three cases lead to the Jacobi-like, Laguerre-like, and Hermite-like polynomials, respectively.
In each of these three cases, we have the following:
The solutions are a series of polynomials P0, P1, P2, ..., each Pn having degree n, and corresponding to a number λn.
The interval of orthogonality is bounded by whatever roots Q has.
The root of L is inside the interval of orthogonality.
Letting , the polynomials are orthogonal under the weight function
W(x) has no zeros or infinities inside the interval, though it may have zeros or infinities at the end points.
W(x) gives a finite inner product to any polynomials.
W(x) can be made to be greater than 0 in the interval. (Negate the entire differential equation if necessary so that Q(x) > 0 inside the interval.)
Because of the constant of integration, the quantity R(x) is determined only up to an arbitrary positive multiplicative constant. It will be used only in homogeneous differential equations
(where this doesn't matter) and in the definition of the weight function (which can also be
indeterminate.) The tables below will give the "official" values of R(x) and W(x).
Rodrigues' formula
Under the assumptions of the preceding section,
Pn(x) is proportional to (1/W(x)) (d/dx)^n [W(x) Q(x)^n].
This is known as Rodrigues' formula, after Olinde Rodrigues. It is often written
where the numbers en depend on the standardization. The standard values of en will be given in the tables below.
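As a small symbolic check (not part of the article), the sketch below evaluates Rodrigues' formula in its familiar Legendre form, P_n(x) = 1/(2^n n!) (d/dx)^n (x^2 − 1)^n, which is the W(x) = 1 special case of the general formula above up to the choice of standardization constant:

```python
import sympy as sp

x = sp.symbols('x')

def legendre_rodrigues(n):
    """Legendre polynomial from Rodrigues' formula:
    P_n(x) = 1/(2^n n!) * d^n/dx^n (x^2 - 1)^n."""
    return sp.simplify(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

for n in range(4):
    print(n, sp.expand(legendre_rodrigues(n)))
# 0  1
# 1  x
# 2  3*x**2/2 - 1/2
# 3  5*x**3/2 - 3*x/2
```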
The numbers λn
Under the assumptions of the preceding section, we have
(Since Q is quadratic and L is linear, Q'' and L' are constants, so these are just numbers.)
Second form for the differential equation
Let
Then
Now multiply the differential equation
by R/Q, getting
or
This is the standard Sturm–Liouville form for the equation.
Third form for the differential equation
Let
Then
Now multiply the differential equation
by S/Q, getting
or
But , so
or, letting u = Sy,
Formulas involving derivatives
Under the assumptions of the preceding section, let P denote the r-th derivative of Pn.
(We put the "r" in brackets to avoid confusion with an exponent.)
P is a polynomial of degree n − r. Then we have the following:
(orthogonality) For fixed r, the polynomial sequence P, P, P, ... are orthogonal, weighted by .
(generalized Rodrigues' formula) P is proportional to
(differential equation) P is a solution of , where λr is the same function as λn, that is,
(differential equation, second form) P is a solution of
There are also some mixed recurrences. In each of these, the numbers a, b, and c depend on n
and r, and are unrelated in the various formulas.
There are an enormous number of other formulas involving orthogonal polynomials
in various ways. Here is a tiny sample of them, relating to the Chebyshev,
associated Laguerre, and Hermite polynomials:
Orthogonality
The differential equation for a particular λ may be written (omitting explicit dependence on x)
multiplying by yields
and reversing the subscripts yields
subtracting and integrating:
but it can be seen that
so that:
If the polynomials f are such that the term on the left is zero, and for , then the orthogonality relationship will hold:
for .
Derivation from differential equation
All of the polynomial sequences arising from the differential equation above are equivalent, under scaling and/or shifting of the domain, and standardizing of the polynomials, to more restricted classes. Those restricted classes are exactly "classical orthogonal polynomials".
Every Jacobi-like polynomial sequence can have its domain shifted and/or scaled so that its interval of orthogonality is [−1, 1], and has Q = 1 − x2. They can then be standardized into the Jacobi polynomials . There are several important subclasses of these: Gegenbauer, Legendre, and two types of Chebyshev.
Every Laguerre-like polynomial sequence can have its domain shifted, scaled, and/or reflected so that its interval of orthogonality is , and has Q = x. They can then be standardized into the Associated Laguerre polynomials . The plain Laguerre polynomials are a subclass of these.
Every Hermite-like polynomial sequence can have its domain shifted and/or scaled so that its interval of orthogonality is , and has Q = 1 and L(0) = 0. They can then be standardized into the Hermite polynomials .
Because all polynomial sequences arising from a differential equation in the manner
described above are trivially equivalent to the classical polynomials, the actual classical
polynomials are always used.
Jacobi polynomial
The Jacobi-like polynomials, once they have had their domain shifted and scaled so that
the interval of orthogonality is [−1, 1], still have two parameters to be determined.
They are and in the Jacobi polynomials,
written . We have and
.
Both and are required to be greater than −1.
(This puts the root of L inside the interval of orthogonality.)
When and are not equal, these polynomials
are not symmetrical about x = 0.
The differential equation
is Jacobi's equation.
For further details, see Jacobi polynomials.
Gegenbauer polynomials
When one sets the parameters and in the Jacobi polynomials equal to each other, one obtains the Gegenbauer or ultraspherical polynomials. They are written , and defined as
We have and
.
The parameter is required to be greater than −1/2.
(Incidentally, the standardization given in the table below would make no sense for α = 0 and n ≠ 0, because it would set the polynomials to zero. In that case, the accepted standardization sets instead of the value given in the table.)
Ignoring the above considerations, the parameter is closely related to the derivatives of :
or, more generally:
All the other classical Jacobi-like polynomials (Legendre, etc.) are special cases of the Gegenbauer polynomials, obtained by choosing a value of and choosing a standardization.
For further details, see Gegenbauer polynomials.
Legendre polynomials
The differential equation is
This is Legendre's equation.
The second form of the differential equation is:
The recurrence relation is
A mixed recurrence is
Rodrigues' formula is
For further details, see Legendre polynomials.
Associated Legendre polynomials
The Associated Legendre polynomials, denoted
where ℓ and m are integers with 0 ≤ m ≤ ℓ, are defined as
The m in parentheses (to avoid confusion with an exponent) is a parameter. The m in brackets denotes the m-th derivative of the Legendre polynomial.
These "polynomials" are misnamed—they are not polynomials when m is odd.
They have a recurrence relation:
For fixed m, the sequence are orthogonal over [−1, 1], with weight 1.
For given m, are the solutions of
Chebyshev polynomials
The differential equation is
This is Chebyshev's equation.
The recurrence relation is
Rodrigues' formula is
These polynomials have the property that, in the interval of orthogonality,
(To prove it, use the recurrence formula.)
This means that all their local minima and maxima have values of −1 and +1, that is, the polynomials are "level". Because of this, expansion of functions in terms of Chebyshev polynomials is sometimes used for polynomial approximations in computer math libraries.
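A brief numerical sketch of that use follows (illustrative only; the target function exp(x) and the degree are arbitrary choices, and the Chebyshev series class from numpy is used for convenience):

```python
import numpy as np
from numpy.polynomial import chebyshev

# Approximate a smooth function on [-1, 1] by a low-degree Chebyshev series,
# the kind of expansion used for polynomial approximations in math libraries.
x = np.linspace(-1.0, 1.0, 201)
f = np.exp(x)

fit = chebyshev.Chebyshev.fit(x, f, deg=5)
print("max |error| of the degree-5 Chebyshev fit:", np.max(np.abs(fit(x) - f)))
print("series coefficients:", np.round(fit.coef, 6))
```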
Some authors use versions of these polynomials that have been shifted so that the interval of orthogonality is [0, 1] or [−2, 2].
There are also Chebyshev polynomials of the second kind, denoted
We have:
For further details, including the expressions for the first few
polynomials, see Chebyshev polynomials.
Laguerre polynomials
The most general Laguerre-like polynomials, after the domain has been shifted and scaled, are the Associated Laguerre polynomials (also called generalized Laguerre polynomials), denoted . There is a parameter , which can be any real number strictly greater than −1. The parameter is put in parentheses to avoid confusion with an exponent. The plain Laguerre polynomials are simply the version of these:
The differential equation is
This is Laguerre's equation.
The second form of the differential equation is
The recurrence relation is
Rodrigues' formula is
The parameter is closely related to the derivatives of :
or, more generally:
Laguerre's equation can be manipulated into a form that is more useful in applications:
is a solution of
This can be further manipulated. When is an integer, and :
is a solution of
The solution is often expressed in terms of derivatives instead of associated Laguerre polynomials:
This equation arises in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom.
Physicists often use a definition for the Laguerre polynomials that is larger, by a factor of n!, than the definition used here.
For further details, including the expressions for the first few polynomials, see Laguerre polynomials.
Hermite polynomials
The differential equation is
This is Hermite's equation.
The second form of the differential equation is
The third form is
The recurrence relation is
Rodrigues' formula is
The first few Hermite polynomials are
One can define the associated Hermite functions
Because the multiplier is proportional to the square root of the weight function, these functions
are orthogonal over with no weight function.
The third form of the differential equation above, for the associated Hermite functions, is
The associated Hermite functions arise in many areas of mathematics and physics.
In quantum mechanics, they are the solutions of Schrödinger's equation for the harmonic oscillator.
They are also eigenfunctions (with eigenvalue (−i)^n) of the continuous Fourier transform.
Many authors, particularly probabilists, use an alternate definition of the Hermite polynomials, with a weight function of exp(−x²/2) instead of exp(−x²). If the notation He is used for these Hermite polynomials, and H for those above, then these may be characterized by
For further details, see Hermite polynomials.
Characterizations of classical orthogonal polynomials
There are several conditions that single out the classical orthogonal polynomials from the others.
The first condition was found by Sonine (and later by Hahn), who showed that (up to linear changes of variable) the classical orthogonal polynomials are the only ones such that their derivatives are also orthogonal polynomials.
Bochner characterized classical orthogonal polynomials in terms of their recurrence relations.
Tricomi characterized classical orthogonal polynomials as those that have a certain analogue of the Rodrigues formula.
Table of classical orthogonal polynomials
The following table summarises the properties of the classical orthogonal polynomials.
See also
Appell sequence
Askey scheme of hypergeometric orthogonal polynomials
Polynomial sequences of binomial type
Biorthogonal polynomials
Generalized Fourier series
Secondary measure
Sheffer sequence
Umbral calculus
Notes
References
Articles containing proofs
Orthogonal polynomials
Special hypergeometric functions | Classical orthogonal polynomials | [
"Mathematics"
] | 3,116 | [
"Articles containing proofs"
] |
336,574 | https://en.wikipedia.org/wiki/Saunders%20Mac%20Lane | Saunders Mac Lane (August 4, 1909 – April 14, 2005), born Leslie Saunders MacLane, was an American mathematician who co-founded category theory with Samuel Eilenberg.
Early life and education
Mac Lane was born in Norwich, Connecticut, near where his family lived in Taftville. He was christened "Leslie Saunders MacLane", but "Leslie" fell into disuse because his parents, Donald MacLane and Winifred Saunders, came to dislike it. He began inserting a space into his surname because his first wife found it difficult to type the name without a space. He was the eldest of three brothers; one of his brothers, Gerald MacLane, also became a mathematics professor at Rice University and Purdue University. Another sister died as a baby. His father and grandfather were both ministers; his grandfather had been a Presbyterian, but was kicked out of the church for believing in evolution, and his father was a Congregationalist. His mother, Winifred, studied at Mount Holyoke College and taught English, Latin, and mathematics.
In high school, Mac Lane's favorite subject was chemistry. While in high school, his father died, and he came under his grandfather's care. His half-uncle, a lawyer, was determined to send him to Yale University, where many of his relatives had been educated, and paid his way there beginning in 1926. As a freshman, he became disillusioned with chemistry. His mathematics instructor, Lester S. Hill, coached him for a local mathematics competition which he won, setting the direction for his future work. He went on to study mathematics and physics as a double major, taking courses from Jesse Beams, Ernest William Brown, Ernest Lawrence, F. S. C. Northrop, and Øystein Ore, among others. He graduated from Yale with a B.A. in 1930. During this period, he published his first scientific paper, in physics and co-authored with Irving Langmuir.
In 1929, at a party of Yale football supporters in Montclair, New Jersey, Mac Lane (there to be presented with a prize for having the best grade point average yet recorded at Yale) had met Robert Maynard Hutchins, the new president of the University of Chicago, who encouraged him to go there for his graduate studies and soon afterwards offered him a scholarship. Mac Lane neglected to actually apply to the program, but showed up and was admitted anyway. At Chicago, the subjects he studied included set theory with E. H. Moore, number theory with Leonard Eugene Dickson, the calculus of variations with Gilbert Ames Bliss, and logic with Mortimer J. Adler.
In 1931, having earned his master's degree and feeling restless at Chicago, he earned a fellowship from the Institute of International Education and became one of the last Americans to study at the University of Göttingen prior to its decline under the Nazis. His greatest influences there were Paul Bernays and Hermann Weyl. By the time he finished his doctorate in 1934, Bernays had been forced to leave because he was Jewish, and Weyl became his main examiner. At Göttingen, Mac Lane also studied with Gustav Herglotz and Emmy Noether. Within days of finishing his degree, he married Dorothy Jones, from Chicago, and soon returned to the U.S.
Career
From 1934 through 1938, Mac Lane held short-term appointments at Yale University, Harvard University, Cornell University, and the University of Chicago. He then held a tenure track appointment at Harvard from 1938 to 1947. In 1941, while giving a series of visiting lectures at the University of Michigan, he met Samuel Eilenberg and began what would become a fruitful collaboration on the interplay between algebra and topology. In 1944 and 1945, he directed Columbia University's Applied Mathematics Group, which was involved in the war effort as a contractor for the Applied Mathematics Panel; the mathematics he worked on in this group concerned differential equations for fire-control systems.
In 1947, he accepted an offer to return to Chicago, where (in part because of the university's involvement in the Manhattan Project, and in part because of the administrative efforts of Marshall Stone) many other famous mathematicians and physicists had also recently moved. He traveled as a Guggenheim Fellow to ETH Zurich for the 1947–1948 term, where he worked with Heinz Hopf. Mac Lane succeeded Stone as department chair in 1952, and served for six years.
He was vice president of the National Academy of Sciences and the American Philosophical Society, and president of the American Mathematical Society. While presiding over the Mathematical Association of America in the 1950s, he initiated its activities aimed at improving the teaching of modern mathematics. He was a member of the National Science Board, 1974–1980, advising the American government. In 1976, he led a delegation of mathematicians to China to study the conditions affecting mathematics there. Mac Lane was elected to the National Academy of Sciences in 1949, and received the National Medal of Science in 1989.
Contributions
After a thesis in mathematical logic, Mac Lane's early work was in field theory and valuation theory. He wrote on valuation rings and Witt vectors, and separability in infinite field extensions. He started writing on group extensions in 1942, and in 1943 began his research on what are now called Eilenberg–MacLane spaces K(G,n), having a single non-trivial homotopy group G in dimension n. This work opened the way to group cohomology in general.
After introducing, via the Eilenberg–Steenrod axioms, the abstract approach to homology theory, he and Eilenberg originated category theory in 1945. He is especially known for his work on coherence theorems. A recurring feature of category theory, abstract algebra, and of some other mathematics as well, is the use of diagrams, consisting of arrows (morphisms) linking objects, such as products and coproducts. According to McLarty (2005), this diagrammatic approach to contemporary mathematics largely stems from Mac Lane (1948), who also coined the term Yoneda lemma for a lemma which is an essential background to many central concepts of category theory and which was discovered by Nobuo Yoneda.
Mac Lane had an exemplary devotion to writing approachable texts, starting with his very influential A Survey of Modern Algebra, coauthored in 1941 with Garrett Birkhoff. From then on, it was possible to teach elementary modern algebra to undergraduates using an English text. His Categories for the Working Mathematician remains the definitive introduction to category theory.
Selected works
1997 (1941). A Survey of Modern Algebra (with Garrett Birkhoff). A K Peters.
1948, "Groups, categories and duality," Proceedings of the Nat. Acad. of Sciences of the USA 34: 263–67.
1963.
1995 (1963). Homology, Springer (Classics in Mathematics) (Originally, Band 114 of Die Grundlehren Der Mathematischen Wissenschaften in Einzeldarstellungen.) AMS review by David Buchsbaum.
1999 (1967). Algebra (with Garrett Birkhoff). Chelsea.
1998 (1972). Categories for the Working Mathematician, Springer (Graduate Texts in Mathematics)
1986. Mathematics, Form and Function. Springer-Verlag.
1992. Sheaves in Geometry and Logic: A First Introduction to Topos Theory (with Ieke Moerdijk).
1995.
2005. Saunders Mac Lane: A Mathematical Autobiography. A K Peters.
See also
Foundations of geometry
PROP (category theory)
SPQR tree
Notes
References
(e-book: ).
Biographical references
. With selected bibliography emphasizing Mac Lane's philosophical writings.
External links
Obituary press release from the University of Chicago.
Photographs of Mac Lane , 1984–1999.
Kutateladze S.S., Saunders Mac Lane, the Knight of Mathematics
1909 births
2005 deaths
Mathematicians from Connecticut
People from Norwich, Connecticut
20th-century American mathematicians
21st-century American mathematicians
University of Chicago alumni
University of Göttingen alumni
Yale University alumni
American algebraists
Category theorists
Columbia University faculty
Cornell University faculty
Harvard University Department of Mathematics faculty
University of Chicago faculty
National Medal of Science laureates
Members of the American Philosophical Society
Members of the United States National Academy of Sciences
Presidents of the American Mathematical Society
Presidents of the Mathematical Association of America
Proceedings of the National Academy of Sciences of the United States of America editors | Saunders Mac Lane | [
"Mathematics"
] | 1,706 | [
"Category theorists",
"Mathematical structures",
"Category theory"
] |
336,630 | https://en.wikipedia.org/wiki/Cetacean%20intelligence | Cetacean intelligence is the overall intelligence and derived cognitive ability of aquatic mammals belonging in the infraorder Cetacea (cetaceans), including baleen whales, porpoises, and dolphins. In 2014, a study found for first time that the long-finned pilot whale has more neocortical neurons than any other mammal, including humans, examined to date.
Brain
Size
Brain size was previously considered a major indicator of the intelligence of an animal. However, many other factors also affect intelligence, and recent discoveries concerning bird intelligence have called into question the influence of brain size. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that in general, mammalian brain size scales at approximately the 2/3 or 3/4 exponent of body mass. Comparison of actual brain size with the size expected from allometry provides an encephalization quotient (EQ) that can be used as a more accurate indicator of an animal's intelligence.
Sperm whales (Physeter macrocephalus) have the largest known brain mass of any extant animal, averaging 7.8 kg in mature males.
Orcas (Orcinus orca) have the second largest known brain mass of any extant animal. (5.4-6.8 kg)
Bottlenose dolphins (Tursiops truncatus) have an absolute brain mass of 1,500–1,700 grams. This is slightly greater than that of humans (1,300–1,400 grams) and about four times that of chimpanzees (400 grams).
The brain to body mass ratio (not the encephalization quotient) in some members of the odontocete superfamily Delphinoidea (dolphins, porpoises, belugas, and narwhals) is greater than modern humans, and greater than all other mammals (there is debate whether that of the treeshrew might be second in place of humans). In some dolphins, it is less than half that of humans: 0.9% versus 2.1%. However, this comparison is complicated by the large amount of insulating blubber Delphinoidea brains have (15-20% of mass).
The encephalization quotient varies widely between species. The La Plata dolphin has an EQ of approximately 1.67; the Ganges river dolphin of 1.55; the orca of 2.57; the bottlenose dolphin of 4.14; and the tucuxi dolphin of 4.56; In comparison to other animals, elephants have an EQ ranging from 1.13 to 2.36; chimpanzees of approximately 2.49; dogs of 1.17; cats of 1.00; and mice of 0.50.
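As a rough illustration of how such quotients are computed (not part of the article), one widely used allometric baseline, due to Jerison, divides the actual brain mass by 0.12 times the 2/3 power of body mass in grams. The constant and the example masses below are approximate, assumed values, so the printed quotients are only indicative and will not exactly reproduce the figures quoted above.

```python
# Rough illustration of an encephalization quotient (EQ) using Jerison's
# allometric baseline: expected brain mass = 0.12 * (body mass in g)**(2/3).
# The constant and the example masses are approximate, illustrative values.
def encephalization_quotient(brain_g, body_g):
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

examples = {
    "human":              (1_350, 65_000),
    "bottlenose dolphin": (1_600, 200_000),
    "chimpanzee":         (400, 45_000),
}
for species, (brain_g, body_g) in examples.items():
    print(f"{species:>18}: EQ ~ {encephalization_quotient(brain_g, body_g):.2f}")
```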
The majority of mammals are born with a brain close to 90% of the adult brain weight. Humans are born with 28% of the adult brain weight, chimpanzees with 54%, bottlenose dolphins with 42.5%, and elephants with 35%.
Spindle cells (neurons without extensive branching) have been discovered in the brains of the humpback whale, fin whale, sperm whale, orca, bottlenose dolphins, Risso's dolphins, and beluga whales. Humans, great apes, and elephants, species all well known for their high intelligence, are the only others known to have spindle cells. Spindle neurons appear to play a central role in the development of intelligent behavior. Such a discovery may suggest a convergent evolution of these species.
Structure
Elephant brains also show a complexity similar to dolphin brains, and are also more convoluted than that of humans, and with a cortex thicker than that of cetaceans. It is generally agreed that the growth of the neocortex, both absolutely and relative to the rest of the brain, during human evolution, has been responsible for the evolution of human intelligence, however defined. While a complex neocortex usually indicates high intelligence, there are exceptions. For example, the echidna has a highly developed brain, yet is not widely considered very intelligent, though preliminary investigations into their intelligence suggest that echidnas are capable of more advanced cognitive tasks than were previously assumed.
In 2014, it was shown for the first time that a species of dolphin, the long-finned pilot whale, has more neocortical neurons than any mammal studied to date including humans.
Unlike terrestrial mammals, dolphin brains contain a paralimbic lobe, which may be used for sensory processing. It has also been suggested that, as in humans, the paralimbic region of the brain is responsible for a dolphin's self-control, motivation, and emotions. The dolphin is a voluntary breather, even during sleep, with the result that veterinary anaesthesia of dolphins would result in asphyxiation. Ridgway reports that EEGs show alternating hemispheric asymmetry in slow waves during sleep, with occasional sleep-like waves from both hemispheres. This result has been interpreted to mean that dolphins sleep with only one hemisphere of their brain at a time, possibly to control their voluntary respiration system or to remain vigilant for predators.
The dolphin's greater dependence on sound processing is evident in the structure of its brain: its neural area devoted to visual imaging is only about one-tenth that of the human brain, while the area devoted to acoustical imaging is about 10 times as large. Sensory experiments suggest a great degree of cross-modal integration in the processing of shapes between echolocative and visual areas of the brain.
Brain evolution
The evolution of encephalization in cetaceans is similar to that in primates. Though the general trend in their evolutionary history increased brain mass, body mass, and encephalization quotient, a few lineages actually underwent decephalization, although the selective pressures that caused this are still under debate. Among cetaceans, Odontoceti tend to have higher encephalization quotients than Mysticeti, which is at least partially due to the fact that Mysticeti have much larger body masses without a compensating increase in brain mass. As far as which selective pressures drove the encephalization (or decephalization) of cetacean brains, current research espouses a few main theories. The most promising suggests that cetacean brain size and complexity increased to support complex social relations. It could also have been driven by changes in diet, the emergence of echolocation, or an increase in territorial range.
Problem-solving ability
Some research shows that dolphins, among other animals, understand concepts such as numerical continuity, though not necessarily counting. Dolphins may be able to discriminate between numbers.
Several researchers observing animals' ability to learn set formation tend to rank dolphins at about the level of elephants in intelligence, and show that dolphins do not surpass other highly intelligent animals in problem solving. A 1982 survey of other studies showed that in the learning of "set formation", dolphins rank highly, but not as high as some other animals.
Behavior
Pod characteristics
Dolphin group sizes vary quite dramatically. River dolphins usually congregate in fairly small groups from 6 to 12 in number or, in some species, singly or in pairs. The individuals in these small groups know and recognize one another. Other species such as the oceanic pantropical spotted dolphin, common dolphin and spinner dolphin travel in large groups of hundreds of individuals. It is unknown whether every member of the group is acquainted with every other. However, large packs can act as a single cohesive unit: observations show that if an unexpected disturbance, such as a shark approach, occurs from the flank or from beneath the group, the group moves in near-unison to avoid the threat. This means that the dolphins must be aware not only of their near neighbors but also of other individuals nearby in a similar manner to which humans perform "audience waves". This is achieved by sight, and possibly also echolocation. One hypothesis proposed by Jerison (1986) is that members of a pod of dolphins are able to share echolocation results with each other to create a better understanding of their surroundings.
Southern resident orcas in British Columbia, Canada, and Washington, United States, live in extended family groups. The basis of the southern resident orca social structure is the matriline, consisting of a matriarch and her descendants of all generations. A number of matrilines form a southern resident orca pod, which is ongoing and extremely stable in membership, and has its own dialect which is stable over time. A southern resident calf is born into the pod of their mother and remains in it for life.
A cetacean dialect is a socially–determined vocal tradition. The complex vocal communication systems of orcas correspond with their large brains and complex social structure. The three southern resident orca pods share some calls with one another, and also have unique calls. Discussing the function of resident orca dialects, researchers John Ford, Graeme Ellis and Ken Balcomb wrote, "It may well be that dialects are used by the whales as acoustic indicators of group identity and membership, which might serve to preserve the integrity and cohesiveness of the social unit." Resident orcas form closed societies with no emigration or dispersal of individuals, and no gene flow with other orca populations. There is evidence that other species of dolphins may also have dialects.
In bottlenose dolphin studies by Wells in Sarasota, Florida, and Smolker in Shark Bay, Australia, females of a community are all linked either directly or through a mutual association in an overall social structure known as fission-fusion. Groups of the strongest association are known as "bands", and their composition can remain stable over years. There is some genetic evidence that band members may be related, but these bands are not necessarily limited to a single matrilineal line. There is no evidence that bands compete with each other. In the same research areas, as well as in Moray Firth, Scotland, males form strong associations of two to three individuals, with a coefficient of association between 70 and 100. These groups of males are known as "alliances", and members often display synchronous behaviors such as respiration, jumping, and breaching. Alliance composition is stable on the order of tens of years, and may provide a benefit for the acquisition of females for mating.
The complex social strategies of marine mammals such as bottlenose dolphins, "provide interesting parallels" with the social strategies of elephants and chimpanzees.
Complex play
Dolphins are known to engage in complex play behavior, which includes such things as producing stable underwater toroidal air-core vortex rings or "bubble rings". There are two main methods of bubble ring production: rapid puffing of a burst of air into the water and allowing it to rise to the surface, forming a ring; or swimming repeatedly in a circle and then stopping to inject air into the helical vortex currents thus formed. The dolphin will often then examine its creation visually and with sonar. They also appear to enjoy biting the vortex-rings they have created, so that they burst into many separate normal bubbles and then rise quickly to the surface. Certain whales are also known to produce bubble rings or bubble nets for the purpose of foraging. Many dolphin species also play by riding in waves, whether natural waves near the shoreline in a method akin to human "body-surfing", or within the waves induced by the bow of a moving boat in a behavior known as bow riding.
Cross-species cooperation
There have been instances in captivity of various species of dolphin and porpoise helping and interacting across species, including helping beached whales. Dolphins have also been known to aid human swimmers in need, and in at least one instance a distressed dolphin approached human divers seeking assistance.
Creative behavior
Aside from having exhibited the ability to learn complex tricks, dolphins have also demonstrated the ability to produce creative responses. This was studied by Karen Pryor during the mid-1960s at Sea Life Park in Hawaii, and was published as The Creative Porpoise: Training for Novel Behavior in 1969. The two test subjects were two rough-toothed dolphins (Steno bredanensis), named Malia (a regular show performer at Sea Life Park) and Hou (a research subject at adjacent Oceanic Institute). The experiment tested when and whether the dolphins would identify that they were being rewarded (with fish) for originality in behavior and was very successful. However, since only two dolphins were involved in the experiment, the study is difficult to generalize.
Starting with the dolphin named Malia, the method of the experiment was to choose a particular behavior exhibited by her each day and reward each display of that behavior throughout the day's session. At the start of each new day Malia would present the prior day's behavior, but only when a new behavior was exhibited was a reward given. All behaviors exhibited were, at least for a time, known behaviors of dolphins. After approximately two weeks Malia apparently exhausted "normal" behaviors and began to repeat performances. This was not rewarded.
According to Pryor, the dolphin became almost despondent. However, at the sixteenth session without novel behavior, the researchers were presented with a flip they had never seen before. This was reinforced. As related by Pryor, after the new display: "instead of offering that again she offered a tail swipe we'd never seen; we reinforced that. She began offering us all kinds of behavior that we hadn't seen in such a mad flurry that finally we could hardly choose what to throw fish at".
The second test subject, Hou, took thirty-three sessions to reach the same stage. On each occasion the experiment was stopped when the variability of dolphin behavior became too complex to make further positive reinforcement meaningful.
The same experiment was repeated with humans, and it took the volunteers about the same length of time to figure out what was being asked of them. After an initial period of frustration or anger, the humans realised they were being rewarded for novel behavior. In dolphins this realisation produced excitement and more and more novel behaviors; in humans it mostly just produced relief.
Captive orcas have displayed responses indicating they get bored with activities. For instance, when Paul Spong worked with the orca Skana, he researched her visual skills. However, after performing favorably in the 72 trials per day, Skana suddenly began consistently getting every answer wrong. Spong concluded that a few fish were not enough motivation. He began playing music, which seemed to provide Skana with much more motivation.
At the Institute for Marine Mammal Studies in Mississippi, it has also been observed that the resident dolphins seem to show an awareness of the future. The dolphins are trained to keep their own tank clean by retrieving rubbish and bringing it to a keeper, to be rewarded with a fish. However, one dolphin, named Kelly, has apparently learned a way to get more fish, by hoarding the rubbish under a rock at the bottom of the pool and bringing it up one small piece at a time.
Use of tools
Scientists have observed wild bottlenose dolphins in Shark Bay, Western Australia using a basic tool. When searching for food on the sea floor, many of these dolphins were seen tearing off pieces of sponge and wrapping them around their rostra, presumably to prevent abrasions and facilitate digging.
Communication
Whales use a variety of sounds for their communication and sensation. Odontocete (toothed whale) vocal production is classified in three categories: clicks, whistles, and pulsed calls:
Clicks are very brief vocal sounds produced in rapid series for echolocation. Echoes of the clicks contain sound data about the surroundings transmitted through the ears to the brain, which is able to resolve echoes into information.
Whistles, narrow-band frequency-modulated (FM) signals, are used for communicative purposes, such as contact calls, or the signature whistle of bottlenose dolphins. Whistles are the primary social vocalization among the majority of Delphinidae species.
Pulsed calls are significant for a few cetacean species, such as the narwhal, and the orca. These calls have distinct tonal qualities and a complex harmonic structure. Typically 0.5–1.5 s in duration, they are the primary social vocalization of orcas. Researchers John Ford, Graeme Ellis, and Ken Balcomb wrote, "By varying the timbre and frequency structure of the calls, the whales can generate a variety of signals…Most calls contain sudden shifts or rapid sweeps in pitch, which give them distinctive qualities recognizable over distance and background noise."
There is strong evidence that some specific whistles, called signature whistles, are used by dolphins to identify and/or call each other; dolphins have been observed emitting both other specimens' signature whistles, and their own. A unique signature whistle develops quite early in a dolphin's life, and it appears to be created in imitation of the signature whistle of the dolphin's mother. Imitation of the signature whistle seems to occur only among the mother and its young, and among befriended adult males.
Xitco reported the ability of dolphins to eavesdrop passively on the active echolocative inspection of an object by another dolphin. Herman calls this effect the "acoustic flashlight" hypothesis, and may be related to findings by both Herman and Xitco on the comprehension of variations on the pointing gesture, including human pointing, dolphin postural pointing, and human gaze, in the sense of a redirection of another individual's attention, an ability which may require theory of mind.
The environment where dolphins live makes experiments much more expensive and complicated than for many other species; additionally, the fact that cetaceans can emit and hear sounds (which are believed to be their main means of communication) in a range of frequencies much wider than humans can means that sophisticated equipment, which was scarcely available in the past, is needed to record and analyse them. For example, clicks can contain significant energy in frequencies greater than 110 kHz (for comparison, it is unusual for a human to be able to hear sounds above 20 kHz), requiring that recording equipment have sampling rates of at least 220 kHz, that is, at least twice the highest frequency of interest per the Nyquist criterion; MHz-capable hardware is often used.
In addition to the acoustic communication channel, the visual modality is also significant. The contrasting pigmentation of the body may be used, for example with "flashes" of the hypopigmented ventral area of some species, as can the production of bubble streams during signature whistling. Also, much of the synchronous and cooperative behaviors, as described in the Behavior section of this entry, as well as cooperative foraging methods, likely are managed at least partly by visual means.
Experiments have shown that they can learn human sign language and can use whistles for 2-way human–animal communication. Phoenix and Akeakamai, bottlenose dolphins, understood individual words and basic sentences like "touch the frisbee with your tail and then jump over it". Phoenix learned whistles, and Akeakamai learned sign language. Both dolphins understood the significance of the ordering of tasks in a sentence.
A study conducted by Jason Bruck of the University of Chicago showed that bottlenose dolphins can remember whistles of other dolphins they had lived with after 20 years of separation. Each dolphin has a unique whistle that functions like a name, allowing the marine mammals to keep close social bonds.
The new research shows that dolphins have the longest memory yet known in any species other than humans.
Self-awareness
Self-awareness, though not well defined scientifically, is believed to be the precursor to more advanced processes like meta-cognitive reasoning (thinking about thinking) that are typical of humans. Scientific research in this field has suggested that bottlenose dolphins, alongside elephants and great apes, possess self-awareness.
The most widely used test for self-awareness in animals is the mirror test, developed by Gordon Gallup in the 1970s, in which a temporary dye is placed on an animal's body, and the animal is then presented with a mirror.
In 1995, Marten and Psarakos used television to test dolphin self-awareness. They showed dolphins real-time footage of themselves, recorded footage, and another dolphin. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been repeated since then, dolphins have since passed the mirror test. However, some researchers have argued that evidence for self-awareness has not been convincingly demonstrated.
See also
Animal cognition
Animal consciousness
Morgan's Canon
John C. Lilly – pioneer researcher in human–dolphin communication.
Louis Herman – scientist in dolphin cognition and sensory abilities
Animal language
Vocal learning
Spindle neuron
Military dolphin
U.S. Navy Marine Mammal Program
So Long, and Thanks for All the Fish – fiction novel which derives its title from the idea of dolphins leaving the Earth.
Uplift Universe – a series of novels involving genetically-enhanced ("uplifted") intelligent dolphins
Pig intelligence
References
Further reading
Dolphin Communication and Cognition: Past, Present, and Future, edited by Denise L. Herzing and Christine M. Johnson, 2015, MIT Press
External links
Brain facts and figures.
Neuroanatomy of the Common Dolphin (Delphinus delphis) as Revealed by Magnetic Resonance Imaging (MRI).
"The Dolphin Brain Atlas" – A collection of stained brain sections and MRI images.
Animal intelligence
Cetaceans
Mammal behavior | Cetacean intelligence | [
"Biology"
] | 4,422 | [
"Behavior by type of animal",
"Behavior",
"Mammal behavior"
] |
336,661 | https://en.wikipedia.org/wiki/Glucuronolactone | Glucuronolactone or Glucurolactone (INN) is a naturally occurring substance that is an important structural component of nearly all connective tissues. It is sometimes used in energy drinks. Unfounded claims that glucuronolactone can be used to reduce "brain fog" are based on research conducted on energy drinks that contain other active ingredients that have been shown to improve cognitive function, such as caffeine. Glucuronolactone is also found in many plant gums.
Physical and chemical properties
Glucuronolactone is a white solid odorless compound, soluble in hot and cold water. Its melting point ranges from 176 to 178 °C. The compound can exist in a monocyclic aldehyde form or in a bicyclic hemiacetal (lactol) form.
History
It is unknown whether glucuronolactone is safe for human consumption, owing to a lack of proper human or animal trials; however, it likely has limited effects on the human body. Furthermore, although research on isolated supplements of glucuronolactone is limited, no warnings appear on the Food and Drug Administration website regarding its potential to cause brain tumors or other maladies.
Uses
Glucuronolactone is an ingredient used in some energy drinks, although levels of glucuronolactone in energy drinks can far exceed those found in the rest of the diet. Research into glucuronolactone is too limited to assert claims about its safety. The European Food Safety Authority (EFSA) has concluded that it is unlikely that glucurono-γ-lactone would have any interaction with caffeine, taurine, alcohol or the effects of exercise. The Panel also concluded, based on the data available, that additive interactions between taurine and caffeine on diuretic effects are unlikely.
According to The Merck Index, glucuronolactone is used as a detoxicant.
Glucuronolactone is also metabolized to glucaric acid, xylitol, and L-xylulose, and humans may also be able to use glucuronolactone as a precursor for ascorbic acid synthesis.
Glucuronolactone is approved in China and Japan as an over-the-counter "hepatoprotectant", though there is a conspicuous lack of systematic reviews on this use.
See also
Glucuronic acid
Glucono delta-lactone
International Programme on Chemical Safety
References
Monosaccharides
Gamma-lactones
Tetrahydrofurans | Glucuronolactone | [
"Chemistry"
] | 548 | [
"Carbohydrates",
"Monosaccharides"
] |
336,815 | https://en.wikipedia.org/wiki/List%20of%20conjectures%20by%20Paul%20Erd%C5%91s | The prolific mathematician Paul Erdős and his various collaborators made many famous mathematical conjectures, over a wide field of subjects, and in many cases Erdős offered monetary rewards for solving them.
Unsolved
The Erdős–Gyárfás conjecture on cycles with lengths equal to a power of two in graphs with minimum degree 3.
The Erdős–Hajnal conjecture that in a family of graphs defined by an excluded induced subgraph, every graph has either a large clique or a large independent set.
The Erdős–Mollin–Walsh conjecture on consecutive triples of powerful numbers.
The Erdős–Selfridge conjecture that a covering system with distinct moduli contains at least one even modulus.
The Erdős–Straus conjecture on the Diophantine equation 4/n = 1/x + 1/y + 1/z.
The Erdős conjecture on arithmetic progressions in sequences with divergent sums of reciprocals.
The Erdős–Szekeres conjecture on the number of points needed to ensure that a point set contains a large convex polygon.
The Erdős–Turán conjecture on additive bases of natural numbers.
A conjecture on quickly growing integer sequences with rational reciprocal series.
A conjecture with Norman Oler on circle packing in an equilateral triangle with a number of circles one less than a triangular number.
The minimum overlap problem to estimate the limit of M(n).
A conjecture that the ternary expansion of 2^n contains at least one digit 2 for every n > 8 (a small empirical check of this appears after this list).
The conjecture that the Erdős–Moser equation, 1^k + 2^k + ... + (m − 1)^k = m^k, has no solutions except 1^1 + 2^1 = 3^1.
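As referenced in the ternary-expansion item above, conjectures of this kind are routinely probed by brute-force computation for small cases. The sketch below is a minimal, illustrative check (not a proof, and not any published search code) that no exponent in a small assumed range violates the ternary-digit conjecture.

```python
# Illustrative brute-force check of the ternary-digit conjecture:
# confirm that 2**n written in base 3 contains at least one digit 2
# for every n in a small range with n > 8. A finite check like this
# can only look for counterexamples; it cannot prove the conjecture.

def ternary_digits(m: int) -> list[int]:
    """Base-3 digits of m, least significant digit first."""
    digits = []
    while m:
        m, r = divmod(m, 3)
        digits.append(r)
    return digits

violations = [n for n in range(9, 10_000) if 2 not in ternary_digits(2 ** n)]
print(violations)  # expected to print [] for this range
```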
Solved
The Erdős–Faber–Lovász conjecture on coloring unions of cliques, proved (for all large n) by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus.
The Erdős sumset conjecture on sets, proven by Joel Moreira, Florian Karl Richter, Donald Robertson in 2018. The proof has appeared in "Annals of Mathematics" in March 2019.
The Burr–Erdős conjecture on Ramsey numbers of graphs, proved by Choongbum Lee in 2015.
A conjecture on equitable colorings proven in 1970 by András Hajnal and Endre Szemerédi and now known as the Hajnal–Szemerédi theorem.
A conjecture that would have strengthened the Furstenberg–Sárközy theorem to state that the number of elements in a square-difference-free set of positive integers could only exceed the square root of its largest value by a polylogarithmic factor, disproved by András Sárközy in 1978.
The Erdős–Lovász conjecture on weak/strong delta-systems, proved by Michel Deza in 1974.
The Erdős–Heilbronn conjecture in combinatorial number theory on the number of sums of two sets of residues modulo a prime, proved by Dias da Silva and Hamidoune in 1994.
The Erdős–Graham conjecture in combinatorial number theory on monochromatic Egyptian fraction representations of unity, proved by Ernie Croot in 2000.
The Erdős–Stewart conjecture on the Diophantine equation n! + 1 = p_k^a p_{k+1}^b, solved by Florian Luca in 2001.
The Cameron–Erdős conjecture on sum-free sets of integers, proved by Ben Green and Alexander Sapozhenko in 2003–2004.
The Erdős–Menger conjecture on disjoint paths in infinite graphs, proved by Ron Aharoni and Eli Berger in 2009.
The Erdős distinct distances problem. The correct exponent was proved in 2010 by Larry Guth and Nets Katz, but the correct power of log n is still undetermined.
The Erdős–Rankin conjecture on prime gaps, proved by Ford, Green, Konyagin, and Tao in 2014.
The Erdős discrepancy problem on partial sums of ±1-sequences. Terence Tao announced a solution in September 2015; it was published in 2016.
The Erdős squarefree conjecture that central binomial coefficients C(2n, n) are never squarefree for n > 4 was proved in 1996.
The Erdős primitive set conjecture that the sum of 1/(a log a) over all a in A, for any primitive set A (a set in which no member divides another), attains its maximum at the set of prime numbers, proved by Jared Duker Lichtman in 2022.
The Erdős–Sauer problem about the maximum number of edges an n-vertex graph can have without containing a k-regular subgraph, solved by Oliver Janzer and Benny Sudakov.
See also
List of things named after Paul Erdős
References
External links
Fan Chung, "Open problems of Paul Erdős in graph theory"
Fan Chung, living version of "Open problems of Paul Erdős in graph theory"
Erdos
Paul Erdős
Conjectures
"Mathematics"
] | 989 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
336,820 | https://en.wikipedia.org/wiki/Alvar | An alvar is a biological environment based on a limestone plain with thin or no soil and, as a result, sparse grassland vegetation. Often flooded in the spring, and affected by drought in midsummer, alvars support a distinctive group of prairie-like plants. Most alvars occur either in northern Europe or around the Great Lakes in North America. This stressed habitat supports a community of rare plants and animals, including species more commonly found on prairie grasslands. Lichen and mosses are common species. Trees and bushes are absent or severely stunted.
The primary cause of alvars is the shallow exposed bedrock. Flooding and drought, as noted, add to the stress of the site and prevent many species from growing. Disturbance may also play a role. In Europe, grazing is frequent, while in North America, there is some evidence that fire may also prevent encroachment by forest. The habitat also has strong competition gradients, with better competitors occupying the deeper soil and excluding other species to less productive locations. Crevices in the limestone provide a distinctive habitat which is somewhat protected from grazing, and which may provide habitat for unusual ferns such as Pellaea atropurpurea. Bare rock flats provide areas with extremely low competition that serve as refugia for weak competitors such as the sandwort Minuartia michauxii and Micranthes virginiensis. In a representative set of four Ontario alvars, seven habitat types were described. From deep to shallow soil these were: tall grassy meadows, tall forb-rich meadows, low grassy meadows, low forb-rich meadows, dry grassland, rock margin grassland and bare rock flats.
Alvars comprise a small percentage of the Earth's ecosystems by land extent. Although some 120 exist in the Great Lakes region, in total there are only about left across the entire Great Lakes basin, and many of these have been degraded by agriculture and other human uses. More than half of all remaining alvars occur in Ontario. There are smaller areas in New York, Michigan, Ohio, Wisconsin and Quebec.
In North America, alvars provide habitat for birds such as bobolinks, eastern meadowlarks, upland sandpipers, eastern towhees, brown thrashers and loggerhead shrikes, whose habitat is declining elsewhere. Rare plants include Kalm's lobelia (Lobelia kalmii), Pringle's aster (Symphyotrichum pilosum var. pringlei), juniper sedge (Carex juniperorum), lakeside daisy (Hymenoxys acaulis), ram's-head lady's-slipper (Cypripedium arietinum), and dwarf lake iris (Iris lacustris). Also associated with alvars are rare butterflies and snails. The use of the word "alvar" to refer to this type of environment originated in Sweden. The largest alvar in Europe is located on the Swedish island of Öland. Here the thin soil mantle is only 0.5 to 2.0 centimeters thick in most places and in many areas consists of exposed limestone slabs. The landscape there has been designated a UNESCO World Heritage Site. There are other more local names for similar landforms, such as a pavement barren, although this term is also used for similar landforms based on sandstone. In the United Kingdom the exposed landform is called a limestone pavement and thinly covered limestone is known as calcareous grassland.
European alvar locations
Sweden
Öland – Stora Alvaret – largest alvar extent in Europe
Gotland
Västergötland – several locations on limestone mountain Kinnekulle, smaller fragments on Falbygden, e.g. in Dala and Högstena parishes
Estonia
Alvars are distributed along the whole northern coast of Estonia from approx. the town of Paldiski to Sillamäe, wherever limestone comes to the surface near the seashore (see Baltic Klint), as well as on the islands of the West Estonian archipelago. Estonia used to be home to approximately one third of the world's alvars; however, the total area of alvars has decreased from 43,000 hectares in the 1930s to 12,000 hectares in 2000, and approximately 9,000 hectares in 2010. Estonian alvars are home to 267 species of vascular plants, approximately one fifth of which are protected. There are also 142 species of bryophytes and 263 species of lichens. The Estonian government has committed itself to protect at least 9,800 hectares of the country's alvars as part of the Natura 2000 network. The Loopealse subdistrict of Tallinn is named after alvar.
Vardi Nature Reserve in Rapla County is an Estonian nature reserve especially designated to protect one of the more representative alvar areas of Estonia.
England
Cumbria and North Yorkshire – under protection in the UK Biodiversity Action Plan
Ireland
The Burren, a large alvar in northwest County Clare
Some North American alvar locations
The rare Charitable Research Reserve – Cambridge, Ontario
Lake Erie
Kelley's Island, Ohio – North Shore Alvar State Nature Preserve
Marblehead, Ohio – mostly destroyed by limestone quarrying
Pelee Island, Ontario – Stone Road Alvar Nature Reserve
Lake Huron
Maxton Plains Proposed Natural Area, Drummond Island, Michigan
Belanger Bay Alvar, Manitoulin Island, Ontario
Quarry Bay Nature Reserve, Manitoulin Island, Ontario
Bruce Alvar Nature Reserve, Bruce Peninsula, Ontario
Baptise Harbour Nature Reserve, Bruce Peninsula, Ontario
Misery Bay Provincial Park, Manitoulin Island, Ontario
Lake Michigan
Red Banks Alvar, Red Banks, Brown County, Wisconsin
Lake Ontario
Carden Plain Alvar, City of Kawartha Lakes, Ontario, including Carden Alvar Provincial Park
Chaumont Barrens Preserve, New York
Three Mile Creek Barrens, New York
Burnt Lands Alvar, Almonte, Ontario
Balsam Lake Indian Point Provincial Park, Ontario
Quebec
Quyon
Alvar d'Aylmer
Manitoba
Interlake
See also
References
External links
http://www.epa.gov/ecopage/shore/alvars/
Alkaline soils
Habitats
Landforms
Types of soil
Alpine flora
Gotland
Geography of Gotland County | Alvar | [
"Chemistry"
] | 1,273 | [
"Soil chemistry",
"Alkaline soils"
] |
336,838 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Gy%C3%A1rf%C3%A1s%20conjecture | In graph theory, the unproven Erdős–Gyárfás conjecture, made in 1995 by mathematician Paul Erdős and his collaborator András Gyárfás, states that every graph with minimum degree 3 contains a simple cycle whose length is a power of two. Erdős offered a prize of $100 for proving the conjecture, or $50 for a counterexample; it is one of many conjectures of Erdős.
If the conjecture is false, a counterexample would take the form of a graph with minimum degree three having no power-of-two cycles. It is known through computer searches by Gordon Royle and Klas Markström that any counterexample must have at least 17 vertices, and any cubic counterexample must have at least 30 vertices. Markström's searches found four graphs on 24 vertices in which the only power-of-two cycles have 16 vertices. One of these four graphs is planar; however, the Erdős–Gyárfás conjecture is now known to be true for the special case of 3-connected cubic planar graphs.
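Searches of the kind described above need a way to decide whether a given small graph contains a cycle whose length is a power of two. The following sketch is a naive, exponential-time illustration of that test (the published searches by Royle and Markström use far more sophisticated methods); the adjacency-dictionary representation and the K4 example are assumptions made only for illustration.

```python
# Naive sketch: enumerate the lengths of all simple cycles in a small
# graph (given as an adjacency dict) and check whether any length is a
# power of two. Exponential time, so only usable on very small graphs.

def cycle_lengths(adj: dict[int, set[int]]) -> set[int]:
    lengths = set()

    def extend(start: int, current: int, visited: set[int]) -> None:
        for nxt in adj[current]:
            if nxt == start and len(visited) >= 3:
                lengths.add(len(visited))            # closed a simple cycle
            elif nxt > start and nxt not in visited:
                extend(start, nxt, visited | {nxt})

    for v in adj:                                    # each cycle is found from its smallest vertex
        extend(v, v, {v})
    return lengths

def has_power_of_two_cycle(adj: dict[int, set[int]]) -> bool:
    return any(n & (n - 1) == 0 for n in cycle_lengths(adj))

# Example: the complete graph K4 has minimum degree 3 and cycles of
# lengths 3 and 4, so it contains a power-of-two cycle as required.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(sorted(cycle_lengths(k4)), has_power_of_two_cycle(k4))  # [3, 4] True
```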
Weaker results relating the degree of a graph to unavoidable sets of cycle lengths are known: there is a set S of lengths, with |S| = O(n^0.99), such that every graph with average degree ten or more contains a cycle with its length in S, and every graph whose average degree is exponential in the iterated logarithm of n necessarily contains a cycle whose length is a power of two. The conjecture is also known to be true for planar claw-free graphs and for graphs that avoid large induced stars and satisfy additional constraints on their degrees.
References
External links
Exoo, Geoffrey, Graphs Without Cycles of Specified Lengths
West, Douglas B., Erdős Gyárfás Conjecture on 2-power Cycle Lengths, Open Problems - Graph Theory and Combinatorics
Conjectures
Unsolved problems in graph theory
Gyarfas conjecture | Erdős–Gyárfás conjecture | [
"Mathematics"
] | 410 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures",
"Unsolved problems in graph theory"
] |
336,870 | https://en.wikipedia.org/wiki/Sertindole | Sertindole, sold under the brand name Serdolect among others, is an antipsychotic medication. Sertindole was developed by the Danish pharmaceutical company Lundbeck and marketed under license by Abbott Labs. Like other atypical antipsychotics, it has activity at dopamine and serotonin receptors in the brain. It is used in the treatment of schizophrenia. It is classified chemically as a phenylindole derivative.
Sertindole is not approved for use in the United States and was discontinued in Australia in January 2014.
Medical uses
Sertindole appears effective as an antipsychotic in schizophrenia. In a 2013 comparison of 15 antipsychotic drugs for efficacy in treating schizophrenic symptoms, sertindole was found to be slightly less effective than haloperidol, quetiapine, and aripiprazole; as effective as ziprasidone; approximately as effective as chlorpromazine and asenapine; and slightly more effective than lurasidone and iloperidone.
Adverse effects
Very common (>10% incidence) adverse effects include:
Headache
Ejaculation failure
Insomnia
Dizziness
Common (1–10% incidence) adverse effects include:
Urine that tests positive for red and/or white blood cells
Sedation (causes less sedation than most antipsychotic drugs according to a recent meta-analysis of the efficacy and tolerability of 15 antipsychotic drugs. Causes only slightly [and non-significantly] more sedation than amisulpride and paliperidone)
Ejaculation disorder
Erectile dysfunction
Orthostatic hypotension
Weight gain (which it seems to possess a similar propensity for causing as quetiapine)
Uncommon (0.1–1% incidence) adverse effects include:
Substernal chest pain
Face oedema
Influenza-like illness
Neck rigidity
Pallor
Peripheral vascular disorder
Syncope
Torsades de pointes
Vasodilation
Suicide attempt
Amnesia
Anxiety
Ataxia
Confusion
Incoordination
Libido decreased
Libido increased
Miosis
Nystagmus
Personality disorder
Psychosis
Reflexes decreased
Reflexes increased
Stupor
Suicidal tendency
Urinary retention
Vertigo
Diabetes mellitus
Abnormal stools
Gastritis
Gingivitis
Glossitis
Increased appetite
Mouth ulceration
Rectal disorder
Rectal haemorrhage
Stomatitis
Tongue disorder
Ulcerative stomatitis
Anaemia
Ecchymosis
Hypochromic anaemia
Leukopenia
Hyperglycaemia
Hyperlipemia
Oedema
Bone pain
Myasthenia
Twitching
Bronchitis
Hyperventilation
Pneumonia
Sinusitis
Furunculosis
Herpes simplex
Nail disorder
Psoriasis
Pustular Rash
Skin discolouration
Skin hypertrophy
Skin ulcer
Abnormal vision
Keratoconjunctivitis
Lacrimation disorder
Otitis externa
Pupillary disorder
Taste perversion
Anorgasmia
Penis disorder
Urinary urgency
Hyperprolactinaemia (which it seems to cause with a higher propensity than most other atypical antipsychotics do)
Seizures
Galactorrhoea
Rare (<0.1% incidence) adverse effects include:
Neuroleptic malignant syndrome
Tardive dyskinesia
Unknown frequency adverse events include:
Extrapyramidal side effects (EPSE; e.g. dystonia, akathisia, muscle rigidity, parkinsonism, etc. These adverse effects are probably uncommon/rare according to a recent meta-analysis of the efficacy and tolerability of 15 antipsychotic drugs which found it had the 2nd lowest effect size for causing EPSE)
Venous thromboembolism
QT interval prolongation (probably common; in a recent meta-analysis of the efficacy and tolerability of 15 antipsychotic drugs it was found to be the most prone to causing QT interval prolongation)
Pharmacology
Sertindole is metabolized in the body to dehydrosertindole.
Safety and status
United States
Abbott Labs first applied for U.S. Food and Drug Administration (FDA) approval for sertindole in 1996, but withdrew this application in 1998 following concerns over the increased risk of sudden death from QTc prolongation. In a trial of 2,000 patients taking sertindole, 27 patients died unexpectedly, including 13 sudden deaths. Lundbeck cites the results of the Sertindole Cohort Prospective (SCoP) study of 10,000 patients to support its claim that although sertindole does increase the QTc interval, this is not associated with increased rates of cardiac arrhythmias, and that patients on sertindole had the same overall mortality rate as those on risperidone. Nevertheless, in April 2009 an FDA advisory panel voted 13-0 that sertindole was effective in the treatment of schizophrenia but 12-1 that it had not been shown to be acceptably safe. The drug has not been approved by the FDA for use in the USA.
European Union
In the European Union, sertindole was approved and marketed in 19 countries from 1996, but its marketing authorization was suspended by the European Medicines Agency in 1998 and the drug was withdrawn from the market. In 2002, based on new data, the EMA's CHMP suggested that sertindole could be reintroduced for restricted use in clinical trials, with strong safeguards including extensive contraindications and warnings for patients at risk of cardiac dysrhythmias, a recommended reduction in maximum dose from 24 mg to 20 mg in all but exceptional cases, and an extensive ECG monitoring requirement before and during treatment. Sertindole is authorized in several countries of the European Union.
References
4-Fluorophenyl compounds
Atypical antipsychotics
Chloroarenes
Imidazolidinones
HERG blocker
Indoles
Piperidines
Withdrawn drugs | Sertindole | [
"Chemistry"
] | 1,234 | [
"Drug safety",
"Withdrawn drugs"
] |
336,878 | https://en.wikipedia.org/wiki/Bolivian%20gas%20conflict | The Bolivian Gas War (Spanish: Guerra del Gas) or Bolivian gas conflict was a social confrontation in Bolivia reaching its peak in 2003, centering on the exploitation of the country's vast natural gas reserves. The expression can be extended to refer to the general conflict in Bolivia over the exploitation of gas resources, thus including the 2005 protests and the election of Evo Morales as president. Before these protests, Bolivia had seen a series of similar earlier protests during the Cochabamba protests of 2000, which were against the privatization of the municipal water supply.
The conflict had its roots in grievances over the government's economic policies concerning natural gas, as well as coca eradication policies, corruption and violent military responses against strikes.
The "Bolivian gas war" thus came to a head in October 2003, leading to the resignation of President Gonzalo Sánchez de Lozada (aka "Goni"). Strikes and road blocks mounted by indigenous and labour groups (including the COB trade union) brought the country to a standstill. Violent suppression by the Bolivian armed forces left some 60 people dead in October 2003, mostly inhabitants of El Alto, located on the Altiplano above the seat of government La Paz.
The governing coalition disintegrated forcing Goni to resign and leave the country on October 18, 2003. He was succeeded by the vice president, Carlos Mesa, who put the gas issue to a referendum on July 18, 2004. In May 2005, under duress from protesters, the Bolivian congress enacted a new hydrocarbons law, increasing the state's royalties from natural gas exploitation. However, protesters, who included Evo Morales and Felipe Quispe, demanded full nationalization of hydrocarbon resources, and the increased participation of Bolivia's indigenous majority, mainly composed of Aymaras and Quechuas, in the political life of the country. On June 6, 2005, Mesa was forced to resign as tens of thousands of protesters caused daily blockades to La Paz from the rest of the country. Morales' election at the end of 2005 was met with enthusiasm by the social movements, because he was, as the leader of left-wing MAS, one of the staunchest opponents to the exportation of the gas without corresponding industrialization in Bolivia. On May 1, 2006, President Morales signed a decree stating that all gas reserves were to be nationalized: "the state recovers ownership, possession and total and absolute control" of hydrocarbons. The 2006 announcement was met by applause on La Paz's main plaza, where Vice President Alvaro Garcia told the crowd that the government's energy-related revenue would jump US$320 million to US$780 million in 2007, continuing a trend where revenues had expanded nearly sixfold between 2002 and 2006.
Background
Gas reserves of Bolivia
The central issue was Bolivia's large natural gas reserves and the prospect for their future sale and use. The Bolivian gas reserves are the second largest in South America after Venezuela, and exploration after the privatization of the national oil company YPFB showed that proven natural gas reserves were 600% higher than previously known. The cash-poor, state-owned company could not afford the exploration costs. These reserves mainly are located in the southeastern Tarija Department, which contains 85% of gas and petrol reserves. According to the United States Department of Energy, another 10.6% is located within the department of Santa Cruz and 2.5% in the Cochabamba Department. After further exploration from 1996 to 2002, the estimated size of the probable gas reserves was calculated to be 12.5 times larger, passing from to . This number has declined somewhat to probable reserves. The proven reserves are . With the declining importance of tin mines, those reserves accounted for the majority of foreign investment in Bolivia.
The price which Bolivia is paid for its natural gas is roughly US to Brazil and $3.18 per million BTU to Argentina. Other sources state that Brazil pays between $3.15 and $3.60 per million BTU, not including $1.50 per million BTU in Petrobras extraction and transportation costs. As a comparison, the price of gas in the US as a whole in 2006 varied between US, although some years earlier the price of natural gas spiked at $14 per million BTU in California due to lack of pipeline capacity to and within California as well as due to electricity outages. Meanwhile, according to Le Monde, Brazil and Argentina pay US$2 per thousand cubic meters of gas, which costs from $12 to $15 in California.
In 1994, a contract with Brazil was passed, two years before 1996's privatization of the 70-year-old, state-owned Yacimientos Petroliferos Fiscales de Bolivia (YPFB). The construction of the Bolivia-Brazil gas pipeline cost US$2.2 billion.
A consortium called Pacific LNG was formed to exploit the newly discovered reserves. The consortium comprised the British companies BG Group and BP, and Spain's Repsol YPF. Repsol is one of three companies that dominate the gas sector in Bolivia, along with Petrobras and TotalEnergies. A plan costing US$6 billion was drawn to build a pipeline to the Pacific coast, where the gas would be processed and liquefied before being shipped to Mexico and the United States (Baja California and California), through a Chilean port, for example Iquique. The 2003 Lozada deal was opposed heavily by Bolivian society, in part because of nationalism (Bolivia feels resentment after the territorial losses of the War of the Pacific in the late 19th century, which deprived it of the Litoral province and hence access to the sea).
Government ministers hoped to use the gas profits to bolster the sagging Bolivian economy and claimed the money would be invested exclusively in health and education. Opponents argued that under the current law, the exportation of the gas as a raw material would give Bolivia only 18% of the future profits, or US$40 million to US$70 million per year. They further argued that exporting the gas so cheaply would be the latest case of foreign exploitation of Bolivia's natural resources, starting with its silver and gold from the 17th century. They demanded that a plant be built in Bolivia to process the gas and that domestic consumption had to be met before export. As Le Monde puts it, "two reasons argue for the industrial exploitation of the gas, which the multinational companies now have the capacity to undertake. The first is related to the necessity of satisfying the Bolivians' energy needs. The second demonstrates the advantage of exporting a more profitable product rather than selling raw material". According to the French newspaper, only La Paz, El Alto, Sucre, Potosí, Camiri and Santa Cruz are now connected to the gas network; building an interior network which would reach all Bolivians would cost $1.5 billion, not counting a central gas pipeline to link the various regions together. According to Carlos Miranda, an independent expert quoted by Le Monde, the best industrialisation project is the petrochemical complex proposed by the Brazilian Braskem firm, which would create 40,000 direct or indirect jobs and cost $1.4 billion. This figure is equivalent to the amount so far invested by Repsol, TotalEnergies and Petrobras.
Santa Cruz autonomy movement
The eastern departments of Santa Cruz, Beni, Tarija, and Pando recently had been mobilizing in favor of autonomy. An important issue was opposition to the seizure of resources though nationalization. Community leaders are supported by the Comite Pro Santa Cruz, local co-ops, and by business organizations such as cattle ranchers and farmers. A strike against the new constitution was recently held which was observed in Santa Cruz, Beni, Tarija, and Pando. Tensions have been raised by the cultural and philosophical rift exposed by the push for a new constitution. As a basis for a new constitution, the western, Altiplano-based MAS party envisions a "council of indigenous peoples" along with a curtailment of private ownership, while Santa Cruz looks to western culture and capitalism.
Cultural divisions exist because people in eastern Bolivia, called "Cambas" (meaning "friends" in Guarani), are primarily of mestizo descent (a mix of European and several native tribes, the largest of which are the Guaraní), while the western Altiplano is dominated by a small white elite and a historically oppressed Quechua and Aymara majority.
The first signs of the modern autonomy movement occurred in 2005 when a march for autonomy was attended by hundreds of thousands of people. A result of this was the change in law to allow the election of departmental prefects. Another area of tension was the result of ongoing population shifts and the resulting demands for proportionally greater representation in Bolivia's Congress to reflect these shifts by Santa Cruz. A compromise was reached to allow Santa Cruz to receive some of the seats warranted by population growth, and for the highlands to keep seats despite population losses.
Left-wing intellectuals Walter Chávez and Álvaro García Linera (former Bolivian Vice President and MAS party member) published an article in the Monthly Review asserting that autonomy has been historically a demand of the Santa Cruz region, "contemporarily imbued with far-right, populist sentiments." They also qualified Santa Cruz autonomy as a "bourgeois ideology" of the "free market, foreign investment, racism, etc.", which pits the "modern", "whiter" Santa Cruz elite against the short, dark-skinned and anti-capitalist Aymara and Quechua peoples of the western region of Bolivia.
Dispute over pipeline route
The dispute arose in early 2002, when the administration of President Jorge Quiroga proposed building the pipeline through neighboring Chile to the port of Mejillones, the most direct route to the Pacific Ocean. However, antagonism towards Chile runs deep in Bolivia because of the loss of Bolivia's Pacific coastline to Chile in the War of the Pacific (1879–1884).
Bolivians began campaigning against the Chilean option, arguing instead that the pipeline should be routed north through the Peruvian port of Ilo, 260 km further from the gas fields than Mejillones, or, better yet, first industrialized in Bolivia. According to Chilean estimates, the Mejillones option would be $600 million cheaper.
Peru, however, claimed the difference in cost would be no more than $300 million. Bolivian proponents of the Peruvian option say it would also benefit the economy of the northern region of Bolivia through which the pipeline would pass.
Supporters of the Chile pipeline argued that U.S. financiers would be unlikely to develop processing facilities within Bolivia.
Meanwhile, the Peruvian government, eager to promote territorial and economic integration, offered Bolivia a special economic zone for 99 years for exporting the gas at Ilo, the right of free passage, and the concession of a 10 km2 area, including a port, that would be exclusively under Bolivian administration.
President Jorge Quiroga postponed the decision shortly before leaving office in July 2002 and left this highly contentious issue to his successor. It was thought Quiroga did not want to jeopardize his chances of re-election as president in the 2007 elections.
After winning the 2002 presidential election Gonzalo Sánchez de Lozada expressed his preference for the Mejillones option but made no "official" decision. The Gas War led to his resignation in October 2003.
Escalation
The social conflict escalated in September 2003 with protests and road blockages paralyzing large parts of the country, leading to increasingly violent confrontations with the Bolivian armed forces.
The insurrection was spearheaded by Bolivia's indigenous majority, who accused Sánchez de Lozada of pandering to the US government's "war on drugs" and blamed him for failing to improve living standards in Bolivia. On September 8, 650 Aymaras started a hunger strike to protest against the state detention of a villager. The man detained was one of the heads of the village, and was imprisoned for having sentenced two young men to death in a "community justice" trial.
On September 19, the National Coordination for the Defense of Gas mobilized 30,000 people in Cochabamba and 50,000 in La Paz to demonstrate against the pipeline.
The following day six Aymara villagers, including an eight-year-old girl, were killed in a confrontation in the town of Warisata. Government forces used planes and helicopters to circumvent the strikers and evacuate several hundred foreign and Bolivian tourists from Sorata who had been stranded by the road blockades for five days.
In response to the shootings, Bolivia's Labor Union (COB) called a general strike on September 29 that paralyzed the country with road closures.
Union leaders insisted they would continue until the government backed down on its decision.
Poorly armed Aymara community militias drove the army and police out of Warisata and the towns of Sorata and Achacachi, equipped only with traditional Aymara sling shots and guns from the 1952 Bolivian National Revolution.
Eugenio Rojas, leader of the regional strike committee, declared that if the government refused to negotiate in Warisata, then the insurgent Aymara communities would surround La Paz and cut it off from the rest of the country — a tactic employed in the Túpaj Katari uprising of 1781.
Felipe Quispe, leader of the Indigenous Pachakuti Movement (MIP), stated that he would not participate in dialogue with the government until the military withdrew from blockaded areas. The government refused to negotiate with Quispe, claiming that he did not have the authority to represent the campesino movement.
As the protests continued, protesters in El Alto, a sprawling indigenous city of 750,000 people on the periphery of La Paz, proceeded to block key access routes to the capital causing severe fuel and food shortages. They also demanded the resignation of Sánchez de Lozada and his ministers, Yerko Kukoc, Minister of Government, and Carlos Sánchez de Berzaín, Minister of Defense, who were held responsible for the Warisata massacre. Protesters also voiced their opposition to the Free Trade Area of the Americas agreement that was at the time under negotiation by the US and Latin American countries (since the November 2005 Mar del Plata Summit of the Americas, it has been put on stand-by).
Martial law in El Alto
On October 12, 2003, the government imposed martial law in El Alto after sixteen people were shot by the police and several dozen wounded in violent clashes which erupted when a caravan of oil trucks escorted by police and soldiers deploying tanks and heavy-caliber machine guns tried to breach a barricade.
On October 13, the administration of Sánchez de Lozada suspended the gas project "until consultations have been conducted [with the Bolivian people]." However, Vice President Carlos Mesa deplored what he referred to as the "excessive force" used in El Alto (80 dead) and withdrew his support for Sánchez de Lozada. The Minister of Economic Development, Jorge Torrez, of the MIR party, also resigned.
The United States Department of State issued a statement on October 13 declaring its support for Sánchez de Lozada, calling for "Bolivia's political leaders [to] publicly express their support for democratic and constitutional order. The international community and the United States will not tolerate any interruption of constitutional order and will not support any regime that results from undemocratic means".
On October 18, Sánchez de Lozada's governing coalition was fatally weakened when the New Republic Force party withdrew its support. He was forced to resign and was replaced by his vice president, Carlos Mesa, a former journalist. The strikes and roadblocks were lifted. Mesa promised that no civilians would be killed by police or army forces during his presidency. Despite dramatic unrest during his time in office, he respected this promise.
Among his first actions as president, Mesa promised a referendum on the gas issue and appointed several indigenous people to cabinet posts. On July 18, 2004, Mesa put the issue of gas nationalization to a referendum. On May 6, 2005, the Bolivian Congress passed a new law raising taxes from 18% to 32% on profits made by foreign companies on the extraction of oil and gas. Mesa failed to either sign or veto the law, so by law Senate President Hormando Vaca Diez was required to sign it into law on May 17. Many protesters felt this law was inadequate and demanded full nationalization of the gas and oil industry.
The 2005 Hydrocarbons Law
On May 6, 2005, the long-awaited Hydrocarbons Law was finally approved by the Bolivian Congress. On May 17 Mesa again refused to either sign or veto the controversial law, thus constitutionally requiring Senate President Hormando Vaca Díez to sign the measure and put it into effect.
The new law returned legal ownership of all hydrocarbons and natural resources to the state, maintained royalties at 18 percent, but increased taxes from 16 to 32 percent. It gave the government control of the commercialization of the resources and allowed for continuous government control with annual audits. It also ordered companies to consult with indigenous groups who live on land containing gas deposits. The law stated that the 76 contracts signed by foreign firms must be renegotiated within 180 days. Protesters argued that the new law did not go far enough to protect the natural resources from exploitation by foreign corporations, demanding a complete nationalization of the gas and its processing in Bolivia.
Due to the uncertainty over renegotiation of contracts, foreign firms have practically stopped investing in the gas sector. Foreign investment virtually came to a standstill in the second half of 2005. Shortages in supply – very similar to those observed in Argentina after the 2001 price-fixing – are deepening in diesel, LPG, and begin to be apparent in natural gas. The May–June social unrest affected the supply of hydrocarbons products to the internal market, principally LPG and natural gas to the occidental region. Brazil implemented a contingency plan – led by the Energy and Mines Minister – to mitigate any potential impact from gas export curtailment. Although the supply was never curtailed, the social unrest in Bolivia created a strong sensation that security of supply could not be guaranteed. Occasional social action has continued to affect the continuity of supply, especially valve-closing actions.
Carlos Mesa's June 2005 resignation
The protests
Over 80,000 people participated in the May 2005 protests. Tens of thousands of people each day walked from El Alto to the capital La Paz, where protesters effectively shut down the city, bringing transportation to a halt through strikes and blockades, and engaging in street battles with police. The protestors demanded the nationalisation of the gas industry and reforms to give more power to the indigenous majority, who were mainly Aymaras from the impoverished highlands. They were pushed back by the police with tear gas and rubber bullets, while many of the miners involved in the protests came armed with dynamite.
May 24, 2005
More than 10,000 Aymara peasant farmers from the twenty highland provinces came down from El Alto's Ceja neighborhood into La Paz to protest.
On May 31, 2005, residents of El Alto and the Aymara peasant farmers returned to La Paz. More than 50,000 people covered an area of nearly 100 square kilometers. The next day, the first regiment of the National Police decided, by consensus, not to repress the protests and were internally reprimanded by the government.
On June 2, as the protests raged on, President Mesa announced two measures, designed to placate the indigenous protesters on the one hand and the Santa Cruz autonomy movement on the other: elections for a new constitutional assembly and a referendum on regional autonomy, both set for October 16. However, both sides rejected Mesa's call: the Pro-Santa Cruz Civic Committee declared its own referendum on autonomy for August 12, while in El Alto protesters began to cut off gasoline to La Paz.
Approximately half a million people mobilized in the streets of La Paz on June 6, and President Mesa subsequently offered his resignation. Riot police used tear gas as miners amongst the demonstrators set off dynamite in clashes near the presidential palace, while a strike brought traffic to a standstill. However, Congress failed to meet for several days owing to the "insecurity" of convening as protests raged nearby, and many members of Congress found themselves unable to physically attend the sessions. Senate President Hormando Vaca Díez decided to move the sessions to Bolivia's constitutional capital, Sucre, in an attempt to avoid the protesters. Radical farmers occupied oil wells owned by transnational companies and blockaded border crossings. Mesa ordered the military to airlift food to La Paz, which remained totally blockaded.
Vaca Diez and House of Delegates president, Mario Cossío, were the two next in the line of succession to become president. However, they were strongly disliked by the protesters, and each declared they would not accept succession to the presidency, finally promoting Eduardo Rodríguez, Supreme Court Chief Justice, to the presidency. Considered apolitical and hence trustworthy by most, his administration was a temporary one until elections could be held. Protesters quickly disbanded in many areas, and like many times in Bolivia's past, major political upheavals were taken as a normal part of the political process.
Caretaker President Rodríguez proceeded to implement the Hydrocarbons Law. The new IDH tax has been levied on the companies, which are paying it "under reserve". A number of upstream gas companies have invoked Bilateral Investment Protection Treaties and entered a conciliation phase with the Bolivian state. The treaties are a step towards a hearing before the International Centre for Settlement of Investment Disputes (ICSID), part of the World Bank Group, which could force Bolivia to pay indemnities to the companies.
Concerns of possible US intervention
A military training agreement with Asunción (Paraguay), giving immunity to US soldiers, caused some concern after media reports initially indicated that a base housing 20,000 US soldiers was being built at Mariscal Estigarribia, within 200 km of Argentina and Bolivia and 300 km of Brazil, near an airport which could receive large planes (B-52, C-130 Hercules, etc.) that the Paraguayan Air Force does not have ("US Marines put a foot in Paraguay", El Clarín, September 9, 2005). According to Clarín, an Argentinian newspaper, the US military base is strategic because of its location near the Triple Frontera between Paraguay, Brazil and Argentina; its proximity to the Guaraní aquifer; and because it comes at the same "moment that Washington's magnifying glass goes on the Altiplano and points toward Venezuelan Hugo Chávez — the regional demon according to Bush's administration — as the instigator of the instability in the region" (Clarín).
Later reports indicated that 400 US troops would be deployed in Paraguay over 18 months for training and humanitarian missions, consisting of 13 detachments numbering less than 50 personnel each. The Paraguayan administration, as well as Bush's administration, denied that the airport would be used as a US military base or that there would be any other US base in Paraguay; the President of Paraguay made similar statements.
Other countries
The social conflicts paralyzed Bolivia's political life for a time. The unpopularity of the neoliberal Washington Consensus, a set of economic strategies implemented by Gonzalo Sánchez de Lozada's administration, set the stage for the December 2005 election of Evo Morales as president.
In the meantime, Chile promptly started to build several coastal terminals to receive shipments of liquefied natural gas from Indonesia, Australia and other sources.
Other South American countries are contemplating other ways to secure gas supplies: one project aims at linking the Camisea gas reserves in Peru to Argentina, Brazil, Chile, Uruguay and Paraguay. Linking Pisco (south of Peru) to Tocopilla (north of Chile) with a 1200 km pipeline would cost $2 billion. However, experts doubt the Camisea reserves are enough for all the Southern Cone countries.
Another 8,000 km gas pipeline (Gran Gasoducto del Sur) has been proposed that would link Venezuela to Argentina via Brazil. Its cost is estimated between $8 and $12 billion.
While Argentina and Chile are large consumers of gas (50 percent and 25 percent respectively), other South American countries are a lot less dependent.
Nationalization of natural gas industry
On May 1, 2006, president Evo Morales signed a decree stating that all gas reserves were to be nationalized: "the state recovers ownership, possession and total and absolute control" of hydrocarbons. He thus fulfilled his electoral promises, declaring that "We are not a government of mere promises: we follow through on what we propose and what the people demand". The announcement was timed to coincide with Labor Day on May 1. Ordering the military and engineers of YPFB, the state firm, to occupy and secure energy installations, he gave foreign companies a six-month "transition period" to re-negotiate contracts, or face expulsion. Nevertheless, president Morales stated that the nationalization would not take the form of expropriations or confiscations. Vice President Álvaro García Linera said in La Paz's main plaza that the government's energy-related revenue will jump to $780 million next year, expanding nearly sixfold from 2002. Among the 53 installations affected by the measure are those of Brazil's Petrobras, one of Bolivia's largest investors, which controls 14% of the country's gas reserves. Brazil's Energy Minister, Silas Rondeau, reacted by considering the move as "unfriendly" and contrary to previous understandings between his country and Bolivia. Petrobras, Spain's Repsol YPF, UK gas and oil producer BG Group Plc and France's Total are the main gas companies present in the country. According to Reuters, "Bolivia's actions echo what Venezuelan President Hugo Chávez, a Morales ally, did in the world's fifth-largest oil exporter with forced contract migrations and retroactive tax hikes — conditions that oil majors largely agreed to accept." YPFB would pay foreign companies for their services, offering about 50 percent of the value of production, although the decree indicated that companies at the country's two largest gas fields would get just 18 percent.
Negotiations between the Bolivian government and the foreign companies intensified during the week leading up to the deadline of Saturday October 28, 2006. On Friday an agreement was reached with two of the companies (including Total) and by the deadline on Saturday the rest of the ten companies (including Petrobras and Repsol YPF) operating in Bolivia had also come to an agreement. Full details of the new contracts have not been released, but the objective of raising government share of revenues from the two major fields from 60 percent to 82 percent seems to have been achieved. Revenue share for the government from minor fields is set at 60 percent.
During the six-month negotiation period, talks with the Brazilian company Petrobras had proven especially difficult, as Petrobras had refused price increases or its reduction to a mere service provider. As a result of the stalled talks, Bolivian energy minister Andres Soliz Rada resigned in October and was replaced by Carlos Villegas. "We are obligated to live with Brazil in a marriage without divorce, because we both need each other", said Evo Morales at the contract signing ceremony, underlining the mutual dependency of Brazil on Bolivian gas and of Bolivia on Petrobras in gas production.
Reaction
On December 15, 2007, the regions of Santa Cruz, Tarija, Beni, and Pando declared autonomy from the central government. They also moved to achieve full independence from Bolivia's new constitution.
The Protesters
Miners
Miners from the Bolivian trade union federation Central Obrera Boliviana (COB) have also been very active in the protests, and have recently campaigned against proposals to privatize pensions. They are known for setting off very loud explosions of dynamite during demonstrations.
Coca farmers
Shortly after the law passed, Evo Morales, an indigenous Aymara, cocalero, and leader of the opposition party Movement Towards Socialism (MAS), took a moderate position, calling the new law "middle ground". However, as the protests progressed, Morales came out in favor of nationalization and new elections.
Protesters in Cochabamba
Oscar Olivera, a prominent leader of the 2000 protests in Cochabamba against the privatization of water in Bolivia, has also become a leading figure in the gas conflict. The protesters in Cochabamba, Bolivia's fourth largest city, have cut off the main roads in the city and are calling for a new Constituent Assembly as well as nationalization.
Indigenous and peasant groups in Santa Cruz
Indigenous people in the eastern lowland department of Santa Cruz have also become active in the recent disputes over nationalization of the gas and oil industry. They comprise groups such as the Guaraní, Ayoreo, Chiquitano and Guyarayos, as opposed to the highland indigenous peoples (Aymara and Quechua). They have been active in recent land disputes, and the main organization representing this faction is the "Confederacion de pueblos indigenas de Bolivia" (CIDOB). The CIDOB, after initially offering support to MAS, the party of Bolivia's new president, has come to believe that it was deceived by the Bolivian government; in its view, the MAS, which is based in the highlands, is no more willing to grant the lowland groups a voice than the previous governments, whose power was also based in the highlands. Another smaller, more radical group is the "Landless Peasant Movement" (MST), which is somewhat similar to the Landless Workers' Movement in Brazil and is composed mainly of migrants from the western part of the country. Recently, Guaraní people from this group have taken over oil fields run by Spain's Repsol YPF and the United Kingdom's BP and have forced them to stop production.
Felipe Quispe and peasant farmers
Felipe Quispe was an Aymara leader who wished to return control of the country from what he saw as the "white elite" to the indigenous people who make up the majority of the country's population; he was therefore in favor of an independent "Aymaran state". Quispe led the Pachakutik Indigenous Movement, which won six seats in Congress in the 2002 Bolivian elections, and was secretary general of the United Peasants Union of Bolivia.
See also
Cochabamba anti-privatization protests
Geology of Bolivia
Bolivian gas referendum, 2004
References
External links
Democracy in Crisis in Latin America. Bolivia and Venezuela Test the International Community's Democratic Commitment, SWP-Comments 26/2005 (June 2005)
Bolivia's top Court chief takes Presidency AP (Yahoo news)
Main Protest Groups in Bolivia
Turning Gas into Development in Bolivia from Dollars & Sense magazine
Bolivia Information Forum Information on oil and gas in Bolivia
The Distribution of Bolivia’s Most Important Natural Resources and the Autonomy Conflicts Center for Economic and Policy Research
Black October, Miami New Times
Dignity and Defiance: Stories from Bolivia's Challenge to Globalization – video report on Democracy Now!
The Bolivia Model and Thai Energy Reform: Does It Really Create Energy Security for the Country? – in Thai, Pantip
21st-century conflicts
Politics of Bolivia
Protests in Bolivia
Natural gas in Bolivia
Evo Morales
Bolivia–Chile relations
Natural resource conflicts
Energy policy
2000s in Bolivia
Labor disputes in Bolivia | Bolivian gas conflict | [
"Environmental_science"
] | 6,452 | [
"Environmental social science",
"Energy policy"
] |
336,895 | https://en.wikipedia.org/wiki/Plan%20Colombia | Plan Colombia was a United States foreign aid, military aid, and diplomatic initiative aimed at combating Colombian drug cartels and left-wing insurgent groups. The plan was originally conceived in 1999 by the administrations of Colombian President Andrés Pastrana and U.S. President Bill Clinton, and signed into law in the United States in 2000.
The official objectives of Plan Colombia were to end the Colombian armed conflict by increasing funding and training of Colombian military and paramilitary forces and to create an anti-cocaine strategy to eradicate coca cultivation. Partly as a result of the plan, the FARC lost much of its power against the Colombian government. Sources conflict on its effect on cocaine production, however: US reports conclude that cocaine production in Colombia dropped 72% from 2001 to 2012, contradicting UN sources, which found no change in cocaine production.
Plan Colombia in its initial form existed until 2015, with the United States and the Colombian government seeking a new strategy as a result of the peace talks between the Colombian government and the FARC. The new program is called "Peace Colombia" (Paz Colombia) and seeks to provide Colombia with aid after the implementation of the Peace Agreement in 2017 with the FARC.
Original Plan Colombia
The original version of Plan Colombia was officially unveiled by President Andrés Pastrana in 1999. Pastrana had first proposed the idea of a possible "Marshall Plan for Colombia" during a speech at Bogotá's Tequendama Hotel on June 8, 1998, nearly a week after the first round of that year's presidential elections. Pastrana argued that:
[Drug crops are] a social problem whose solution must pass through the solution to the armed conflict...Developed countries should help us to implement some sort of 'Marshall Plan' for Colombia, which will allow us to develop great investments in the social field, in order to offer our peasants different alternatives to the illicit crops.
After Pastrana was inaugurated, one of the names given to the initiative at this early stage was "Plan for Colombia's Peace", which President Pastrana defined as "a set of alternative development projects which will channel the shared efforts of multilateral organizations and [foreign] governments towards Colombian society". Pastrana's Plan Colombia, as originally presented, did not focus on drug trafficking, military aid, or fumigation, but instead emphasized the manual eradication of drug crops as a better alternative. According to author Doug Stokes, one of the earlier versions of the plan called for an estimated 55 per cent military aid and 45 percent developmental aid.
During an August 3, 1998 meeting, President Pastrana and U.S. President Bill Clinton discussed the possibility of "securing an increase in U.S. aid for counternarcotics projects, sustainable economic development, the protection of human rights, humanitarian aid, stimulating private investment, and joining other donors and international financial institutions to promote Colombia's economic growth". Diplomatic contacts regarding this subject continued during the rest of the year and into 1999.
For President Pastrana, it became necessary to create an official document that specifically "served to convene important U.S. aid, as well as that of other countries and international organizations" by adequately addressing US concerns. The Colombian government also considered that it had to patch up a bilateral relationship that had heavily deteriorated during the previous administration of President Ernesto Samper (1994–1998). According to Pastrana, Under Secretary of State Thomas R. Pickering eventually suggested that, initially, the U.S. could be able to commit to providing aid over a three-year period, as opposed to continuing with separate yearly packages.
As a result of these contacts, US input was extensive, and meant that Plan Colombia's first formal draft was originally written in English, not Spanish, and a Spanish version was not available until "months after a revised English version was already in place".
Critics and observers have referred to the differences between the earliest versions of Plan Colombia and later drafts. Originally, the focus was on achieving peace and ending violence, within the context of the ongoing peace talks that Pastrana's government was then holding with the FARC guerrillas, following the principle that the country's violence had "deep roots in the economic exclusion and...inequality and poverty".
The final version of Plan Colombia was seen as considerably different, since its main focuses would deal with drug trafficking and strengthening the military. When this final version was debated on the U.S. Senate floor, Joseph Biden spoke as a leading advocate of the more hardline strategy.
Ambassador Robert White stated:
If you read the original Plan Colombia, not the one that was written in Washington but the original Plan Colombia, there's no mention of military drives against the FARC rebels. Quite the contrary. (President Pastrana) says the FARC is part of the history of Colombia and a historical phenomenon, he says, and they must be treated as Colombians...[Colombians] come and ask for bread and you (America) give them stones.
In the final U.S. aid package, 78.12 percent of the funds for 2000 went to the Colombian military and police for counternarcotics and military operations.
President Pastrana admitted that most of the resulting US aid to Colombia was overwhelmingly focused on the military and on counternarcotics (68%), but argued that this was only some 17% of the total amount of estimated Plan Colombia aid. The rest, focusing mostly on social development, would be provided by international organizations, Europe, Japan, Canada, Latin America, and Colombia itself. In light of this, Pastrana considered that the Plan had been unfairly labeled as "militarist" by national and international critics that focused only on the US contribution.
Financing
This original plan called for a budget of US$7.5 billion, with 51% dedicated to institutional and social development, 32% for fighting the drug trade, 16% for economic and social revitalization, and 0.8% to support the then on-going effort to negotiate a political solution to the state's conflict with insurgent guerrilla groups. Pastrana initially pledged US$4.864 billion of Colombian resources (65% of the total) and called on the international community to provide the remaining US$2.636 billion (35%). Most of this funding was earmarked for training and equipping new Colombian army counternarcotics battalions, providing them with helicopters, transport and intelligence assistance, and supplies for coca eradication.
In 2000, the Clinton administration in the United States supported the initiative by committing $1.3 billion in foreign aid and up to five hundred military personnel to train local forces. An additional three hundred civilian personnel were allowed to assist in the eradication of coca. This aid was in addition to US$330 million of previously approved US aid to Colombia. US$818 million was earmarked for 2000, with US$256 million for 2001. These appropriations for the plan made Colombia the third largest recipient of foreign aid from the United States at the time, behind only Israel and Egypt. Under President George W. Bush, the balance between military and humanitarian aid to Colombia became more even. Ultimately, the U.S. would provide approximately US$10 billion under Plan Colombia through 2015.
Colombia sought additional support from the European Union and other countries, with the intention of financing the mostly social component of the original plan. Some would-be donors were reluctant to cooperate, as they considered that the US-approved aid represented an undue military slant, and additionally lacked the will to spend such amounts of money for what they considered an uncertain initiative.
Initially, some of these countries donated approximately US$128.6 million (in one year), which was 2.3% of the resulting total. Larger amounts, in some instances up to several hundred million dollars, were also donated to Colombia and continued to be provided either directly or through loans and access to credit lines, but technically fell outside the framework of Plan Colombia. "European countries provide economic and social development funds but do not consider them to be in support of Plan Colombia." In any case, the sums raised fell well short of what was originally called for. In addition, Colombia's eventual contribution was less than planned due in part to a 1999–2001 economic crisis.
War on drugs
In the United States, Plan Colombia is seen as part of the "war on drugs", which was started under President Nixon in 1971. Plan Colombia has numerous supporters in the United States Congress. Congressional supporters assert that over 1,300 square kilometers of mature coca were sprayed and eradicated in Colombia in 2003, which would have prevented the production of over 500 metric tons of cocaine, stating that it eliminated upward of $100 million of the illicit income that supports drug dealers and different illegal organizations considered terrorist in Colombia, the U.S. and the European Union.
According to a 2006 U.S. congressional report on U.S. enterprises that had signed contracts to carry out anti-narcotics activities as part of Plan Colombia, DynCorp, the largest private company involved, was among those contracted by the State Department, while others signed contracts with the Defense Department.
Expansion under Bush
As enacted in 2000, Plan Colombia called for two U.S. supported actions in Colombia. The first was to cause the “eradication, interdiction, and alternative development” of coca fields which are used to produce cocaine—which in turn provided most of the funding for the FARC. And second, to offer social and economic assistance to the rural areas that the FARC have controlled for half a century.
A third more security oriented countermeasure—to provide enhanced intelligence, training and supplies to Colombian armed forces against the FARC—took greater importance post 9/11 under the Andean Regional Initiative, as the threat of global terrorism received increased attention. The Andean Regional Initiative initially appropriated $676 million to South American countries, with approximately $380 million targeted at Colombia. The 2001 initiative reduced the limitations on the numbers and the activities of civilian contractors, allowing them to carry and use military weapons which, according to the U.S. government, would be necessary to ensure the safety of personnel and equipment during spray missions. The United States Congress rejected amendments to the Andean initiative that would have redirected some of the money to demand reduction programs in the United States, primarily through funding of drug treatment services. Some critics have opposed the rejection of these modifications, claiming that the drug problem and its multiple repercussions would be structurally addressed by curbing the demand, and not the production, of illicit drugs, since drug crops can always be regrown and transplanted elsewhere, inside or outside Colombia and its neighboring countries, as long as there is a commercially viable market.
In 2004, the United States appropriated approximately $727 million for the Andean Counterdrug Initiative, $463 million of which was targeted at Colombia.
In October 2004, the compromise version of two U.S. House–Senate bills was approved, increasing the number of U.S. military advisors that operate in the country as part of Plan Colombia to 800 (from 400) and that of private contractors to 600 (from 400).
In a November 22, 2004 visit to Cartagena, President Bush stood by Colombian president Uribe's security policies and declared his support for continuing to provide Plan Colombia aid in the future. Bush claimed the initiative enjoys "wide bipartisan support" in the US and in the coming year he would ask Congress to renew its support.
Taken together then, the three countermeasures represent what President George W. Bush referred to as his “three-legged stool” strategy of “waging a global war on terror, supporting democracy and reducing the flow of illicit drugs into the United States.” Although Plan Colombia includes components which address social aid and institutional reform, the initiative has come to be regarded by its critics as fundamentally a program of counternarcotics and military aid for the Colombian government.
Criticism
Research studies
The US Defense Department funded a two-year study which found that the use of the armed forces to interdict drugs coming into the United States would have minimal or no effect on cocaine traffic and might, in fact, raise the profits of cocaine cartels and manufacturers. The 175-page study, "Sealing the Borders: The Effects of Increased Military Participation in Drug Interdiction," was prepared by seven economists, mathematicians and researchers at the National Defense Research Institute, a branch of the RAND Corporation and released in 1988. The study noted that seven previous studies in the past nine years, including ones by the Center for Naval Research and the Office of Technology Assessment, had come to similar conclusions. Interdiction efforts, using current armed forces resources, would have almost no effect on cocaine importation into the United States, the report concluded.
During the early to mid-1990s, the Clinton administration ordered and funded a major cocaine policy study again by RAND. The Rand Drug Policy Research Center study concluded that $3 billion should be switched from federal and local law enforcement to treatment. The report said that treatment is the cheapest way to cut drug use. President Clinton's Director of National Drug Control Policy rejected slashing law enforcement spending.
Plan Colombia itself didn't exist at the time of the second RAND study, but the U.S. aid package has been criticized as a manifestation of the predominant law enforcement approach to the drug trade as a whole.
Guerrillas and oil
Critics of Plan Colombia, such as authors Doug Stokes and Francisco Ramirez Cuellar, argue that the main intent of the program is not drug eradication but to fight leftist guerrillas. They argue that Colombian peasants are also a target because they are calling for social reform and hindering international plans to exploit Colombia's valuable resources, including oil and other natural resources. As of 2004, Colombia was the fifteenth largest supplier of oil to the United States and could potentially rise in that ranking if petroleum extraction could be conducted in a more secure environment. From 1986 to 1997, pipeline attacks spilled large quantities of crude oil; damage and lost revenue were estimated at $1.5 billion, while the spills seriously damaged the ecology.
While the assistance is defined as counternarcotics assistance, critics such as filmmaker Gerard Ungeman argues it will be used primarily against the FARC. Supporters of the Plan such as the U.S. embassy in Bogotá and U.S. Under Secretary of State for Political Affairs Marc Grossman argue that the distinction between guerrillas, paramilitaries and drug dealers may have increasingly become irrelevant, seeing as they could be considered as part of the same productive chain. As a result, counternarcotics assistance and equipment should also be available for use against any of these irregular armed groups when necessary.
Human rights conditions
In June 2000, Amnesty International issued a press release in which it criticized the implemented Plan Colombia initiative:
Plan Colombia is based on a drug-focused analysis of the roots of the conflict and the human rights crisis which completely ignores the Colombian state's own historical and current responsibility. It also ignores deep-rooted causes of the conflict and the human rights crisis. The Plan proposes a principally military strategy (in the US component of Plan Colombia) to tackle illicit drug cultivation and trafficking through substantial military assistance to the Colombian armed forces and police. Social development and humanitarian assistance programs included in the Plan cannot disguise its essentially military character. Furthermore, it is apparent that Plan Colombia is not the result of a genuine process of consultation either with the national and international non-governmental organizations which are expected to implement the projects nor with the beneficiaries of the humanitarian, human rights or social development projects. As a consequence, the human rights component of Plan Colombia is seriously flawed.
During the late 1990s, Colombia was the leading recipient of US military aid in the Western Hemisphere and, due to its continuing internal conflict, had the worst human rights record, with the majority of atrocities attributed (from most to least directly responsible) to paramilitary forces, insurgent guerrilla groups and elements within the police and armed forces.
A United Nations study reported that elements within the Colombian security forces, which have been strengthened by Plan Colombia and U.S. aid, continue to maintain intimate relationships with right-wing death squads, help organize paramilitary forces, and either participate in abuses and massacres directly or, as is usually argued to be more often the case, deliberately fail to take action to prevent them. One of the largest examples of this behavior was the 2008 False Positives Scandal, in which the Colombian military murdered approximately 1,400 innocent civilians in order to falsely claim that the cadavers were FARC fighters.
Critics of the Plan and of other initiatives to aid Colombian armed forces point to these continuing accusations of serious abuse, and argue that the Colombian state and military should sever any persisting relationship with these illegal forces and need to prosecute past offenses by paramilitary forces or its own personnel. Supporters of the Plan assert that the number and scale of abuses directly attributable to the government's forces have been slowly but increasingly reduced.
Some paramilitary commanders openly expressed their support for Plan Colombia. In May 2000, paramilitary commander "Yair" from the Putumayo Southern Bloc, himself a former Colombian special forces sergeant, said that the AUC supported the plan and he offered to assist U.S.-trained counternarcotics battalions in their operations against the FARC in the coca-growing Putumayo department. Paramilitaries and FARC fought it out in the region one month before a Plan Colombia mandated military offensive began later that year. AUC fighters would have passed through checkpoints manned by the army's 24th Brigade in the area during the fighting.
SOA and human rights
According to Grace Livingstone, more Colombian School of the Americas (SOA) graduates have been implicated in human rights abuses than SOA graduates from any other country. All of the commanders of the brigades highlighted in the 2001 Human Rights Watch report were graduates of the SOA, including the III brigade in Valle del Cauca, where the 2001 Alto Naya Massacre occurred. US-trained officers have been accused of being directly or indirectly involved in many atrocities during the 1990s, including the Massacre of Trujillo and the 1997 Mapiripán Massacre.
In addition, Livingstone also argues that the Colombian paramilitaries employ counter insurgency methods that US military schools and manuals have been teaching Latin American officers in Colombia and in the region at large since the 1960s, and that these manuals teach students to target civilian supporters of the guerrillas, because without such support the guerrillas cannot survive.
The Pastrana administration replied to critics by stating that it had publicly denounced military-paramilitary links, as well as increased efforts against paramilitaries and acted against questionable military personnel. President Pastrana argues that he implemented new training courses on human rights and on international law for military and police officers, as well as new reforms to limit the jurisdiction of military courts in cases of grave human rights abuses such as torture, genocide or forced disappearances.
Pastrana claims that some 1,300 paramilitaries were killed, captured or surrendered during his term, and that hundreds of members of the armed forces, including up to a hundred officers, were dismissed, using a new presidential discretionary power, on the basis of what his administration considered sufficient allegations of involvement in abuses or suspected paramilitary activities. These would include some 388 discharges in 2000 and a further 70 in 2001. Human Rights Watch recognized these events, but noted that the reasons for such discharges were not always made clear nor followed by formal prosecutions, and claimed that Pastrana's administration cut funds for the Attorney General's Human Rights Unit.
Leahy Provision
In 1997 the US Congress approved an Amendment to the Foreign Operations Appropriations Act which banned the US from giving anti-narcotics aid to any foreign military unit whose members have violated human rights. The Amendment was called the "Leahy Provision" or "Leahy Law" (named after Senator Patrick Leahy who proposed it). Partially due to this measure and the reasoning behind it, anti-narcotics aid was initially only provided to Police units, and not to the military during much of the 1990s.
According to author Grace Livingstone and other critics, the problem is that very few military units have been entirely free of members implicated in some kind of human rights abuse, so they consider that the policy has usually been ignored, downplayed or implemented only in a patchy way. In 2000, Human Rights Watch, together with several Colombian human rights investigators, published a study in which it concluded that half of Colombia's eighteen brigade-level army units had extensive links to paramilitaries at the time, citing numerous cases which directly or indirectly implicated army personnel.
The State Department certified that Colombia would have complied with one of the human rights conditions (Sec. 3201) attached to Plan Colombia aid, due to President Pastrana's directing "in writing that Colombian Armed Forces personnel who are credibly alleged to have committed gross violations of human rights will be brought to justice in Colombia's civilian courts...". In August 2000 President Clinton used his presidential waiver to override the remaining human rights conditions, on the grounds that it was necessary for the interests of U.S. national security. Livingstone argues that if the US government funds military units guilty of human rights abuses, it is acting illegally.
Aerial eradication strategy
Aerial eradication, also referred to as fumigation, was implemented as a part of Plan Colombia, strongly supported by the United States government, as a strategy to eliminate drug crops in Colombia starting in the 1980s. By the mid-1990s, drug cultivation had increased and Colombia supplied up to 90% of the world's cocaine, which intensified aerial eradication efforts. United States policymakers advocated for the extensive use of the herbicide Roundup Ultra created by Monsanto for large-scale aerial spraying of illicit crops in Colombia.
Between 2000 and 2003, the aerial eradication program sprayed over 380,000 hectares of coca, accounting for more than 8% of Colombia's cultivable land. The program was led by the Colombian Antinarcotics Directorate (DIRAN), the police units responsible for the oversight of aerial spraying operations. By 2003, the program included twenty-four aircraft devoted to eradication, with armed helicopters providing security against ground fire from armed groups, including the FARC and other organizations active in drug cultivation locales.
Aerial eradication criticisms
The practice of forced eradication of illicit crops through aerial spraying has been criticized both for its limited effectiveness in reducing the drug supply and for its negative social and environmental impacts. According to the Transnational Institute, "the fact that an increasing crop area is being eradicated – much more was sprayed in 2003 than in 2002 – should be interpreted not as a sign of the policy's success, but as a sign of its failure, because it indicates that more and more land is being planted in these crops." According to Joshua Davis of Wired.com, the area has seen the emergence of a Roundup-resistant variety of the coca plant known as "Boliviana Negra" that is not talked about because it might "put an end to American aid money", illustrating the ecological adaptation resulting from the spraying operation.
The aerial eradication strategy in Colombia has been a highly controversial approach in the nation's efforts to combat coca cultivation and the illicit drug trade it feeds. The strategy relied on the aerial spraying of herbicides, specifically glyphosate, targeted at coca crops, the raw material for cocaine production. Its effectiveness and consequences have been a subject of considerable debate. In 2004, according to Robert Charles, assistant secretary of state for the INL, aerial eradication efforts were getting close to the point at which continued suppression of the drug crops would convince growers that further cultivation would be futile. Despite this perspective, statistics show that the sharp reductions in growing caused by fumigation in 2002–2003 did not bring cultivation back down to its 1998 levels, and Colombia still remains the largest coca-growing country in the world.
Another reason to remain skeptical of the success of this program is the "balloon effect", whereby pressure on one part of an industry simply displaces activity to another area. In the context of aerial eradication, when drug cultivation was halted in one area, it would simply reappear in another, undermining the intended effects of fumigation. As a result, coca farming has spread throughout Colombia, and the Colombian government reported that between 1999 and 2002 the number of provinces where coca was being grown rose from twelve to twenty-two. The United Nations Office on Drugs and Crime (UNODC) also presented research on coca cultivation in Colombia which showed the crop's high degree of mobility and increases in cultivation in ten provinces. For example, after aerial eradication in Guaviare in the 1990s, coca cultivation moved south toward Caquetá and Putumayo, where cultivation rose 55 percent.
Another issue raised by aerial eradication is rights violations: spraying destroys one of the few economic options available to many peasants and causes forced displacement, as peasants must find new places to grow their crops.
One notable aspect of the aerial spraying of illicit crops in Colombia is the size of the areas sprayed. In the single department of Putumayo, forty thousand hectares are said to have been sprayed. On a national scale, chemicals were sprayed on 139,000 hectares in 2003, contributing to the displacement of approximately 17,000 people. The displacements threatened local livelihoods and caused food insecurity.
The broader implications of aerial eradication efforts have been recorded by Colombia's Council for Human Rights and Displacement, which reported that between 2001 and 2002 aerial eradication displaced 75,000 people nationwide. The use of glyphosate, a potent herbicide, sparked concerns about its effects on human health and the environment. The scale of the operation implies substantial modification of the landscape, with consequences for flora and fauna in the affected regions. The long-term effects on the residents of the sprayed areas are a matter of growing study and concern, underscoring the multifaceted nature of Plan Colombia and, specifically, the role of aerial spraying in human and environmental health.
In addition to the previous criticisms, spraying has been associated with significant health concerns among residents in the affected areas. Reports have indicated that individuals living in these areas have a range of health issues including skin reactions, respiratory problems, and other ailments. This has sparked controversy with the United States due to the U.S. government's downplaying of the severity of the health risks. The officials had argued that illnesses arise as a result of the herbicides used by local farmers for individual crop control and cultivation rather than the aerial spraying operations.
Deepening the divide, the EPA provided the State Department with assessments of the health and environmental impacts of aerial eradication; however, these assessments were conducted without specific testing of the local Colombian environment, and the State Department did not provide the EPA with sufficient evidence on the delivery and mechanics of the spraying operations. In terms of environmental effects, because of the "balloon effect" farmers moved their crop cultivation into forests and national parks, resulting in deforestation, pollution of soil and waterways, and even an increased risk of extinction for Colombian bird and plant species.
Aside from these specific issues, questions have also been raised about the costs of fumigation and whether it consumes too much of the program's budget.
This intensive program to eradicate crops with aerial spraying is the backbone of the bilateral anti-drug partnership between Colombia and the United States, making it integral to both nations' interactions.
Proposed use of mycoherbicides
In 1999, the U.S. Congress added a provision to its Plan Colombia aid package that called for the employment of mycoherbicides against coca and opium crops. The potential use of Fusarium oxysporum as part of these efforts was questioned and opposed by environmentalists. Colombia rejected the proposal and the Clinton administration waived the provision in light of continued criticism.
Military programs
Compared to the counternarcotics measures, the military campaign waged against the FARC and other armed groups appears to have achieved more success. Military aid packages that were both part of, and separate from, Plan Colombia have driven the FARC out of most of their former territory and targeted the leaders of the insurgency, killing over two dozen of them. The U.S. has largely remained in a non-combatant role, providing real-time intelligence, training, and military equipment.
As the Colombian military (with U.S. support) continued to crack down on the FARC during the mid-2000s, the insurgency was sapped of much of its military might. Traditionally, the FARC has operated with a centralized, hierarchical command structure and a governing body called the Secretariat. With the deaths in 2008 of main leader Manuel Marulanda and second-in-command Raúl Reyes, followed by the killing of top tacticians Mono Jojoy and Alfonso Cano in 2010 and 2011, the group became increasingly disjointed. In 2001, for example, the FARC had over 18,000 fighters, but that figure fell to under 7,000 by 2014, mainly as a result of fighters abandoning the cause. In terms of land, the FARC once controlled a DMZ the size of Switzerland in 1999 and had encircled the capital of Bogotá, yet they were subsequently pushed back to the southern highlands of the country and into the border regions of Ecuador and Venezuela. As a result, FARC attacks in Colombia have declined significantly. Bombings on Occidental Petroleum's Caño Limón pipeline—a frequent target of the FARC—for instance, reached 178 separate incidents in 2001 compared to just 57 in 2007.
Even as the FARC have lost power and subsequently signed a peace agreement, there is concern about what will happen to the remnants of the group. One fear is that autonomous fronts will forge their own relationships with the cartels and continue the drug trade in a more dispersed manner.
As of 2008 Plan Colombia's U.S.-funded military programs comprised:
Army Aviation Brigade (2000–2008 cost: $844 million)
This program is executed by the U.S. State and Defense departments. It equips and trains the helicopter units of the Colombian Army. It is subdivided into various specific programs.
Plan Colombia Helicopter Program (PCHP) comprises helicopters provided for free by the U.S. government to the Colombian Army. The program needs 43 contract pilots and 87 contract mechanics to operate.
17 Bell UH-1N helicopters (former Canadian aircraft bought via the US government)
22 Bell UH-1H (Huey II) helicopters
13 Sikorsky UH-60L helicopters
Foreign Military Sales (FMS) helicopters are purchased by the Colombian Army but supported by U.S. personnel.
20 Sikorsky UH-60L helicopters
Technical Assistance Field Team
Based at Tolemaida Air Base (Melgar, Tolima), the team provides maintenance to U.S.-made helicopters.
Joint Initial Entry Rotary Wing School
Based at Melgar Air Base (Melgar, Tolima), it is a flight school for Colombian combat-helicopter pilots. Additional pilot training is provided at the U.S. Army's helicopter training center (Fort Rucker, Alabama)
National Police Air Service (2000–2008 cost: $463 million)
The U.S. State Department supports approximately 90 aircraft operated by the Colombian National Police. The U.S. Defense Department supports the construction of an aviation depot at Madrid Air Base (Madrid, Cundinamarca).
National Police Eradication Program (2000–2008 cost: $458 million)
This program is executed by a private company, DynCorp, under the supervision of the U.S. State Department's Bureau of International Narcotics and Law Enforcement Affairs (INL), and operates out of Patrick Space Force Base in Florida. U.S. State Department-owned planes spray chemicals to destroy coca and opium poppy crops in rural Colombia. From 2000 to 2008, more than 1 million hectares (2.5 million acres) of crops were destroyed.
13 Air Tractor AT-802 armored crop dusters
13 Bell UH-1N helicopters
4 Alenia C-27 cargo planes
National Police Interdiction Efforts (2000–2008 cost: $153 million)
The U.S. State Department equips and trains a Colombian National Police unit known as Junglas. The unit's 500 members are divided into three companies based in Bogotá, Santa Marta, and Tuluá.
Infrastructure Security Strategy (2000–2008 cost: $115 million)
This program secures part of the Cano Limon-Covenas Pipeline, benefiting international oil company Occidental Petroleum. Its air component has 2 Sikorsky UH-60 and 8 Bell UH-1H (Huey II) helicopters. Its ground component includes U.S. Special Forces training and equipment for 1,600 Colombian Army soldiers.
Army Ground Forces (2000–2008 cost: $104 million)
Joint Task Force Omega
It was established to operate in the central departments of Meta, Guaviare, and Caquetá. U.S. military advisors provided planning and intelligence support. The U.S. also provided weapons, ammunition, vehicles, and a base in La Macarena, Meta. It has about 10,000 soldiers.
Counternarcotics Brigade
It was established to operate in the southern departments of Putumayo and Caquetá. The U.S. Defense Department provided training and built bases in Tres Esquinas and Larandia, Caquetá. The U.S. State Department provided weapons, ammunition, and training. It has about 2,300 soldiers.
Joint Special Forces Command
It was established to pursue wanted individuals and rescue hostages. The U.S. provided training, weapons, ammunition, and a base near Bogotá. It has about 2,000 soldiers.
Police Presence in Conflict Zones (2000–2008 cost: $92 million)
This program aims to establish government presence in all Colombian municipalities. Fifteen percent of Colombian municipalities had no police presence in 2002. Today all municipalities are covered, but in many of them government presence is limited to a small number of policemen. The program organized 68 squadrons of Carabineros, of 120 policemen each. The U.S. Department of State provides training, weapons, ammunition, night-vision goggles, and other equipment.
Coastal and River Interdiction (2000–2008 cost: $89 million)
This program gave the Colombian Navy and Marines water vessels and aircraft to patrol the country's coast and rivers. The Navy received 8 interceptor boats and 2 Cessna Grand Caravan transport planes. The Marines received 95 patrol boats. The U.S. also provided both services with weapons, fuel, communications gear, night-vision goggles, and other equipment.
Air Interdiction (2000–2008 cost: $62 million)
The U.S. State and Defense departments provided the Colombian Air Force with 7 surveillance planes and their maintenance support. The program also operates five radars inside Colombia, other radars outside the country, and airborne radars. The program is also known as the Air Bridge Denial Program.
Another $2 billion were allocated from 2000 to 2008 to other programs including the Critical Flight Safety Program to extend the life of the U.S. State Department's fleet of aircraft, additional counternarcotics funding and aviation support for battlefield medical evacuations.
Nonmilitary programs
As of 2008, the U.S. has provided nearly $1.3 billion to Colombia through Plan Colombia's nonmilitary aid programs:
Alternative Development (2000–2008 cost: $500 million)
Internally Displaced Persons (2000–2008 cost: $247 million)
Demobilization and Reintegration (2000–2008 cost: $44 million)
Democracy and Human Rights (2000–2008 cost: $158 million)
Promote the Rule of Law (2000–2008 cost: $238 million)
Results
U.S. 2005 estimate
On April 14, 2006, the U.S. Drug Czar's office announced that its Colombian coca cultivation estimate for 2005 was significantly greater than that of any year since 2002. The press release from the U.S. Office of National Drug Control Policy stated that "coca cultivation declined by 8 percent, from 114,100 to 105,400 hectares, when those areas surveyed by the US government in 2004 were compared with the same areas in 2005". However, "the survey also found 144,000 hectares of coca under cultivation in 2005 in a search area that was 81 percent larger than that used in 2004...newly imaged areas show about 39,000 additional hectares of coca. Because these areas were not previously surveyed, it is impossible to determine for how long they have been under coca cultivation."
Critics of Plan Colombia and of ongoing fumigation programs considered this new information as a sign of the failure of current U.S. drug policy. The Center for International Policy stated that "even if we accept the U.S. government's argument that the high 2005 estimate owes to measurement in new areas, it is impossible to claim that Plan Colombia has brought a 50 percent reduction in coca-growing in six years...Either Colombia has returned to [the 2002] level of cultivation, or the 'reductions' reported in 2002 and 2003 were false due to poor measurement."
UN 2005 estimate
On June 20, 2006, the United Nations (UN) Office on Drugs and Crime (UNODC) presented its own survey on Andean coca cultivation, reporting a smaller increase of about 8% and confirming a rising trend shown by the earlier U.S. findings. UN surveys employ a different methodology and are part of the ongoing "Illicit Crop Monitoring Program" (ICMP) and its "Integrated Illicit Crop Monitoring System" (SIMCI) project. The UNODC press release stated that during 2005 the "area under coca cultivation in Colombia rose by 6,000 hectares to 86,000 after four consecutive years of decline despite the continued efforts of the Government to eradicate coca crops". This represents a small increase above the lowest figure recorded by UNODC's surveys, which was 80,000 hectares in 2004. For UNODC, current cultivation remained "still well below the peak of 163,300 hectares recorded in 2000", as "significant reductions [...] have been made in the past five years and overall figures remain nearly a third below their peak of 2000".
UNODC concluded that "substantial international assistance" is needed by Colombia and the other Andean countries "so they can provide poor coca farmers with sustainable alternative livelihoods" and that "aid efforts need to be multiplied at least tenfold in order to reach all impoverished farmers who need support".
Analysis
The results of Plan Colombia have been mixed. From the perspective of the U.S. and Colombian governments, the results of Plan Colombia have been positive. U.S. government statistics show that a significant reduction in leftover coca (total cultivation minus eradicated coca) has been observed from peak 2001 levels of 1,698 square kilometers to an estimated 1,140 square kilometers in 2004. It is said that a record high aerial herbicide fumigation campaign of 1,366 square kilometers in 2004 has reduced the total area of surviving coca, even as newer areas are planted.
Despite this, effective reductions may have reached their limits: in 2004, despite the record fumigation campaign, the total area of surviving coca remained essentially constant, with an estimated 1,139 square kilometers in 2003 followed by about 1,140 square kilometers in 2004.
Additionally, recent opium poppy cultivation has decreased while coca cultivation actually has not. Overall attempted coca cultivation by growers (total planted coca without taking eradication into account) increased somewhat, from 2,467 square kilometers in 2003 to 2,506 square kilometers in 2004. Coca cultivation reached its highest point during the program in 2002 at 2,671 square kilometers.
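Taken together, these figures are arithmetically consistent for 2004: subtracting the roughly 1,366 square kilometers fumigated that year from the roughly 2,506 square kilometers planted leaves about 1,140 square kilometers of surviving coca, the figure cited above.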
The U.S. and Colombian governments interpret this data to show a decline in potential production of cocaine, from a peak of 700 metric tons in 2001 to 460 in 2003 and 430 in 2004, as result of an increase in "newly planted [coca fields] in response to eradication," which should be less productive than mature coca.
U.S. government officials admitted in late 2005 that the market price of cocaine has yet to rise significantly, as would be expected from the above reductions in supply. They pointed to possible hidden stashes and other methods of circumventing the immediate effect of eradication efforts which allow for a relatively constant flow of drugs able to enter into the market, delaying the consequences of drug eradication. U.S. Drug Czar John Walters stated that "the reason for [reductions in supply not immediately driving prices up] is that you are not seizing and consuming coca leaves that were grown in 2004 in 2004. You are seizing and consuming coca leaves that were probably grown and processed in 2003 and 2002."
Other observers say this points to the ultimate ineffectiveness of the Plan in stopping the flow of drugs and in addressing more important underlying issues, such as providing a viable alternative for landless and other peasants, who turn to coca cultivation due to a lack of other economic possibilities while also having to cope with the tumultuous civil conflict between the state, guerrillas and paramilitaries. They also say that making coca difficult to grow and transport in one area will lead to the movement of drug cultivation to other areas, both inside and outside Colombia, a consequence also known as the balloon effect.
As an example of the above, critics claim that Peru and Bolivia, countries which had earlier dominated coca cultivation until local eradication efforts led to the transfer of that part of the illegal business to Colombia, have recently seen small increases in coca production despite record eradication in Colombia, which some years ago accounted for about 80% of the coca base produced in South America. Supporters of the Plan and of drug prohibition in general consider that the increase has not, so far, been significant enough to be a sign of the above "balloon effect".
The Colombian government announced that it eradicated around 73,000 coca hectares during 2006 which, according to it, would be above all local records in coca plant destruction. The Colombian government said that it plans to destroy an additional 50,000 hectares of coca in 2007.
The Weekly Standard hailed Colombia as "the most successful nation-building exercise by the United States in this century", noting:
Colombia used to be the world capital of kidnappings, but the number of victims is down from 2,882 in 2002 to 376 in 2008. Terrorist acts in the same period have fallen from 1,645 to 303. Homicides are also down dramatically: from 28,837 in 2002 to 13,632 in 2008, a 52 percent reduction. Three hundred fifty-nine Colombian soldiers and police lost their lives in battle in 2008, down from 684 in 2002. Between 2002 and 2008, the total hectares of cocaine eradicated rose from 133,127 to 229,227; tons of cocaine seized rose from 105.1 to 245.5; and the number of drug labs seized rose from 1,448 to 3,667. All statistics on narcotics production are hard to gather and therefore suspect, but the latest indications are that last year cocaine production in Colombia fell by 40 percent. Although Colombia's GDP grew by only 2.4 percent in 2008 as a result of the worldwide slowdown, it grew almost 8 percent in 2007, up from less than 2 percent in 2002. Unemployment is still high at 11.1 percent, but considerably lower than in 2002 when it was 15.7 percent.
See also
Colombia–United States relations
Leahy Law
Mérida Initiative
United States and South and Central America
Drug Enforcement Administration
Agent Orange
Sandra Suárez
References
Further reading
Journals
Dest, Anthony. “The Coca Enclosure: Autonomy Against Accumulation in Colombia.” World Development 137, no. 105166 (October 2020): 1-11. https://doi.org/10.1016/j.worlddev.2020.105166
News
Plan Colombia Misses Coca Target, BBC News, November 6, 2008
Texts
Villar, Oliver, and Drew Cottle. Cocaine, Death Squads, and the War on Terror: U.S. Imperialism and Class Struggle in Colombia. New York: Monthly Review Press, 2011. https://web.p.ebscohost.com/ehost/detail/detail?vid=0&sid=22cfd99f-1138-4a6e-a19d-f75c19e20514%40redis&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#AN=443289&db=e025xna
External links
Conductive Capacity of The State: An Assessment of Mexican Political Institutions Since the Mérida Initiative, Texas State, J.P. Olvera
Chomsky's chapter on Plan Colombia.
Photos, statistics, graphs, and maps
The Colombian Miracle, The Weekly Standard
Government resources
Successful coca eradication results in Colombia
Videos
Excerpt about Plan Colombia
4 video clips
Video clips
Fictional story. Love story about Fernando, an older man who has recently returned to his crime-ridden and drug-influenced hometown of Medellin, Colombia.
Volume two contains "China," "India," and "Colombia."
Drug control law
Colombia–United States relations
History of drug control
Colombian conflict
Law enforcement operations against organized crime in Colombia
Drugs in Colombia
Illegal drug trade in the United States
American terrorism victims | Plan Colombia | [
"Chemistry"
] | 9,379 | [
"Drug control law",
"Regulation of chemicals"
] |
336,897 | https://en.wikipedia.org/wiki/Function%20approximation | In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way. The need for function approximations arises in many branches of applied mathematics, and computer science in particular, such as predicting the growth of microbes in microbiology. Function approximations are used where theoretical models are unavailable or hard to compute.
One can distinguish two major classes of function approximation problems:
First, for known target functions approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).
Second, the target function, call it g, may be unknown; instead of an explicit formula, only a set of points of the form (x, g(x)) is provided. Depending on the structure of the domain and codomain of g, several techniques for approximating g may be applicable. For example, if g is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of g is a finite set, one is dealing with a classification problem instead.
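As an illustration of this second class of problems, the following minimal Python sketch (the target function, sample size and polynomial degree are arbitrary choices made for illustration) fits a polynomial to sampled points (x, g(x)) by least squares and evaluates the approximation at new points:

```python
import numpy as np

# Pretend the target g is unknown and only available through sampled points (x, g(x))
x = np.linspace(0.0, 2.0 * np.pi, 20)
y = np.sin(x)  # samples of the target function

# Approximate g by a degree-5 polynomial fitted with least squares
coeffs = np.polyfit(x, y, deg=5)
approx = np.poly1d(coeffs)

# Evaluate the approximation on a finer grid and measure the worst-case error
x_new = np.linspace(0.0, 2.0 * np.pi, 200)
max_error = np.max(np.abs(approx(x_new) - np.sin(x_new)))
print(f"maximum absolute error of the polynomial approximation: {max_error:.4f}")
```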
To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.
References
See also
Approximation theory
Fitness approximation
Kriging
Least squares (function approximation)
Radial basis function network
Regression analysis
Statistical approximations | Function approximation | [
"Mathematics"
] | 340 | [
"Mathematical analysis",
"Mathematical analysis stubs",
"Mathematical relations",
"Statistical approximations",
"Approximations"
] |
336,940 | https://en.wikipedia.org/wiki/Girsanov%20theorem | In probability theory, Girsanov's theorem or the Cameron-Martin-Girsanov theorem explains how stochastic processes change under changes in measure. The theorem is especially important in the theory of financial mathematics as it explains how to convert from the physical measure, which describes the probability that an underlying instrument (such as a share price or interest rate) will take a particular value or values, to the risk-neutral measure which is a very useful tool for evaluating the value of derivatives on the underlying.
History
Results of this type were first proved by Cameron-Martin in the 1940s and by Igor Girsanov in 1960. They have been subsequently extended to more general classes of process culminating in the general form of Lenglart (1977).
Significance
Girsanov's theorem is important in the general theory of stochastic processes since it enables the key result that if Q is a measure that is absolutely continuous with respect to P then every P-semimartingale is a Q-semimartingale.
Statement of theorem
We state the theorem first for the special case when the underlying stochastic process is a Wiener process. This special case is sufficient for risk-neutral pricing in the Black–Scholes model.
Let be a Wiener process on the Wiener probability space . Let be a measurable process adapted to the natural filtration of the Wiener process ; we assume that the usual conditions have been satisfied.
Given an adapted process define
where is the stochastic exponential of X with respect to W, i.e.
and denotes the quadratic variation of the process X.
If is a martingale then a probability
measure Q can be defined on such that Radon–Nikodym derivative
Then for each t the measure Q restricted to the unaugmented sigma fields is equivalent to P restricted to
Furthermore, if is a local martingale under P then the process
is a Q local martingale on the filtered probability space .
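The displayed formulas in the statement above were lost in transcription. As a hedged reconstruction using the usual notation (W the Wiener process, X the adapted process, [X] its quadratic variation), the standard form of the statement reads:

```latex
% Stochastic exponential of X with respect to W (X continuous)
Z_t = \mathcal{E}(X)_t = \exp\!\left( X_t - \tfrac{1}{2} [X]_t \right)

% If Z is a martingale, Q is defined through the Radon--Nikodym derivative
\left. \frac{dQ}{dP} \right|_{\mathcal{F}_t} = Z_t

% If Y is a local martingale under P, then
\tilde{Y}_t = Y_t - [Y, X]_t \quad \text{is a local martingale under } Q .
```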
Corollary
If X is a continuous process and W is a Brownian motion under measure P then
is a Brownian motion under Q.
The fact that is continuous is trivial; by Girsanov's theorem it is a Q local martingale, and by computing
it follows by Levy's characterization of Brownian motion that this is a Q Brownian
motion.
Comments
In many common applications, the process X is defined by
For X of this form, a sufficient condition for to be a martingale is Novikov's condition, which requires that
The stochastic exponential is the process Z which solves the stochastic differential equation
The measure Q constructed above is not equivalent to P on as this would only be the case if the Radon–Nikodym derivative were a uniformly integrable martingale, which the exponential martingale described above is not. On the other hand, as long as Novikov's condition is satisfied the measures are equivalent on .
Additionally, combining the above observation with this case, we see that the process
for is a Q Brownian motion. This was Igor Girsanov's original formulation of the above theorem.
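The formulas referred to in these comments were likewise lost. A sketch of the usual form, assuming an adapted integrand denoted μ and a horizon T:

```latex
% Common form of the process X
X_t = \int_0^t \mu_s \, dW_s

% Novikov's condition (sufficient for \mathcal{E}(X) to be a martingale on [0, T])
\mathbb{E}_P\!\left[ \exp\!\left( \tfrac{1}{2} \int_0^T \mu_s^2 \, ds \right) \right] < \infty

% Under Q, the drift-corrected process is a Brownian motion for t \le T
\tilde{W}_t = W_t - \int_0^t \mu_s \, ds
```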
Application to finance
This theorem can be used to show in the Black–Scholes model the unique risk-neutral measure, i.e. the measure in which the fair value of a derivative is the discounted expected value, Q, is specified by
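The defining formula is missing here. As a hedged sketch for the standard Black–Scholes model with stock drift μ, volatility σ and risk-free rate r (these symbols are assumptions, not taken from the original text), the risk-neutral measure Q is commonly written as:

```latex
\left. \frac{dQ}{dP} \right|_{\mathcal{F}_t}
  = \exp\!\left( -\theta W_t - \tfrac{1}{2} \theta^2 t \right),
\qquad \theta = \frac{\mu - r}{\sigma}
```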
Application to Langevin equations
Another application of this theorem, also given in the original paper of Igor Girsanov, is for stochastic differential equations. Specifically, let us consider the equation
where denotes a Brownian motion. Here and are fixed deterministic functions. We assume that this equation has a unique strong solution on . In this case Girsanov's theorem may be used to compute functionals of directly in terms of a related functional for Brownian motion. More specifically, we have for any bounded functional on continuous functions that
This follows by applying Girsanov's theorem, and the above observation, to the martingale process
In particular, with the notation above, the process
is a Q Brownian motion. Rewriting this in differential form as
we see that the law of under Q solves the equation defining , as is a Q Brownian motion. In particular, we see that the right-hand side may be written as , where Q is the measure taken with respect to the process Y, so the result now is just the statement of Girsanov's theorem.
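The displayed identity for this application was also lost. As a sketch of the standard result in the simplified case of a unit diffusion coefficient (σ ≡ 1, an assumption made here for clarity), the identity for a bounded functional Φ on continuous paths on [0, T] reads:

```latex
% SDE with drift \mu and unit diffusion
dX_t = \mu(t, X_t) \, dt + dW_t , \qquad X_0 = 0

% Functionals of X expressed through plain Brownian motion
\mathbb{E}\!\left[ \Phi(X) \right]
  = \mathbb{E}\!\left[ \Phi(W)
      \exp\!\left( \int_0^T \mu(t, W_t) \, dW_t
                   - \tfrac{1}{2} \int_0^T \mu(t, W_t)^2 \, dt \right) \right]
```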
A more general form of this application is that if both
admit unique strong solutions on , then for any bounded functional on , we have that
See also
References
External links
Notes on Stochastic Calculus which contain a simple outline proof of Girsanov's theorem.
Stochastic processes
Mathematical theorems
Mathematical finance | Girsanov theorem | [
"Mathematics"
] | 966 | [
"Applied mathematics",
"nan",
"Mathematical problems",
"Mathematical theorems",
"Mathematical finance"
] |
336,957 | https://en.wikipedia.org/wiki/Fingerpaint | Fingerpaint is a kind of paint intended to be applied with the fingers; it typically comes in tubes and is used by small children, though it has occasionally been used by adults either to teach art to children, or for their own use.
Finger paint for education and therapy
American educator Ruth Faison Shaw is credited with introducing fingerpainting as an art education medium. She developed her techniques in Rome, Italy, before patenting a safe non-toxic paint in 1931. After developing her expressive medium for children, Shaw devoted her attention to its therapeutic benefits. At the request of Carl Menninger, she taught at the Southard School at the Menninger Foundation in Topeka, Kansas, United States. Later she served as a consultant to the Department of Psychiatry at Memorial Hospital at the University of North Carolina at Chapel Hill. While working at Memorial Hospital, she met psychologist, John Thomas Payne. Payne became her successor in 1969 and continued her work until his death in 2000.
Today Shaw and Payne's work continues at the Shaw School and Studio in Durham, NC. Founder and director, Bryan Carey apprenticed with Payne from 1986 to 1993. At the suggestion of Payne, Carey devoted an additional seven years to the study of Shaw as an historical figure—artist, teacher and therapist. Carey and his protégée Jennifer Falchi continue the Shaw-Payne tradition by traveling and teaching their method of artistic self-expression and emotional healing to people of all ages and abilities.
Technique
Although the name implies that the paint is applied with the fingers, expert use of this medium makes use of the hands and lower arms too. Use of the entire arm smooths the paint on the paper prior to more detailed modeling with the fingers and other parts of the hand. Sometimes sponges, cloth, and other tools are used to obtain a specific texture.
Some artists are known to solely paint with their hands, as a way to become more intimate with the process. These artists do not use traditional fingerpaint. This style, "Reckless Art", is most accurately categorized as a subgenre of outsider art. Painters like Tyler Ramsey have vowed never to touch a brush, but the use of surgical gloves for safety is common when using toxic oils. Tyler Ramsey claims that, "Rejecting brushes gives a painter the opportunity to approach the craft from a fresh perspective." "Reckless Art" started in 2002 as a way to refute the idea that "Everything has been done already."
Finger painting artist Nick Benjamin claims he "prefers to paint using fingers as the technique results in a real bond between the artwork and artist and allows for some intricate blending not achievable with brushes".
Another popular finger painting artist is Iris Scott, who only uses her hands because she follows her intuition.
Outsider artist Jimmy Lee Sudduth explained that he painted with his fingers because they "never wore out" the way brushes did.
Fingerpaint treated repeatedly by means of decalcomania on the same paper tends to generate fractals, as studied at Yale University.
Materials
Fingerpaint is non-toxic and is usually sold in packages of six bright colors. The paints can also be prepared from non-toxic household products such as flour or cornstarch.
Some childcare facilities use instant pudding as fingerpaint, eliminating the need to keep the children's fingers out of their mouths.
See also
Art movement
Creativity techniques
List of art media
List of artistic media
List of art movements
List of most expensive paintings
List of art techniques
References
External links
Shaw School and Studio
Children's art
Painting techniques
Paints | Fingerpaint | [
"Chemistry"
] | 735 | [
"Paints",
"Coatings"
] |
336,975 | https://en.wikipedia.org/wiki/Candidate%20key | A candidate key, or simply a key, of a relational database is any set of columns that have a unique combination of values in each row, with the additional constraint that removing any column could produce duplicate combinations of values.
A candidate key is a minimal superkey,
i.e., a superkey that doesn't contain a smaller one. Therefore, a relation can have multiple candidate keys, each with a different number of attributes.
Specific candidate keys are sometimes called primary keys, secondary keys or alternate keys.
The columns in a candidate key are called prime attributes, and a column that does not occur in any candidate key is called a non-prime attribute.
Every relation without NULL values will have at least one candidate key: Since there cannot be duplicate rows, the set of all columns is a superkey, and if that isn't minimal, some subset of that will be minimal.
There is a functional dependency from the candidate key to all the attributes in the relation.
The superkeys of a relation are all the possible ways we can identify a row. The candidate keys are the minimal subsets of each superkey and as such, they are an important concept for the design of database schema.
Example
The definition of candidate keys can be illustrated with the following (abstract) example. Consider a relation variable (relvar) R with attributes (A, B, C, D) that has only the following two legal values r1 and r2:
Here r2 differs from r1 only in the A and D values of the last tuple.
For r1 the following sets have the uniqueness property, i.e., there are no two distinct tuples in the instance with the same attribute values in the set:
{A,B}, {A,C}, {B,C}, {A,B,C}, {A,B,D}, {A,C,D}, {B,C,D}, {A,B,C,D}
For r2 the uniqueness property holds for the following sets;
{B,C}, {B,D}, {C,D}, {A,B,C}, {A,B,D}, {A,C,D}, {B,C,D}, {A,B,C,D}
Since superkeys of a relvar are those sets of attributes that have the uniqueness property for all legal values of that relvar and because we assume that r1 and r2 are all the legal values that R can take, we can determine the set of superkeys of R by taking the intersection of the two lists:
{B,C}, {A,B,C}, {A,B,D}, {A,C,D}, {B,C,D}, {A,B,C,D}
Finally we need to select those sets for which there is no proper subset in the list, which are in this case:
{B,C}, {A,B,D}, {A,C,D}
These are indeed the candidate keys of relvar R.
We have to consider all the relations that might be assigned to a relvar to determine whether a certain set of attributes is a candidate key. For example, if we had considered only r1 then we would have concluded that {A,B} is a candidate key, which is incorrect. However, we might be able to conclude from such a relation that a certain set is not a candidate key, because that set does not have the uniqueness property (example {A,D} for r1). Note that the existence of a proper subset of a set that has the uniqueness property cannot in general be used as evidence that the superset is not a candidate key. In particular, note that in the case of an empty relation, every subset of the heading has the uniqueness property, including the empty set.
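To make the uniqueness property concrete, the following minimal Python sketch (relation rows are modeled as dictionaries; the instance and all names are illustrative, not the r1 and r2 of the example above) checks whether a given set of attributes has the uniqueness property for one particular relation instance:

```python
def has_uniqueness_property(rows, attrs):
    """Return True if no two rows agree on every attribute in attrs."""
    seen = set()
    for row in rows:  # each row is a dict mapping attribute name -> value
        projection = tuple(row[a] for a in sorted(attrs))
        if projection in seen:
            return False
        seen.add(projection)
    return True

# Example: a tiny instance of R(A, B, C, D)
r = [
    {"A": "a1", "B": "b1", "C": "c1", "D": "d1"},
    {"A": "a1", "B": "b2", "C": "c2", "D": "d1"},
]
print(has_uniqueness_property(r, {"A", "B"}))  # True for this instance
print(has_uniqueness_property(r, {"A", "D"}))  # False: both rows agree on A and D
```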
Determining candidate keys
The set of all candidate keys can be computed
e.g. from the set of functional dependencies.
To this end we need to define the attribute closure α+ for an attribute set α.
The set α+ contains all attributes that are functionally implied by α.
It is quite simple to find a single candidate key.
We start with a set α of attributes and try to remove successively each attribute.
If after removing an attribute the attribute closure stays the same,
then this attribute is not necessary and we can remove it permanently.
We call the result minimize(α).
If α is the set of all attributes,
then minimize(α) is a candidate key.
Actually we can detect every candidate key with this procedure
by simply trying every possible order of removing attributes.
However there are many more permutations of attributes (n! of them)
than subsets (2^n).
That is, many attribute orders will lead to the same candidate key.
There is a fundamental difficulty for efficient algorithms for candidate key computation:
Certain sets of functional dependencies lead to exponentially many candidate keys.
Consider, for example, the functional dependencies A1 → B1, B1 → A1, …, An → Bn, Bn → An over the attributes A1, B1, …, An, Bn,
which yield 2^n candidate keys:
every set containing exactly one of Ai or Bi for each i.
That is, the best we can expect is an algorithm that is efficient with respect to the number of candidate keys.
The following algorithm actually runs in polynomial time in the number of candidate keys and functional dependencies:
function find_candidate_keys(A, F)
/* A is the set of all attributes and F is the set of functional dependencies */
K[0] := minimize(A);
n := 1; /* Number of Keys known so far */
i := 0; /* Currently processed key */
while i < n do
for each α → β ∈ F do
/* Build a new potential key from the previous known key and the current FD */
S := α ∪ (K[i] − β);
/* Search whether the new potential key is part of the already known keys */
found := false;
for j := 0 to n-1 do
if K[j] ⊆ S then found := true;
/* If not, add it */
if not found then
K[n] := minimize(S);
n := n + 1;
i := i + 1
return K
The idea behind the algorithm is that given a candidate key K
and a functional dependency α → β,
the reverse application of the functional dependency yields
the set α ∪ (K − β),
which is a key, too.
It may however be covered by other already known candidate keys.
(The algorithm checks this case using the 'found' variable.)
If not, then minimizing the new key yields a new candidate key.
The key insight is that all candidate keys can be created this way.
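A minimal Python sketch of the procedure above (attribute sets are modeled as frozensets and functional dependencies as (lhs, rhs) pairs; the helper names and the example dependencies are illustrative additions, not part of the original article):

```python
def closure(attrs, fds):
    """Attribute closure of attrs under the functional dependencies fds."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

def minimize(attrs, all_attrs, fds):
    """Drop attributes from attrs while the closure still covers all attributes."""
    key = set(attrs)
    for a in sorted(attrs):  # deterministic removal order
        if closure(frozenset(key - {a}), fds) == all_attrs:
            key.discard(a)
    return frozenset(key)

def candidate_keys(all_attrs, fds):
    """Enumerate all candidate keys, mirroring the pseudocode above."""
    all_attrs = frozenset(all_attrs)
    keys = [minimize(all_attrs, all_attrs, fds)]
    i = 0
    while i < len(keys):
        for lhs, rhs in fds:
            s = lhs | (keys[i] - rhs)          # reverse application of the FD
            if not any(k <= s for k in keys):  # skip if covered by a known key
                keys.append(minimize(s, all_attrs, fds))
        i += 1
    return keys

# Example: R(A, B, C, D) with A -> B, B -> C, C -> A
fds = [(frozenset("A"), frozenset("B")),
       (frozenset("B"), frozenset("C")),
       (frozenset("C"), frozenset("A"))]
print(candidate_keys("ABCD", fds))  # the three candidate keys {C,D}, {B,D}, {A,D}
```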
See also
Alternate key, a key that is not selected as a primary key among candidate keys for a relationship
Compound key
Database normalization
Primary key
Relational database
Superkey
Prime implicant is the corresponding notion of a candidate key in boolean logic
References
External links
Relational Database Management Systems - Database Design - Terms of Reference - Keys: An overview of the different types of keys in the RDBMS (Relational Database Management System).
Data modeling
Relational model
Database management systems | Candidate key | [
"Engineering"
] | 1,457 | [
"Data modeling",
"Data engineering"
] |
337,011 | https://en.wikipedia.org/wiki/Armstrong%20Flight%20Research%20Center | The NASA Neil A. Armstrong Flight Research Center (AFRC) is an aeronautical research center operated by NASA. Its primary campus is located inside Edwards Air Force Base in California and is considered NASA's premier site for aeronautical research. AFRC operates some of the most advanced aircraft in the world and is known for many aviation firsts, including supporting the first crewed airplane to exceed the speed of sound in level flight (Bell X-1), highest speed by a crewed, powered aircraft (North American X-15), the first pure digital fly-by-wire aircraft (F-8 DFBW), and many others. AFRC operates a second site next to Air Force Plant 42 in Palmdale, California, known as Building 703, formerly the Rockwell International/North American Aviation production facility. There, AFRC houses and operates several of NASA's Science Mission Directorate aircraft including SOFIA (Stratospheric Observatory For Infrared Astronomy), a DC-8 Flying Laboratory, a Gulfstream C-20A UAVSAR and ER-2 High Altitude Platform. As of 2023, Bradley Flick is the center's director.
Established as the National Advisory Committee for Aeronautics Muroc Flight Test Unit (1946), the center was subsequently known as the NACA High-Speed Flight Research Station (1949), the NACA High-Speed Flight Station (1954), the NASA High-Speed Flight Station (1958) and the NASA Flight Research Center (1959). On 26 March 1976, the center was renamed the NASA Ames-Dryden Flight Research Center (DFRC) after Hugh L. Dryden, a prominent aeronautical engineer who died in office as NASA's deputy administrator in 1965 and Joseph Sweetman Ames, who was an eminent physicist, and served as president of Johns Hopkins University. The facility took its current name on 1 March 2014, honoring Neil Armstrong, a former test pilot at the center and the first human being to walk on the Moon.
AFRC was the home of the Shuttle Carrier Aircraft (SCA), a modified Boeing 747 designed to carry a Space Shuttle orbiter back to Kennedy Space Center if one landed at Edwards.
The center long operated the oldest B-52 Stratofortress bomber, a B-52B (dubbed Balls 8 after its tail number, 008) that had been converted to drop test aircraft. 008 dropped many supersonic test vehicles, from the X-15 to its last research program, the hypersonic X-43A, powered by a Pegasus rocket. Retired in 2004, the aircraft is on display near Edwards' North Gate.
Location
Though Armstrong Flight Research Center has always been located on the shore of Rogers Dry Lake, its precise location has changed over the years. It currently resides on the northwestern edge of the lake bed, just south of North Gate. Visitors must obtain access to both Edwards AFB and NASA AFRC.
The Rogers Dry Lake bed offers a unique landscape well suited for flight research: dry conditions, few rainy days per year, and large, flat, open spaces in which emergency landings can be performed. At times, the bed can host a runway length of over 40,000 feet. It is home to a compass rose some 2,000 feet across, in which aircraft can land into the wind in any direction.
List of current projects
X-56
X-57
X-59 QueSST
Dream Chaser
UAS in the NAS
TGALS
Historic projects
Douglas Skyrocket
NASA's predecessor, the NACA, operated the Douglas Skyrocket. A successor to the Air Force's Bell X-1, the D-558-II could operate under rocket or jet power. It conducted extensive tests into aircraft stability in the transonic range, optimal supersonic wing configurations, rocket plume effects, and high-speed flight dynamics. On November 20, 1953, the Douglas Skyrocket became the first aircraft to fly at over twice the speed of sound when it attained a speed of Mach 2.005. Like the X-1, the D-558-II could be air-launched using a B-29 Superfortress. Unlike the X-1, the Skyrocket could also take off from a runway with the help of JATO units.
Controlled Impact Demonstration
The Controlled Impact Demonstration was a joint project with the Federal Aviation Administration to research a new jet fuel that would decrease the damage due to fire in the crash of a large airliner. On 1 December 1984, a remotely piloted Boeing 720 aircraft was flown into specially built wing openers, which tore the wings open and sprayed fuel everywhere. Despite the new fuel additive, the resulting fireball was huge; the fire still took an hour to fully extinguish.
Even though the fuel additive did not prevent a fire, the research was not a complete failure. The additive still prevented the combustion of some fuel which flowed over the fuselage of the aircraft, and served to cool it, similar to how a conventional rocket engine cools its nozzle. Also, instrumented crash test dummies were in the airplane for the impact, and provided valuable research into other aspects of crash survivability for the occupants.
Linear Aerospike SR-71 Experiment
LASRE was a NASA experiment in cooperation with Lockheed Martin to study a reusable launch vehicle design based on a linear aerospike rocket engine. The experiment's goal was to provide in-flight data to help Lockheed Martin validate the computational predictive tools they developed to design the craft. LASRE was a small, half-span model of a lifting body with eight thrust cells of an aerospike engine. The experiment, mounted on the back of an SR-71 Blackbird aircraft, operated like a kind of "flying wind tunnel."
The experiment focused on determining how a reusable launch vehicle's engine plume would affect the aerodynamics of its lifting-body shape at specific altitudes and speeds reaching approximately . The interaction of the aerodynamic flow with the engine plume could create drag; design refinements look to minimize that interaction.
Lunar Landing Research Vehicle
The Lunar Landing Research Vehicle or LLRV was an Apollo Project era program to build a simulator for the Moon landing. The LLRVs, humorously referred to as "Flying Bedsteads", were used by the FRC, now known as the Armstrong Flight Research Center, at Edwards Air Force Base, California, to study and analyze piloting techniques needed to fly and land the Apollo Lunar Module in the moon's airless environment.
Aircraft on display
NB-52B Balls 8 NASA 008
Bell X-1E AF Ser. No. 46-063
F-104N - NASA 826
F-8 Supercritical wing - NASA 810
F-8 Digital Fly-by-wire - NASA 802
F-15B ACTIVE - NASA 837
Grumman X-29 - NASA 849
Lockheed SR-71 Blackbird LASRE - NASA 844
Northrop HL-10 Lifting Body - NASA 804
Rockwell HiMAT
Gallery
Notable employees
Neil Armstrong
Marta Bohn-Meyer
Bill Dana
C. Gordon Fullerton
David Hedgley
Bruce Peterson
R. Dale Reed
David Scott
Milt Thompson
J. Scott Howell
See also
Gromov Flight Research Institute - the Russia counterpart of the Armstrong Flight Research Centre
List of aerospace flight test centres
References
External links
X-Press official newsletter
Photo Collection for NASA Dryden Flight Research Center
The Spoken Word: Recollections of Dryden History, the Early Years, edited by Curtis Peebles
Flight Research: Problems Encountered and What They Should Teach Us by Milton O. Thompson—The early days of the DFRC
Aerospace research institutes
Aviation research institutes
Buildings and structures in Kern County, California
Edwards Air Force Base
NASA facilities
NASA visitor centers
Space technology research institutes
Science and technology in Greater Los Angeles
Aerospace engineering organizations
Flight Research Center
NASA research centers | Armstrong Flight Research Center | [
"Engineering"
] | 1,596 | [
"Aerospace engineering",
"Aerospace engineering organizations",
"Aeronautics organizations"
] |
337,011 | https://en.wikipedia.org/wiki/Distributed.net | Distributed.net is a volunteer computing effort that is attempting to solve large scale problems using otherwise idle CPU or GPU time. It is governed by Distributed Computing Technologies, Incorporated (DCTI), a non-profit organization under U.S. tax code 501(c)(3).
Distributed.net is working on RC5-72 (breaking RC5 with a 72-bit key). The RC5-72 project is on pace to exhaust the keyspace in just under 37 years as of January 2025, although the project will end whenever the required key is found. RC5 has eight unsolved challenges from RSA Security, although in May 2007, RSA Security announced that they would no longer be providing prize money for a correct key to any of their secret key challenges. distributed.net has decided to sponsor the original prize offer for finding the key as a result.
In 2001, distributed.net was estimated to have a throughput of over 30 TFLOPS. More recently, the throughput was estimated to be the same as a Cray XC40, as used in the Lonestar 5 supercomputer, or around 1.25 petaFLOPS.
History
A coordinated effort was started in February 1997 by Earle Ady and Christopher G. Stach II of Hotjobs.com and New Media Labs, as an effort to break the RC5-56 portion of the RSA Secret-Key Challenge, a 56-bit encryption algorithm that had a $10,000 USD prize available to anyone who could find the key. Unfortunately, this initial effort had to be suspended as the result of SYN flood attacks by participants upon the server.
A new independent effort, named distributed.net, was coordinated by Jeffrey A. Lawson, Adam L. Beberg, and David C. McNett along with several others who would serve on the board and operate infrastructure. By late March 1997 new proxies were released to resume RC5-56 and work began on enhanced clients. A cow head was selected as the icon of the application and the project's mascot.
The RC5-56 challenge was solved on October 19, 1997 after 250 days. The correct key was "0x532B744CC20999" and the plaintext message read "The unknown message is: It's time to move to a longer key length".
The RC5-64 challenge was solved on July 14, 2002 after 1,757 days. The correct key was "0x63DE7DC154F4D039" and the plaintext message read "The unknown message is: Some things are better left unread".
The search for Optimal Golomb Rulers (OGRs) of order 24, 25, 26, 27 and 28 were completed by distributed.net on 13 October 2004, 25 October 2008, 24 February 2009, 19 February 2014, and 23 November 2022 respectively.
Client
"DNETC" is the file name of the software application which users run to participate in any active distributed.net project. It is a command line program with an interface to configure it, available for a wide variety of platforms. distributed.net refers to the software application simply as the "client". , volunteers running 32-bit Windows with AMD FireStream enabled GPUs have contributed the most processing power to the RC5-72 project and volunteers running 64-bit Linux have contributed the most processing power to the OGR-28 project.
Portions of the source code for the client are publicly available, although users are not permitted to distribute modified versions themselves.
Distributed.net's RC5-72 project is available on the BOINC client through the Moo! Wrapper.
Development of GPU-enabled clients
In recent years, most of the work on the RC5-72 project has been submitted by clients that run on the GPU of modern graphics cards. Although the project had already been underway for almost 6 years when the first GPUs began submitting results, as of January 2025, GPUs represent 88% of all completed work units, and complete more than 95% of all work units each day.
NVIDIA
In late 2007, work began on the implementation of new RC5-72 cores designed to run on NVIDIA CUDA-enabled hardware, with the first completed work units reported in November 2008. On high-end NVIDIA video cards at the time, upwards of 600 million keys/second was observed. For comparison, a 2008-era high-end single CPU working on RC5-72 achieved about 50 million keys/second, representing a very significant advancement for RC5-72. As of January 2025, CUDA clients have completed almost 11% of all work on the RC5-72 project, and perform about 9% of the work each day.
AMD / ATI
Similarly, near the end of 2008, work began on the implementation of new RC5-72 cores designed to run on AMD FireStream-enabled hardware. Some of the products in the Radeon HD 5000 and 6000 series provided key rates in excess of 1.8 billion keys/second. As of January 2025, FireStream clients have completed over 21% of all work on the RC5-72 project. Daily production from FireStream clients has dropped below 0.5% as the majority of AMD GPU contributors now use the OpenCL client.
OpenCL
An OpenCL client entered beta testing in late 2012 and was released in 2013. As of January 2025, OpenCL clients have completed more than 56% of all work on the RC5-72 project, and now perform almost 86% of the work each day. No breakdown of OpenCL production by GPU manufacturer exists, as AMD, NVIDIA, and Intel GPUs all support OpenCL.
Timeline of distributed.net projects
Current
RSA Lab's 72-bit RC5 Encryption Challenge started 3 December 2002 — In progress, 13.228% complete as of 4 January 2025 (although RSA Labs has discontinued sponsorship)
Cryptography
RSA Lab's 56-bit RC5 Encryption Challenge — Completed 19 October 1997 (after 250 days and 47% of the key space tested).
RSA Lab's 56-bit DES-II-1 Encryption Challenge — Completed 23 February 1998 (after 39 days)
RSA Lab's 56-bit DES-II-2 Encryption Challenge — Ended 15 July 1998 (found independently by the EFF DES cracker after 2.5 days)
RSA Lab's 56-bit DES-III Encryption Challenge — Completed 19 January 1999 (after 22.5 hours with the help of the EFF DES cracker)
CS-Cipher Challenge — Completed 16 January 2000 (after 60 days and 98% of the key space tested).
RSA Lab's 64-bit RC5 Encryption Challenge — Completed 14 July 2002 (after days and 83% of the key space tested).
Golomb rulers
Optimal Golomb Rulers (OGR-24) — Completed 13 October 2004 (after days, confirmed predicted best ruler)
Optimal Golomb Rulers (OGR-25) — Completed 24 October 2008 (after days, confirmed predicted best ruler)
Optimal Golomb Rulers (OGR-26) — Completed 24 February 2009 (after days, confirmed predicted best ruler)
Optimal Golomb Rulers (OGR-27) — Completed 19 February 2014 (after days, confirmed predicted best ruler)
Optimal Golomb Rulers (OGR-28) — Completed 23 November 2022 (after days, confirmed predicted best ruler)
See also
RSA Secret-Key Challenge
Golomb Ruler
DES Challenges
Brute force attack
Cryptanalysis
Key size
List of volunteer computing projects
Berkeley Open Infrastructure for Network Computing
References
External links
Official website
Cryptographic attacks
Volunteer computing projects
Charities based in the United States
Organizations established in 1997
Articles which contain graphical timelines | Distributed.net | [
"Technology"
] | 1,612 | [
"Cryptographic attacks",
"Computer security exploits"
] |
337,019 | https://en.wikipedia.org/wiki/Robert%20Todd%20Lincoln | Robert Todd Lincoln (August 1, 1843 – July 26, 1926) was an American lawyer and businessman. The eldest son of President Abraham Lincoln and Mary Todd Lincoln, he was the only one of their four children to survive past the teenage years and also the only to outlive both parents. Robert Lincoln became a business lawyer and company president, and served as both United States Secretary of War (1881–1885) and the U.S. Ambassador to Great Britain (1889–1893).
Lincoln was born in Springfield, Illinois, and graduated from Harvard College. He then served on the staff of General Ulysses S. Grant as a captain in the Union Army in the closing days of the American Civil War. After the war was over, he married Mary Eunice Harlan, and they had three children together. Following completion of his law school studies in Chicago, he built a successful law practice, and became wealthy representing corporate clients.
Lincoln was often spoken of as a possible candidate for national office, including the presidency, but never took steps to mount a campaign. He served as Secretary of War in the administration of James A. Garfield, continuing under Chester A. Arthur, and as Minister to Great Britain in the Benjamin Harrison administration.
Lincoln became general counsel of the Pullman Company, and after founder George Pullman died in 1897, Lincoln assumed the company's presidency. After retiring from this position in 1911, Lincoln served as chairman of the board until 1924. In Lincoln's later years, he resided at homes in Washington, D.C., and Manchester, Vermont; the Manchester home, Hildene, was added to the National Register of Historic Places in 1977. In 1922, he took part in the dedication ceremonies for the Lincoln Memorial. Lincoln died at Hildene in July 1926, at age 82, and was buried at Arlington National Cemetery.
Early life
Robert Todd Lincoln was born in Springfield, Illinois, on August 1, 1843, to Abraham Lincoln and Mary Todd Lincoln. He had three younger brothers, Edward, William, and Tad. By the time Lincoln was born, his father had become a well-known member of the Whig political party and had served as a member of the Illinois state legislature for four terms. He was named after his maternal grandfather, Robert Smith Todd.
Some commentators believe that Robert Lincoln had a distant relationship with his father, in part because, during his formative years, Abraham Lincoln spent months on the judicial circuit. Lincoln recalled, "During my childhood and early youth he was almost constantly away from home, attending court or making political speeches."
Abraham apparently realized that his being away had a potential impact on his sons as evidenced by the following quote from his April 16, 1848, letter to his wife: "don't let the blessed fellows forget Father". One such example that gives insight into Robert's childhood in general was related by Joseph Humphreys, who had taken a train to Lexington, Kentucky, in 1847: "there were two lively youngsters on board who kept the whole train in a turmoil, and their long-legged father, instead of spanking the brats, looked pleased as Punch and aided and abetted the older one in mischief".
Lincoln took the Harvard College entrance examination in 1859, but failed fifteen out of the sixteen subjects. Subsequently, Lincoln was enrolled at Phillips Exeter Academy to prepare for college; he graduated Phillips Exeter in 1860. Admitted to Harvard, he graduated in 1864, having been elected vice-president of the Hasty Pudding Club, and was a member of the Delta Kappa Epsilon (Alpha chapter) fraternity. Welsh author Jan Morris wrote that Robert Lincoln, "having failed fifteen out of sixteen subjects in the Harvard entrance examination, got in at last and emerged an unsympathetic bore."
Civil War years
After graduating from Harvard, Robert Lincoln enrolled at Harvard Law School. Lincoln attended Harvard Law School from September 1864 to January 1865, but left after four months in order to join the Union Army. In 1893, Harvard awarded Lincoln the honorary degree of LL.D.
Mary Todd Lincoln prevented Robert Lincoln from joining the Army until shortly before the war's conclusion. President Lincoln argued "our son is not more dear to us than the sons of other people are to their mothers." In January 1865, the First Lady gave in and President Lincoln wrote Ulysses Grant, asking if Robert could be placed on his staff.
On February 11, 1865, Lincoln was commissioned as an assistant adjutant with a captain's rank. He served in the last weeks of the American Civil War on General Grant's staff, a status which meant, in all likelihood, he would not be involved in actual combat. He was present at Appomattox when Robert E. Lee surrendered. He resigned his commission on June 12, 1865, and returned to civilian life.
Lincoln was once saved from possible serious injury or death by Edwin Booth, whose brother, John Wilkes Booth, assassinated Robert's father. This event took place on a train platform in Jersey City, New Jersey. The exact date is uncertain, but it is believed to have taken place in late 1863 or early 1864, before John Wilkes Booth's assassination of President Lincoln. In a letter written in 1909 to the editor of The Century Magazine, Robert Lincoln recalled what had happened that day:
Months afterwards, while serving on Grant's US Army staff, Robert Lincoln recalled the occurrence to Colonel Adam Badeau, a fellow officer who happened to be a friend of Edwin Booth's. Badeau sent a letter to Booth, complimenting the actor for his heroism. Before receiving the letter, Booth had been unaware that the man whose life he had saved on the train platform was the president's son. The knowledge of whom he had saved that day was said to have been of some comfort to Booth following his brother's assassination of the president. Grant also sent Booth a letter of gratitude for his action.
On the night his father was assassinated, Robert had turned down an invitation to accompany the Lincolns to Ford's Theatre due to fatigue after spending much of his recent time in a covered wagon at the battlefront. Ten days later, Robert Lincoln wrote President Andrew Johnson requesting that he and his family be allowed to stay in the Executive mansion for two and a half weeks because his mother had told him that "she can not possibly be ready to leave here". Lincoln also acknowledged that he was aware of the "great inconvenience" this would be to Johnson since he had become president of the United States only a short time earlier.
In late April, 1865, Robert moved to the city of Chicago with his remaining family. He attended law classes at the Old University of Chicago and studied law at the Chicago firm of Scammon, McCagg & Fuller. On January 1, 1866, Lincoln moved out of the apartment he shared with his mother and brother. He rented his own rooms in downtown Chicago to "begin to live with some degree of comfort" which he had not known when living in cramped conditions with his family. Lincoln graduated from Northwestern University with an LL.B. in 1866 and became licensed as an attorney in Chicago on February 22, 1867. He was certified to practice law four days later on February 26, 1867.
Family
Marriage and children
On September 24, 1868, Lincoln married Mary Eunice Harlan, daughter of Senator James Harlan and Ann Eliza Peck of Mount Pleasant, Iowa.
They had three children, two daughters and one son: Mary "Mamie" Lincoln, Abraham "Jack" Lincoln II, and Jessie Harlan Lincoln.
Robert, Mary, and the children would often leave their hot city life behind for the cooler climate of Mount Pleasant; during the 1880s the family would summer at the Harlan home there. The Harlan-Lincoln home, built in 1876, still stands today. Donated by Mary Harlan Lincoln to Iowa Wesleyan College in 1907, it now serves as a museum containing a collection of artifacts from the Lincoln family and from Abraham Lincoln's presidency.
Of Robert's children, Jessie Harlan Lincoln Beckwith had two children, namely Mary Lincoln Beckwith ("Peggy") and Robert Todd Lincoln Beckwith, but neither of them had children of their own. Robert's other daughter, Mary ("Mamie") Todd Lincoln, married Charles Bradford Isham in 1891; they had one son, Lincoln Isham, who married Leahalma Correa in 1919 but died without children. The last person acknowledged and known to be of Lincoln lineage, Robert's grandson Robert Todd Lincoln Beckwith, died in 1985.
Relationship with Mary Todd Lincoln
In 1871, Lincoln's only surviving brother, Tad, died at age 18, leaving his mother devastated. Lincoln was already concerned about what he thought were his mother's compulsive and extravagant spending, hallucinations, and eccentric behaviors. Fearing that she was a danger to herself, he arranged to have her committed to a psychiatric hospital in Batavia, Illinois, in 1875. With his mother in the hospital, he was left with control of her finances, although he used his own money to pay for her care. As the head of the family, he felt that it was his duty to protect her, although he did wish that she would have "every liberty and privilege" restored to her as soon as she was better. On May 20, 1875, she arrived at Bellevue Place, a private, upscale sanitarium in the Fox River Valley.
Three months after she started living there, Mary Lincoln was able to escape from Bellevue Place. She smuggled letters to her lawyer, James B. Bradwell, and his wife, Myra. Mary also wrote to the editor of the Chicago Times, and soon the embarrassment Robert had hoped to avoid came to the forefront, with his motives and character being publicly questioned. Bellevue's director, who at Mary's commitment trial assured the jury she would benefit from treatment at his facility, now declared her well enough to go to Springfield to live with her sister. Her commitment and subsequent events alienated Lincoln from his mother, and they may not have reconciled until shortly before her unexpected death.
Politics
Secretary of War (1881–1885)
From 1876 to 1877 Lincoln served as Town Supervisor of South Chicago, a town which was later absorbed into the city of Chicago. In 1877 he rejected President Rutherford B. Hayes' offer to appoint him Assistant Secretary of State. He was appointed by President James Garfield as Secretary of War and served from 1881 to 1885 under Garfield and then Chester A. Arthur.
During his term in office, the Cincinnati Riots of 1884 broke out over a case in which a jury gave a verdict of manslaughter rather than murder in a case that many suspected was rigged. Forty-five people died during three days of rioting before U.S. troops dispatched by Lincoln reestablished calm.
Subsequent to serving as Secretary of War, Lincoln assisted Oscar Dudley to establish the Illinois Industrial Training School for Boys (now known as Glenwood Academy) in Norwood Park in 1887, after Dudley (a Humane Society employee) "discovered more homeless, neglected and abused boys than dogs on the city streets."
Republican politics
From 1884 to 1912, Lincoln's name was mentioned in varying degrees of seriousness as a candidate for the Republican presidential or vice-presidential nomination. He repeatedly disavowed any interest in running and stated he would not accept nomination for either position. His likeness was included in an 1888 set of "Presidential Possibilities" cards.
Minister to the Court of St James's
Lincoln served as the U.S. minister to Great Britain, formally to the Court of St James's, from 1889 to 1893 under President Benjamin Harrison. Lincoln's teenage son, Abraham II "Jack", died during this time in Europe. After serving as minister, Lincoln returned to private business as a lawyer.
Later life and career
Robert fought to preserve and protect his father's legacy, clashing with Abraham Lincoln biographer William Herndon over Herndon's statements about his father. As a result of their confrontations over his Lincoln biography, in 1890 Herndon wrote to Jesse Weik, his Lincoln biography collaborator, that Robert was "a Todd and not a Lincoln ... a little bitter fellow of the pig-headed kind, silly and cold and selfish."
Lincoln was general counsel of the Pullman Palace Car Company under George Pullman, and was named president after Pullman's death in 1897. According to Almont Lindsey's 1942 book, The Pullman Strike, Lincoln arranged to have Pullman quietly excused from the subpoena issued for him to testify in the 1895 conspiracy trials of the American Railway Union's leaders (during the 1894 Pullman Strike). Pullman hid from the deputy marshal sent to his office with the subpoena and then appeared with Lincoln to meet privately with Judge Grosscup after the jury had been dismissed. In 1911, Lincoln became chairman of the Pullman Company board, a position he held until 1924.
A serious nonprofessional astronomer, Lincoln had an observatory built at Hildene, and a 1909 Warner & Swasey refracting telescope with a six-inch John A. Brashear objective lens was installed. Lincoln's telescope and observatory have been restored and it was used by a local astronomy club in the early 2000s. Lincoln was also a dedicated golfer, and served as president of the Ekwanok Country Club in Manchester. His last public appearance was on May 30, 1922, at the dedication ceremony for his father's memorial in Washington, D.C.
Presence at assassinations
Robert Lincoln was coincidentally either present or nearby when three presidential assassinations occurred.
Lincoln was not present at Ford's Theatre when his father was assassinated but he was at the White House nearby, and rushed to be with his parents. The president was moved to the Petersen House after the shooting, where Robert attended his father's deathbed.
Lincoln was an eyewitness when Charles J. Guiteau shot President James A. Garfield at the Sixth Street Train Station in Washington, D.C., on July 2, 1881. Lincoln was serving as Garfield's Secretary of War at the time.
Lincoln was at the 1901 Pan-American Exposition in Buffalo, New York, when President William McKinley was shot by Leon Czolgosz. Though not an eyewitness, he was just outside the Temple of Music when the shooting actually occurred.
Lincoln himself recognized these coincidences. He is said to have refused a later presidential invitation with the comment, "No, I'm not going, and they'd better not ask me, because there is a certain fatality about presidential functions when I am present."
Death
Robert Todd Lincoln died in his sleep at Hildene, his Vermont home, on July 26, 1926, at age 82. The cause of death was given by his physician as a "cerebral hemorrhage induced by arteriosclerosis". His body was stored in the receiving vault at Dellwood Cemetery from July 1926 until March 1928 when arrangements were made to inter his remains at Arlington National Cemetery.
Robert had long expressed his intention to be buried in the Lincoln Tomb with his family at the Oak Ridge Cemetery in Springfield. Two weeks after his death, his widow Mary Harlan Lincoln wrote to her husband's niece of an inspired thought: "...[O]ur darling was a personage, made his own history, independently of his great father, and should have his own place 'in the sun'".
Lincoln's body was buried at Arlington National Cemetery in a sarcophagus designed by the sculptor James Earle Fraser. He is buried together with his wife, Mary, and their son, Abraham II ("Jack"), who had died in London, England, of sepsis in 1890 at the age of 16. Weeks after Jack's death, Robert wrote to his cousin Charles Edwards, "We had a long & most anxious struggle and at times had hopes of saving our boy. It would have been done if it had depended only on his own marvelous pluck & patience now that the end has come, there is a great blank in our future lives & an affliction not to be measured."
Legacy
Historian Michael Burlingame considered Robert Todd Lincoln to be "a particularly unfortunate, even tragic figure." Lincoln himself once said, "No one wanted me for Secretary of War... For minister to England... For president of the Pullman Company; they wanted Abraham Lincoln's son." Nevertheless, he accepted the appointments and was very well-paid, becoming a millionaire lawyer and businessman, fond of the pleasures of the wealthy conservative Victorian gentlemen of his social circle.
Lincoln is considered to have had little in common with his father personally or politically, not being humorous or unpretentious, but rather cold, stuffy, and aloof. Fanny Seward, daughter of secretary of state William H. Seward, described him, however, as "ready and easy in conversation having, I fancy, considerable humor in his disposition...agreeable, good-natured, and intelligent".
Lincoln was the last surviving member of the Garfield and Arthur Cabinets, and the last-surviving witness of Lee's surrender at Appomattox. The Lincoln Sea, a body of water in the Arctic Ocean between Canada and Greenland, was named after then Secretary of War Lincoln on Adolphus Greely's 1881–1884 Arctic expedition.
Lincoln's last known surviving descendant, Robert Todd Lincoln Beckwith, died December 24, 1985.
Cultural depictions
Robert Todd Lincoln as a character has appeared multiple times on film, in television programs, and in dramatic productions.
Films
Edwin Mills in Abe Lincoln in Illinois (1940)
Joseph Gordon-Levitt in Steven Spielberg's Lincoln (2012)
Television
Kieran Mulroney in Tad (1995)
Gregory Cooke in the miniseries Lincoln (1988)
Wil Wheaton in The Day Lincoln Was Shot (1998)
Brett Dalton in Killing Lincoln (2013)
Neal Bledsoe in Timeless (2016)
James Carroll Jordan in Sandburg's Lincoln (1974), with Hal Holbrook as Abraham Lincoln.
Nick Robinson in History of the World, Part II (2023)
Maxwell Korn in Manhunt (2024)
Stage plays
Michael Cristofer in The Last of Mrs. Lincoln (1976). The Last of Mrs. Lincoln, starring Julie Harris, was also seen on television, on PBS as part of a series called Hollywood Television Theater.
See also
List of people on the cover of Time Magazine: 1920s – March 8, 1926
Lincoln family tree
Notes
References
Citations
Print sources
Cooper, Dan. "President Lincoln of the Pullman Company," Financial History (Fall 2013), Issue 108, pp 10–39.
External links
Robert Todd Lincoln
Photographs of Robert Todd Lincoln
Original Letters and Manuscripts: Robert Todd Lincoln Shapell Manuscript Foundation
1843 births
1926 deaths
19th-century American diplomats
Amateur astronomers
Ambassadors of the United States to the United Kingdom
American people of English descent
Arthur administration cabinet members
19th-century American politicians
Burials at Arlington National Cemetery
Children of presidents of the United States
Garfield administration cabinet members
Harvard Law School alumni
Harvard College alumni
Illinois city council members
Illinois lawyers
Illinois Republicans
Lincoln family
People associated with the assassination of Abraham Lincoln
People of Illinois in the American Civil War
Phillips Exeter Academy alumni
Politicians from Springfield, Illinois
Union army officers
United States secretaries of war
People from Manchester, Vermont | Robert Todd Lincoln | [
"Astronomy"
] | 3,932 | [
"Astronomers",
"Amateur astronomers"
] |
337,082 | https://en.wikipedia.org/wiki/Biological%20Weapons%20Convention | The Biological Weapons Convention (BWC), or Biological and Toxin Weapons Convention (BTWC), is a disarmament treaty that effectively bans biological and toxin weapons by prohibiting their development, production, acquisition, transfer, stockpiling and use. The treaty's full name is the Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on their Destruction.
Having entered into force on 26 March 1975, the BWC was the first multilateral disarmament treaty to ban the production of an entire category of weapons of mass destruction. The convention is of unlimited duration. As of July 2024, 187 states have become party to the treaty. Four additional states have signed but not ratified the treaty, and another six states have neither signed nor acceded to the treaty.
The BWC is considered to have established a strong global norm against biological weapons. This norm is reflected in the treaty's preamble, which states that the use of biological weapons would be "repugnant to the conscience of mankind". It is also demonstrated by the fact that not a single state today declares to possess or seek biological weapons, or asserts that their use in war is legitimate. In light of the rapid advances in biotechnology, biodefense expert Daniel Gerstein has described the BWC as "the most important arms control treaty of the twenty-first century". However, the convention's effectiveness has been limited due to insufficient institutional support and the absence of any formal verification regime to monitor compliance.
History
While the history of biological warfare goes back more than six centuries to the Siege of Caffa in 1346 CE, international restrictions on biological warfare began only with the 1925 Geneva Protocol, which prohibits the use but not the possession or development of chemical and biological weapons. Upon ratification of the Geneva Protocol, several countries made reservations regarding its applicability and use in retaliation. Due to these reservations, it was in practice a "no-first-use" agreement only. In particular, it did not prevent multiple states from starting and scaling offensive biological weapons programs, including the United States (active from 1943 to 1969) and the Soviet Union (active from the 1920s until at least 1992).
The American biowarfare system was terminated in 1969 by President Nixon when he issued his Statement on Chemical and Biological Defense Policies and Programs. The statement ended, unconditionally, all U.S. offensive biological weapons programs. When Nixon ended the program the budget was $300 million annually.
The BWC sought to supplement the Geneva Protocol and was negotiated in the Conference of the Committee on Disarmament in Geneva from 1969 to 1972, following the conclusion of the negotiation of the Treaty on the Non-Proliferation of Nuclear Weapons. Of significance was a 1968 British proposal to separate consideration of chemical and biological weapons and to first negotiate a convention on biological weapons. The negotiations gained further momentum when the United States decided to unilaterally end its offensive biological weapons program in 1969 and support the British proposal. In March 1971, the Soviet Union and its allies reversed their earlier opposition to the separation of chemical and biological weapons and tabled their own draft convention. The final negotiation stage was reached when the United States and the Soviet Union submitted identical but separate drafts of the BWC text on 5 August 1971. The BWC was opened for signature on 10 April 1972 with ceremonies in London, Moscow, and Washington, D.C., and it entered into force on 26 March 1975 after the ratification by 22 states, including its three depositary governments (the Soviet Union, the United Kingdom, and the United States).
There have been some concerned scientists who have called for the modernization of the BWC at the periodic Review Conferences. For example, Filippa Lentzos and Gregory Koblentz pointed out in 2016 that "crucial contemporary debates about new developments" for the BWC Review Conferences included "gain-of-function experiments, potential pandemic pathogens, CRISPR and other genome editing technologies, gene drives, and synthetic biology".
Treaty obligations
With only 15 articles, the BWC is relatively short. Over time, the treaty has been interpreted and supplemented by additional politically binding agreements and understandings reached by its States Parties at eight subsequent Review Conferences.
Summary of key articles
Article I: Never under any circumstances to develop, produce, stockpile, acquire, or retain biological weapons.
Article II: To destroy or divert to peaceful purposes biological weapons and associated resources prior to joining.
Article III: Not to transfer, or in any way assist, encourage, or induce anyone else to acquire or retain biological weapons.
Article IV: To take any national measures necessary to implement the provisions of the BWC domestically.
Article V: Undertaking to consult bilaterally and multilaterally and cooperate in solving any problems which may arise in relation to the objective, or in the application, of the BWC.
Article VI: Right to request the United Nations Security Council to investigate alleged breaches of the BWC and undertaking to cooperate in carrying out any investigation initiated by the Security Council.
Article VII: To assist States which have been exposed to danger as a result of a violation of the BWC.
Article X: Undertaking to facilitate, and have the right to participate in, the fullest possible exchange of equipment, materials and information for peaceful purposes.
The remaining articles concern the BWC's compatibility with the 1925 Geneva Protocol (Article VIII), negotiations to prohibit chemical weapons (Article IX), amendments (Article XI), Review Conferences (Article XII), duration (Article XIII, 1), withdrawal (Article XIII, 2), joining the convention, depositary governments, and conditions for entry into force (Article XIV, 1–5), and languages (Article XV).
Article I: Prohibition of biological weapons
Article I is the core of the BWC and requires each state "never in any circumstances to develop, produce, stockpile or otherwise acquire or retain:
microbial or other biological agents, or toxins whatever their origin or method of production, of types and in quantities that have no justification for prophylactic, protective or other peaceful purposes;
weapons, equipment or means of delivery designed to use such agents or toxins for hostile purposes or in armed conflict."
Article I does not prohibit any specific biological agents or toxins as such but rather certain purposes for which they may be employed. This prohibition is known as the general-purpose criterion and is also used in Article II, 1 of the 1993 Chemical Weapons Convention (CWC). The general-purpose criterion covers all hostile uses of biological agents, including those developed in the future, and recognizes that biological agents and toxins are inherently dual use. While these agents may be employed for nefarious ends, they also have several legitimate peaceful purposes, including developing medicines and vaccines to counter natural or deliberate disease outbreaks. Against this background, Article I only considers illegitimate those types and quantities of biological agents or toxins and their means of delivery which cannot be justified by prophylactic, protective, or other peaceful purposes; regardless of whether the agents in question affect humans, animals, or plants. A disadvantage of this intent-based approach is a blurring of the line between defensive and offensive biological weapons research.
While it was initially unclear during the early negotiations of the BWC whether viruses would be regulated by it since they lie "at the edge of life"—they possess some but not all of the characteristics of life—viruses were defined as biological agents in 1969 and thus fall within the BWC's scope.
While Article I does not explicitly prohibit the "use" of biological weapons as it was already considered to be prohibited by the 1925 Geneva Protocol, it is still regarded as a violation of the BWC, as reaffirmed by the final document of the Fourth Review Conference in 1996.
Article III: Prohibition of transfer and assistance
Article III bans the transfer, encouragement, assistance, or inducement of anyone, whether governments or non-state actors, in developing or acquiring any of the agents, toxins, weapons, equipment, or means of delivery specified in Article I. The article's objective is to prevent the proliferation of biological weapons by limiting the availability of materials and technology which may be used for hostile purposes.
Article IV: National implementation
Article IV obliges BWC States Parties to implement the convention's provisions domestically. This is essential to allow national authorities to investigate, prosecute, and punish any activities prohibited by the BWC; to prevent access to biological agents for harmful purposes; and to detect and respond to the potential use of biological weapons. National implementing measures may take various forms, such as legislation, regulations, codes of conduct, and others. Which implementing measures are adequate for a state depends on several factors, including its legal system, its size and geography, the development of its biotechnology industry, and its participation in regional economic cooperation. Since no one set of measures fits all states, the implementation of specific obligations is left to States Parties' discretion, based on their assessment of what will best enable them to ensure compliance with the BWC.
A database of over 1,500 laws and regulations that States Parties have enacted to implement the BWC domestically is maintained by the non-governmental organization VERTIC. A similar database on national implementation measures developed by VERTIC and the United Nations Institute for Disarmament Research was launched in 2023. These concern the penal code, enforcement measures, import and export controls, biosafety and biosecurity measures, as well as domestic and international cooperation and assistance. For instance, the 1989 Biological Weapons Anti-Terrorism Act implemented the Convention for the United States. A 2023 VERTIC report concluded that "gaps persist in States Parties' legal frameworks for implementing the Convention at the national level". The BWC's Implementation Support Unit issued a background information document on "strengthening national implementation" in 2018 and an update in 2019.
Article V: Consultation and cooperation
Article V requires States Parties to consult one another and cooperate in disputes concerning the purpose or implementation of the BWC. The Second Review Conference in 1986 agreed on procedures to ensure that alleged violations of the BWC would be promptly addressed at a consultative meeting when requested by a State Party. These procedures were further elaborated by the Third Review Conference in 1991. Two formal consultative meetings have taken place, the first in 1997 at the request of Cuba, and the second in 2022 at the request of the Russian Federation.
Article VI: Complaint about an alleged BWC violation
Article VI allows States Parties to lodge a complaint with the United Nations Security Council if they suspect a breach of treaty obligations by another state. Moreover, the article requires states to cooperate with any investigation which the Security Council may launch. There is a general unwillingness to invoke Article VI due to the highly political nature of the Security Council, where the five permanent members—China, France, Russia, the United Kingdom, and the United States—hold veto power, including over investigations for alleged treaty violations. One formal complaint pursuant to Article VI has been lodged by the Russian Federation in 2022.
Article VII: Assistance after a BWC violation
Article VII obliges States Parties to provide assistance to states that so request it if the UN Security Council decides they have been exposed to danger as a result of a violation of the BWC. In addition to helping victims in the event of a biological weapons attack, the purpose of the article is to deter such attacks from occurring in the first place by reducing their potential for harm through international solidarity and assistance. Despite no state ever having invoked Article VII, the article has drawn more attention in recent years, in part due to increasing evidence of terrorist organizations being interested in acquiring biological weapons and also following various naturally occurring epidemics. In 2018, the BWC's Implementation Support Unit issued a background document describing a number of additional understandings and agreements on Article VII that have been reached at past Review Conferences.
Article X: Peaceful cooperation
Article X protects States Parties' right to exchange biological materials, technology, and information to be used for peaceful purposes. The article states that the implementation of the BWC shall avoid hampering the economic or technological development of States Parties or peaceful international cooperation on biological projects. The Seventh Review Conference in 2011 established an Article X database, which matches voluntary requests and offers for assistance and cooperation among States Parties and international organizations.
Membership and joining the BWC
The BWC has 187 States Parties as of July 2024, with the Federated States of Micronesia the most recent to become a party. Four states have signed but not ratified the treaty: Egypt, Haiti, Somalia, and Syria. Six additional states have neither signed nor acceded to the treaty: Chad, Comoros, Djibouti, Eritrea, Israel and Kiribati. For one of these ten states not party to the convention, the process of joining is well advanced, while an additional two states have started the process. The BWC's degree of universality remains low compared to other weapons of mass destruction regimes, including the Chemical Weapons Convention with 193 parties and the Treaty on the Non-Proliferation of Nuclear Weapons with 191 parties.
States can join the BWC through either ratification, accession or succession, in accordance with their national constitutional processes, which often require parliamentary approval. Ratification applies to states which had previously signed the treaty before it entered into force in 1975. Since then, signing the treaty is no longer possible, but states can accede to it. Succession concerns newly independent states that accept to be bound by a treaty that the predecessor state had joined. The Convention enters into force on the date when an instrument of ratification, accession, or succession is deposited with at least one of the depositary governments (the Russian Federation, the United Kingdom, and the United States).
Several countries made reservations when ratifying the BWC, declaring that ratification did not imply their complete satisfaction that the treaty allows the stockpiling of biological agents and toxins for "prophylactic, protective or other peaceful purposes", nor that it implied recognition of other countries they do not recognize.
Verification and compliance
Confidence-building measures
At the Second Review Conference in 1986, BWC States Parties agreed to strengthen the treaty by exchanging annual confidence-building measures (CBMs). These politically binding reports aim to prevent or reduce the occurrence of ambiguities, doubts and suspicions, and to improve international cooperation on peaceful biological activities. CBMs are the main formal mechanism through which States Parties regularly exchange compliance-related information. After revisions by the Third, Sixth, and Seventh Review Conferences, the current CBM form requires states to provide information annually on six issues (CBM D was deleted by the Seventh Review Conference in 2011):
CBM A: (i) research centres and laboratories, and (ii) national biological defence research and development programs
CBM B: outbreaks of infectious diseases and similar occurrences caused by toxins
CBM C: efforts to promote research results
CBM E: legislation, regulations, and other measures
CBM F: past activities in offensive and/or defensive biological research and development programs
CBM G: vaccine production facilities
While the number of CBM submissions has increased over time, the overall participation rate remains slightly above 50 percent. In 2018, an online CBM platform was launched to facilitate the electronic submission of CBM reports. An increasing number of states are making their CBM reports publicly available on the platform, but many reports remain only accessible to other states. The history and implementation of the CBM system have been described by the BWC Implementation Support Unit in a 2022 report to the Ninth Review Conference.
Failed negotiation of a verification protocol
Unlike the chemical or nuclear weapons regimes, the BWC lacks both a system to verify states' compliance with the treaty and a separate international organization to support the convention's effective implementation. Agreement on such a system was not feasible at the time the BWC was negotiated, largely due to Cold War politics but also due to a belief it was not necessary and that the BWC would be difficult to verify. U.S. biological weapons expert Jonathan B. Tucker commented that "this lack of an enforcement mechanism has undermined the effectiveness of the BWC, as it is unable to prevent systematic violations".
Earlier drafts of the BWC included limited provisions for addressing compliance issues, but these were removed during the negotiation process. Some countries attempted to reintroduce these provisions when the BWC text was submitted to the General Assembly in 1971 but were unsuccessful, as were attempts led by Sweden at the First Review Conference in 1980.
Following the end of the Cold War, a long negotiation process to add a verification mechanism began in 1991, when the Third Review Conference established an expert group on verification, VEREX, with the mandate to identify and examine potential verification measures from a scientific and technical standpoint. During four meetings in 1992 and 1993, VEREX considered 21 verification measures, including inspections of facilities, monitoring relevant publications, and other on-site and off-site measures. Another stimulus came from the successful negotiation of the Chemical Weapons Convention, which opened for signature in 1993.
Subsequently, a Special Conference of BWC States Parties in 1994 considered the VEREX report and decided to establish an Ad Hoc Group to negotiate a legally-binding verification protocol. The Ad Hoc Group convened 24 sessions between 1995 and 2001, during which it negotiated a draft protocol to the BWC which would establish an international organization and introduce a verification system. This organization would employ inspectors who would regularly visit declared biological facilities on-site and could also investigate specific suspect facilities and activities. Nonetheless, states found it difficult to agree on several fundamental issues, including export controls and the scope of on-site visits. By early 2001, the "rolling text" of the draft protocol still contained many areas on which views diverged widely.
In March 2001, a 210-page draft protocol was circulated by the chairman of the Ad Hoc Group, which attempted to resolve the contested issues. However, at the 24th session of the Ad Hoc Group in July 2001 the George W. Bush administration rejected both the draft protocol circulated by the Group's Chairman and the entire approach on which the draft was based, resulting in the collapse of the negotiation process. To justify its decision, the United States asserted that the protocol would not have improved BWC compliance and would have harmed U.S. national security and commercial interests. Many analysts, including Matthew Meselson and Amy Smithson, criticized the U.S. decision as undermining international efforts against non-proliferation and as contradicting U.S. government rhetoric regarding the alleged biological weapons threat posed by Iraq and other U.S. adversaries.
In subsequent years, calls for restarting negotiations on a verification protocol have been repeatedly voiced. For instance, during the 2019 Meeting of Experts "several States Parties stressed the urgency of resuming multilateral negotiations aimed at concluding a non-discriminatory, legally-binding instrument dealing with (...) verification measures". However, since "some States Parties did not support the negotiation of a protocol to the BWC" it seems "neither realistic nor practicable to return to negotiations". Notably, the Biden administration appears to be reconsidering the U.S. position on verification, as demonstrated by U.S. ambassador Bonnie Jenkins calling on the 2021 BWC Meeting of States Parties to "establish a new expert working group to examine possible measures to strengthen implementation of the Convention, increase transparency, and enhance assurance of compliance".
In December 2022, States Parties decided to establish a Working Group on strengthening the Convention, which aims to address among other issues, measures on verification and compliance.
Non-compliance
A number of BWC States Parties have been accused of breaching the convention's obligations by developing or producing biological weapons. Because of the intense secrecy around biological weapons programs, it is challenging to assess the actual scope of biological activities and whether they are legitimate defensive programs or a violation of the Convention—except for a few cases with an abundance of evidence for offensive development of biological weapons.
Soviet Union and Russia
Despite being a party and depositary to the BWC, the Soviet Union operated the world's largest, longest-running, and most sophisticated biological weapons program, which dates back to the 1920s under the Red Army. Around the time when the BWC negotiations were finalized, and the treaty was signed in the early 1970s, the Soviet Union significantly expanded its covert biological weapons program under the oversight of the "civilian" institution Biopreparat within the Soviet Ministry of Health. The Soviet program employed up to 65,000 people in several hundred facilities and successfully weaponized several pathogens, such as those responsible for smallpox, tularemia, bubonic plague, influenza, anthrax, glanders, and Marburg fever.
The Soviet Union first drew much suspicion of violating its obligations under the BWC after an unusual anthrax outbreak in 1979 in the Soviet city of Sverdlovsk (formerly, and now again, Yekaterinburg) resulted in the deaths of approximately 65 to 100 people. The Soviet authorities blamed the outbreak on the consumption of contaminated meat and for years denied any connection between the incident and biological weapons research. However, investigations concluded that the outbreak was caused by an accident at a nearby military microbiology facility, resulting in the escape of an aerosol of anthrax pathogen. Supporting this finding, Russian President Boris Yeltsin later admitted that "our military developments were the cause".
Western concerns about Soviet compliance with the BWC increased during the late 1980s and were supported by information provided by several defectors, including Vladimir Pasechnik and Ken Alibek. American President George H. W. Bush and British Prime Minister Margaret Thatcher therefore directly challenged President Gorbachev with the information. After the Soviet Union's dissolution, the United Kingdom, the United States, and Russia concluded the Trilateral Agreement on 14 September 1992, reaffirming their commitment to full compliance with the BWC and declaring that Russia had eliminated its inherited offensive biological weapons program. The agreement's objective was to uncover details about the Soviet Union's biological weapons program and to verify that all related activities had truly been terminated.
David Kelly, a British expert on biological warfare and participant in the visits arranged under the Trilateral Agreement, concluded that, on the one hand, the agreement "was a significant achievement" in that it "provided evidence of Soviet non-compliance from 1975 to 1991"; on the other hand, Kelly noted that the Trilateral Agreement "failed dramatically" because Russia did not "acknowledge and fully account for either the former Soviet programme or the biological weapons activities that it had inherited and continued to engage in".
Milton Leitenberg and Raymond Zilinskas, authors of the 2012 book The Soviet Biological Weapons Program: A History, assert that Russia may still continue parts of the Soviet biological weapons program today. Similarly, as of 2021, the U.S. Department of State "assesses that the Russian Federation (Russia) maintains an offensive [biological weapons] program and is in violation of its obligation under Articles I and II of the BWC. The issue of compliance by Russia with the BWC has been of concern for many years".
Iraq
Starting around 1985 under Saddam Hussein's leadership, Iraq weaponized anthrax, botulinum toxin, aflatoxin, and other agents, and created delivery vehicles, including bombs, missile warheads, aerosol generators, and spray systems. Thereby, Iraq breached the provisions of the BWC, which it had signed in 1972, although it only ratified the Convention in 1991 as a condition of the cease-fire agreement that ended the 1991 Gulf War. The Iraqi biological weapons program—along with its chemical weapons program—was uncovered after the Gulf War through the investigations of the United Nations Special Commission (UNSCOM), which was responsible for disarmament in post-war Iraq. Iraq deliberately obstructed, delayed, and deceived the UNSCOM investigations and only admitted to having operated an offensive biological weapons program under significant pressure in 1995. While Iraq maintained that it ended its biological weapons program in 1991, many analysts believe that the country violated its BWC obligations by continuing the program until at least 1996.
Other accusations of non-compliance
In April 1997, Cuba invoked the provisions of Article V to request a formal consultative meeting to consider its allegations that the United States introduced the crop-eating insect Thrips palmi to Cuba via crop-spraying planes in October 1996. Cuba and the United States presented evidence for their diverging views on the incident in a formal consultation in August 1997. Having reviewed the evidence, twelve States Parties submitted reports, of which nine concluded that the evidence did not support the Cuban allegations, and two (China and Vietnam) maintained it was inconclusive.
At the Fifth BWC Review Conference in 2001, the United States charged four BWC States Parties—Iran, Iraq, Libya, and North Korea—and one signatory, Syria, with operating covert biological weapons programs. Moreover, a 2019 report from the U.S. Department of State raises concerns regarding BWC compliance in China, Russia, North Korea, and Iran. The report concluded that North Korea "has an offensive biological weapons program and is in violation of its obligations under Articles I and II of the BWC" and that Iran "has not abandoned its (...) development of biological agents and toxins for offensive purposes".
In recent years, Russia has repeatedly alleged that the United States is supporting and operating biological weapons facilities in the Caucasus and Central Asia, in particular the Richard Lugar Center for Public Health Research in the Republic of Georgia. The U.S. Department of State called these allegations "groundless" and reaffirmed that "all U.S. activities (...) [were] consistent with the obligations set forth in the Biological Weapons Convention". Biological weapons expert Filippa Lentzos agreed that the Russian allegations are "unfounded" and commented that they are "part of a disinformation campaign". Similarly, Swedish biodefense specialists Roger Roffey and Anna-Karin Tunemalm called the allegations "a Russian propaganda tool".
During the Russian invasion of Ukraine, the Russian Federation convened a Formal Consultative Meeting under Article V of the Convention to address outstanding questions concerning the operation of biological laboratories in Ukraine by the United States. The meeting did not reach a consensus.
Implementation Support Unit
After a decade of negotiations, the major effort to institutionally strengthen the BWC failed in 2001, which would have resulted in a legally binding protocol to establish an Organization for the Prohibition of Biological Weapons (OPBW). Against this background, the Sixth Review Conference in 2006 created an Implementation Support Unit (ISU) funded by the States Parties to the BWC and housed in the Geneva Branch of the United Nations Office for Disarmament Affairs. The unit's mandate is to provide administrative support, assist the national implementation of the BWC, encourage the treaty's universal adoption, pair assistance requests and offers, and oversee the confidence-building measures process.
The ISU was initially composed of three full-time staff with a budget smaller than that of an average McDonald's restaurant, and does not compare with the institutions established to deal with chemical or nuclear weapons. For example, the Organisation for the Prohibition of Chemical Weapons (OPCW) has about 500 employees, the International Atomic Energy Agency employs around 2,600 people, and the CTBTO Preparatory Commission employs around 280 staff. In December 2022, as a result of the Ninth Review Conference, States Parties decided to establish one new full-time staff position within the ISU, only for the period from 2023 to 2027.
Review Conferences
States Parties have formally reviewed the operation of the BWC at periodic Review Conferences held every five years; the first took place in 1980. The objective of these conferences is to ensure the effective realization of the convention's goals and, in accordance with Article XII, to "take into account any new scientific and technological developments relevant to the Convention". Most Review Conferences have adopted additional understandings or agreements that have interpreted or elaborated the meaning, scope, and implementation of BWC provisions. These additional understandings are contained in the final documents of the Review Conferences and in an overview document prepared by the BWC Implementation Support Unit for the Eighth Review Conference in 2016. Due to the COVID-19 pandemic, the Ninth Review Conference originally scheduled for 2021 was postponed to 2022.
Intersessional program
As agreed at the Fifth Review Conference in 2001/2002, annual BWC meetings have been held between Review Conferences starting in 2003, referred to as the intersessional program. The intersessional program includes both annual Meetings of States Parties (MSP)—aiming to discuss, and promote common understanding and effective action on the topics identified by the Review Conference—as well as Meetings of Experts (MX), which serve as preparation for the Meeting of States Parties. The annual meetings do not have the mandate to adopt decisions, a privilege reserved for the Review Conferences which consider the results from the intersessional program.
Challenges
Potential misuse of rapid scientific and technological developments
Advances in science and technology are relevant to the BWC since they may affect the threat presented by biological weapons. The ongoing advances in synthetic biology and enabling technologies are eroding the technological barriers to acquiring and genetically enhancing dangerous pathogens and using them for hostile purposes. For example, a 2019 report by the Stockholm International Peace Research Institute finds that "advances in three specific emerging technologies—additive manufacturing (AM), artificial intelligence (AI) and robotics—could facilitate, each in their own way, the development or production of biological weapons and their delivery systems". Similarly, biological weapons expert Filippa Lentzos argues that the convergence of genomic technologies with "machine learning, automation, affective computing, and robotics (...) [will] create the possibility of novel biological weapons that target particular groups of people and even individuals". On the other hand, these scientific developments may improve pandemic preparedness by strengthening prevention and response measures.
Technological challenges in the verification of biological weapons
There are several reasons why biological weapons are especially difficult to verify. First, in contrast to chemical and nuclear weapons, even small initial quantities of biological agents can be used to quickly produce militarily significant amounts. Second, biotechnological equipment and even dangerous pathogens and toxins cannot be prohibited altogether since they also have legitimate peaceful or defensive purposes, including the development of vaccines and medical therapies. Third, it is possible to rapidly eliminate biological agents, which makes short-notice inspections less effective in determining whether a facility produces biological weapons. For these reasons, Filippa Lentzos notes that "it is not possible to verify the BWC with the same level of accuracy and reliability as the verification of nuclear treaties".
Financial health of the Convention
BWC intersessional program meetings have recently been impeded by late payments and non-payments of financial contributions. BWC States Parties agreed at the Meeting of States Parties in 2018, which was cut short due to funding shortfalls, on a package of remedial financial measures including the establishment of a Working Capital Fund. This fund is financed by voluntary contributions and provides short-term financing in order to ensure the continuity of approved programs and activities. At the Ninth Review Conference, States Parties welcomed the improvement of the financial situation following the measures endorsed by the 2018 Meeting of States Parties, confirmed their effectiveness and decided to review them at the Tenth Review Conference. Live information on the financial status of the BWC and other disarmament conventions is available publicly on the financial dashboard of the United Nations Office for Disarmament Affairs.
See also
Biological weapons and warfare
Australia Group of countries controlling exports to prevent the spread of biological and chemical weapons
Biological weapons
Biological warfare
Biological terrorism
Geneva Protocol, the first treaty to prohibit the use of biological and chemical weapons
International pandemic treaty
United Nations Security Council Resolution 1540, resolution to curb the proliferation of weapons of mass destruction, particularly to non-state actors
Treaties for other types of weapons of mass destruction
Chemical Weapons Convention (CWC) (states parties)
Nuclear Non-Proliferation Treaty (NPT) (states parties)
Treaty on the Prohibition of Nuclear Weapons (TPNW) (states parties)
Comprehensive Nuclear Test-Ban Treaty (CTBT) (states parties)
References
External links
Official resources created by the United Nations Office for Disarmament Affairs
Official website of the Biological Weapons Convention
Full text of the Biological Weapons Convention, Treaty Database
Brochure: The Biological Weapons Convention: An Introduction
Meetings Place. A page with details on disarmament meetings, including documents and presentations.
BWC Meeting of States Parties
BWC Meetings of Experts
Electronic Confidence-Building Measures facility
Article X cooperation and assistance database
External resources
"Treaties and Regimes: The Biological Weapons Convention", Nuclear Threat Initiative
"The Biological Weapons Convention (BWC) at a Glance", Arms Control Association
"The Historical Context of the Origins of the Biological Weapons Convention (BWC)", University College London
"Understanding Biological Disarmament: Final Report"
Arms control treaties
Biological warfare
Cold War treaties
Human rights instruments
Non-proliferation treaties
Treaties of the Soviet Union
Treaties concluded in 1972
Treaties entered into force in 1975
1975 in politics
Treaties of the Republic of Afghanistan
Treaties of Albania
Treaties of Algeria
Treaties of Andorra
Treaties of Angola
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of the Byelorussian Soviet Socialist Republic
Treaties of Belgium
Treaties of Belize
Treaties of the Republic of Dahomey
Treaties of Bhutan
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of the military dictatorship in Brazil
Treaties of Brunei
Treaties of the People's Republic of Bulgaria
Treaties of Burkina Faso
Treaties of Myanmar
Treaties of Burundi
Treaties of the People's Republic of Kampuchea
Treaties of Cameroon
Treaties of Canada
Treaties of Cape Verde
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Taiwan
Treaties of Colombia
Treaties of the Republic of the Congo
Treaties of Zaire
Treaties of the Cook Islands
Treaties of Costa Rica
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of Czechoslovakia
Treaties of the Czech Republic
Treaties of Denmark
Treaties of Dominica
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Estonia
Treaties of the Derg
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of West Germany
Treaties of East Germany
Treaties of Ghana
Treaties of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea-Bissau
Treaties of Guyana
Treaties of the Holy See
Treaties of Honduras
Treaties of the Hungarian People's Republic
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Pahlavi Iran
Treaties of Ba'athist Iraq
Treaties of Ireland
Treaties of Italy
Treaties of Ivory Coast
Treaties of Jamaica
Treaties of Japan
Treaties of Jordan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kuwait
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Jamahiriya
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties of North Macedonia
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of the Maldives
Treaties of Mali
Treaties of Malta
Treaties of the Marshall Islands
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of Moldova
Treaties of Monaco
Treaties of the Mongolian People's Republic
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of Nauru
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Nicaragua
Treaties of Nigeria
Treaties of Niger
Treaties of North Korea
Treaties of Norway
Treaties of Oman
Treaties of Pakistan
Treaties of Palau
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of the Polish People's Republic
Treaties of Portugal
Treaties of Qatar
Treaties of the Socialist Republic of Romania
Treaties of Rwanda
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of San Marino
Treaties of São Tomé and Príncipe
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Yugoslavia
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Solomon Islands
Treaties of South Africa
Treaties of South Korea
Treaties of Spain
Treaties of Sri Lanka
Treaties of the Republic of the Sudan (1985–2011)
Treaties of Suriname
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Tajikistan
Treaties of Thailand
Treaties of Timor-Leste
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Turkmenistan
Treaties of Uganda
Treaties of the Ukrainian Soviet Socialist Republic
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Uzbekistan
Treaties of Vanuatu
Treaties of Venezuela
Treaties of Vietnam
Treaties of the Yemen Arab Republic
Treaties of South Yemen
Treaties of Zambia
Treaties of Zimbabwe
Treaties extended to Greenland
Treaties extended to the Faroe Islands
Treaties extended to the Netherlands Antilles
Treaties extended to Aruba
Treaties extended to Saint Christopher-Nevis-Anguilla
Treaties extended to Bermuda
Treaties extended to the British Virgin Islands
Treaties extended to the Cayman Islands
Treaties extended to the Falkland Islands
Treaties extended to Gibraltar
Treaties extended to Montserrat
Treaties extended to the Pitcairn Islands
Treaties extended to Saint Helena, Ascension and Tristan da Cunha
Treaties extended to South Georgia and the South Sandwich Islands
Treaties extended to the Turks and Caicos Islands | Biological Weapons Convention | [
"Biology"
] | 7,588 | [
"Biological warfare"
] |
337,083 | https://en.wikipedia.org/wiki/Particle%20swarm%20optimization | In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
PSO is originally attributed to Kennedy, Eberhart and Shi and was first intended for simulating social behaviour, as a stylized representation of the movement of organisms in a bird flock or fish school. The algorithm was simplified and it was observed to be performing optimization. The book by Kennedy and Eberhart describes many philosophical aspects of PSO and swarm intelligence. An extensive survey of PSO applications was made by Poli. In 2017, a comprehensive review of theoretical and experimental works on PSO was published by Bonyadi and Michalewicz.
PSO is a metaheuristic as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. Also, PSO does not use the gradient of the problem being optimized, which means PSO does not require that the optimization problem be differentiable as is required by classic optimization methods such as gradient descent and quasi-Newton methods. However, metaheuristics such as PSO do not guarantee an optimal solution is ever found.
Algorithm
A basic variant of the PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search-space according to a few simple formulae. The movements of the particles are guided by their own best-known position in the search-space as well as the entire swarm's best-known position. When improved positions are being discovered these will then come to guide the movements of the swarm. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.
Formally, let f: ℝn → ℝ be the cost function which must be minimized. The function takes a candidate solution as an argument in the form of a vector of real numbers and produces a real number as output which indicates the objective function value of the given candidate solution. The gradient of f is not known. The goal is to find a solution a for which f(a) ≤ f(b) for all b in the search-space, which would mean a is the global minimum.
Let S be the number of particles in the swarm, each having a position xi ∈ ℝn in the search-space and a velocity vi ∈ ℝn. Let pi be the best known position of particle i and let g be the best known position of the entire swarm. A basic PSO algorithm to minimize the cost function is then:
for each particle i = 1, ..., S do
Initialize the particle's position with a uniformly distributed random vector: xi ~ U(blo, bup)
Initialize the particle's best known position to its initial position: pi ← xi
if f(pi) < f(g) then
update the swarm's best known position: g ← pi
Initialize the particle's velocity: vi ~ U(-|bup-blo|, |bup-blo|)
while a termination criterion is not met do:
for each particle i = 1, ..., S do
for each dimension d = 1, ..., n do
Pick random numbers: rp, rg ~ U(0,1)
Update the particle's velocity: vi,d ← w vi,d + φp rp (pi,d-xi,d) + φg rg (gd-xi,d)
Update the particle's position: xi ← xi + vi
if f(xi) < f(pi) then
Update the particle's best known position: pi ← xi
if f(pi) < f(g) then
Update the swarm's best known position: g ← pi
The values blo and bup represent the lower and upper boundaries of the search-space respectively. The w parameter is the inertia weight. The parameters φp and φg are often called cognitive coefficient and social coefficient.
The termination criterion can be the number of iterations performed, or the discovery of a solution with an adequate objective function value. The parameters w, φp, and φg are selected by the practitioner and control the behaviour and efficacy of the PSO method (below).
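To make the procedure concrete, the pseudocode above can be translated into a short program. The following Python sketch is one minimal, illustrative implementation; the default parameter values, the box-bounds interface, and the sphere-function example are choices made here for demonstration and are not part of any standard reference implementation.

import random

def pso(f, n_dim, bounds, n_particles=30, w=0.7, phi_p=1.5, phi_g=1.5, n_iter=200):
    """Minimize f over a box-bounded search space with the basic global-best PSO loop."""
    lo, up = bounds
    # Initialize positions uniformly in [lo, up] and velocities in [-(up - lo), up - lo].
    x = [[random.uniform(lo, up) for _ in range(n_dim)] for _ in range(n_particles)]
    v = [[random.uniform(-(up - lo), up - lo) for _ in range(n_dim)] for _ in range(n_particles)]
    p = [xi[:] for xi in x]                # personal best positions
    p_val = [f(xi) for xi in x]            # personal best values
    g_idx = min(range(n_particles), key=lambda i: p_val[i])
    g, g_val = p[g_idx][:], p_val[g_idx]   # swarm best position and value
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_dim):
                rp, rg = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + phi_p * rp * (p[i][d] - x[i][d])
                           + phi_g * rg * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < p_val[i]:             # update the particle's best known position
                p[i], p_val[i] = x[i][:], val
                if val < g_val:            # update the swarm's best known position
                    g, g_val = x[i][:], val
    return g, g_val

# Example: minimize the 5-dimensional sphere function, f(x) = sum of squared coordinates.
best_x, best_f = pso(lambda x: sum(xd * xd for xd in x), n_dim=5, bounds=(-10.0, 10.0))
print(best_x, best_f)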
Parameter selection
The choice of PSO parameters can have a large impact on optimization performance. Selecting PSO parameters that yield good performance has therefore been the subject of much research.
To prevent divergence ("explosion") the inertia weight must be smaller than 1. The two other parameters can then be derived using the constriction approach, or freely selected, but the analyses suggest convergence domains to constrain them. Typical values are reported in the literature.
The PSO parameters can also be tuned by using another overlaying optimizer, a concept known as meta-optimization, or even fine-tuned during the optimization, e.g., by means of fuzzy logic.
Parameters have also been tuned for various optimization scenarios.
Neighbourhoods and topologies
The topology of the swarm defines the subset of particles with which each particle can exchange information. The basic version of the algorithm uses the global topology as the swarm communication structure. This topology allows all particles to communicate with all the other particles, so the whole swarm shares the same best position g from a single particle. However, this approach may cause the swarm to become trapped in a local minimum, thus different topologies have been used to control the flow of information among particles. For instance, in local topologies, particles only share information with a subset of particles. This subset can be a geometrical one – for example "the m nearest particles" – or, more often, a social one, i.e. a set of particles that does not depend on any distance. In such cases, the PSO variant is said to be local best (vs global best for the basic PSO).
A commonly used swarm topology is the ring, in which each particle has just two neighbours, but there are many others. The topology is not necessarily static. In fact, since the topology is related to the diversity of communication of the particles, some efforts have been made to create adaptive topologies (SPSO, APSO, stochastic star, TRIBES, Cyber Swarm, and C-PSO).
By using the ring topology, PSO can attain generation-level parallelism, significantly enhancing the evolutionary speed.
Inner workings
There are several schools of thought as to why and how the PSO algorithm can perform optimization.
A common belief amongst researchers is that the swarm behaviour varies between exploratory behaviour, that is, searching a broader region of the search-space, and exploitative behaviour, that is, a locally oriented search so as to get closer to a (possibly local) optimum. This school of thought has been prevalent since the inception of PSO, and it contends that the PSO algorithm and its parameters must be chosen so as to properly balance exploration and exploitation, avoiding premature convergence to a local optimum while still ensuring a good rate of convergence to the optimum. This belief is the precursor of many PSO variants; see below.
Another school of thought is that the behaviour of a PSO swarm is not well understood in terms of how it affects actual optimization performance, especially for higher-dimensional search-spaces and optimization problems that may be discontinuous, noisy, and time-varying. This school of thought merely tries to find PSO algorithms and parameters that cause good performance regardless of how the swarm behaviour can be interpreted in relation to e.g. exploration and exploitation. Such studies have led to the simplification of the PSO algorithm, see below.
Convergence
In relation to PSO the word convergence typically refers to two different definitions:
Convergence of the sequence of solutions (also known as stability analysis), in which all particles have converged to a point in the search-space, which may or may not be the optimum,
Convergence to a local optimum where all personal bests p or, alternatively, the swarm's best known position g, approaches a local optimum of the problem, regardless of how the swarm behaves.
Convergence of the sequence of solutions has been investigated for PSO. These analyses have resulted in guidelines for selecting PSO parameters that are believed to cause convergence to a point and prevent divergence of the swarm's particles (particles do not move unboundedly and will converge to somewhere). However, the analyses were criticized by Pedersen for being oversimplified as they assume the swarm has only one particle, that it does not use stochastic variables and that the points of attraction, that is, the particle's best known position p and the swarm's best known position g, remain constant throughout the optimization process. However, it was shown that these simplifications do not affect the boundaries found by these studies for parameters where the swarm is convergent. Considerable effort has been made in recent years to weaken the modeling assumptions used during the stability analysis of PSO, with the most recent generalized result applying to numerous PSO variants and relying on what were shown to be the minimal necessary modeling assumptions.
Convergence to a local optimum has also been analyzed for PSO. It has been proven that PSO needs some modification to guarantee finding a local optimum.
This means that determining the convergence capabilities of different PSO algorithms and parameters still depends on empirical results. One attempt at addressing this issue is the development of an "orthogonal learning" strategy for an improved use of the information already existing in the relationship between p and g, so as to form a leading converging exemplar and to be effective with any PSO topology. The aims are to improve the performance of PSO overall, including faster global convergence, higher solution quality, and stronger robustness. However, such studies do not provide theoretical evidence to actually prove their claims.
Adaptive mechanisms
Without the need for a trade-off between convergence ('exploitation') and divergence ('exploration'), an adaptive mechanism can be introduced. Adaptive particle swarm optimization (APSO) features better search efficiency than standard PSO. APSO can perform global search over the entire search space with a higher convergence speed. It enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at the run time, thereby improving the search effectiveness and efficiency at the same time. Also, APSO can act on the globally best particle to jump out of the likely local optima. However, while APSO introduces new algorithm parameters, it does not introduce additional design or implementation complexity.
In addition, through the use of a scale-adaptive fitness evaluation mechanism, PSO can efficiently address computationally expensive optimization problems.
Variants
Numerous variants of even a basic PSO algorithm are possible. For example, there are different ways to initialize the particles and velocities (e.g. start with zero velocities instead), how to dampen the velocity, only update pi and g after the entire swarm has been updated, etc. Some of these choices and their possible performance impact have been discussed in the literature.
A series of standard implementations have been created by leading researchers, "intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community. Having a well-known, strictly-defined standard algorithm provides a valuable point of comparison which can be used throughout the field of research to better test new advances." The latest is Standard PSO 2011 (SPSO-2011).
Hybridization
New and more sophisticated PSO variants are also continually being introduced in an attempt to improve optimization performance. There are certain trends in that research; one is to make a hybrid optimization method using PSO combined with other optimizers, e.g., combined PSO with biogeography-based optimization, and the incorporation of an effective learning method.
Alleviate premature convergence
Another research trend is to try to alleviate premature convergence (that is, optimization stagnation), e.g. by reversing or perturbing the movement of the PSO particles, another approach to deal with premature convergence is the use of multiple swarms (multi-swarm optimization). The multi-swarm approach can also be used to implement multi-objective optimization. Finally, there are developments in adapting the behavioural parameters of PSO during optimization.
Simplifications
Another school of thought is that PSO should be simplified as much as possible without impairing its performance; a general concept often referred to as Occam's razor. Simplifying PSO was originally suggested by Kennedy and has been studied more extensively, where it appeared that optimization performance was improved, the parameters were easier to tune, and they performed more consistently across different optimization problems.
Another argument in favour of simplifying PSO is that metaheuristics can only have their efficacy demonstrated empirically by doing computational experiments on a finite number of optimization problems. This means a metaheuristic such as PSO cannot be proven correct and this increases the risk of making errors in its description and implementation. A good example of this is a study that presented a promising variant of a genetic algorithm (another popular metaheuristic) but that was later found to be defective, as it was strongly biased in its optimization search towards similar values for different dimensions in the search space, which happened to be the optimum of the benchmark problems considered. This bias was caused by a programming error and has since been fixed.
Bare Bones PSO
Initialization of velocities may require extra inputs. The Bare Bones PSO variant was proposed in 2003 by James Kennedy, and does not need to use velocity at all.
In this variant of PSO one dispenses with the velocity of the particles and instead updates the positions of the particles using the following simple rule,
xi ← G((pi + g)/2, ||pi − g||)
where xi and pi are the position and the best position of particle i; g is the global best position; G(m, σ) is the normal distribution with mean m and standard deviation σ; and ||·|| signifies the norm of a vector.
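For illustration, the sampling rule above can be expressed in a few lines of Python. This is only a sketch of one Bare Bones update step for a single particle under the rule as stated; the function name is chosen here for demonstration and this is not reference code from the original publication.

import math
import random

def bare_bones_step(p_i, g):
    """One velocity-free Bare Bones PSO update for a single particle.
    p_i is the particle's best known position and g is the swarm's best known position.
    Each coordinate is redrawn from a Gaussian centred midway between p_i and g,
    with standard deviation equal to the distance ||p_i - g||."""
    sigma = math.sqrt(sum((a - b) ** 2 for a, b in zip(p_i, g)))
    return [random.gauss((a + b) / 2.0, sigma) for a, b in zip(p_i, g)]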
Accelerated Particle Swarm Optimization
Another simpler variant is the accelerated particle swarm optimization (APSO), which also does not need to use velocity and can speed up the convergence in many applications. A simple demo code of APSO is available.
In this variant of PSO one dispenses with both the particle's velocity and the particle's best position. The particle position is updated according to the following rule,
xi ← (1 − β) xi + β g + α L u
where u is a random uniformly distributed vector, L is the typical length of the problem at hand, and α and β are the parameters of the method. As a refinement of the method one can decrease α with each iteration, αn = α0 γ^n, where n is the number of the iteration and γ is the decrease control parameter.
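A correspondingly small sketch of the accelerated update is shown below. It is illustrative only: the helper names are chosen here, and drawing u uniformly from [-1, 1) is an assumption, since descriptions of the method differ in how the random vector is defined.

import random

def apso_step(x, g, alpha, beta, L):
    """One accelerated-PSO position update: neither velocity nor personal best is kept.
    x is the current position, g the swarm's best position, L the typical length scale
    of the problem, and alpha and beta the parameters of the method."""
    return [(1 - beta) * xd + beta * gd + alpha * L * random.uniform(-1, 1)
            for xd, gd in zip(x, g)]

def decayed_alpha(alpha0, gamma, n):
    """Optional refinement: shrink the randomness amplitude with the iteration number n."""
    return alpha0 * gamma ** n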
Multi-objective optimization
PSO has also been applied to multi-objective problems, in which the objective function comparison takes Pareto dominance into account when moving the PSO particles, and non-dominated solutions are stored so as to approximate the Pareto front.
Binary, discrete, and combinatorial
As the PSO equations given above work on real numbers, a commonly used method to solve discrete problems is to map the discrete search space to a continuous domain, to apply a classical PSO, and then to demap the result. Such a mapping can be very simple (for example by just using rounded values) or more sophisticated.
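As a small illustration of the rounding approach, the wrapper below lets a standard continuous PSO optimize an integer-valued problem; the helper names and the clipping of rounded values to the valid range are assumptions made here for demonstration.

def decode(x, lo, up):
    """Round a continuous PSO position to integers and clip each value to [lo, up]."""
    return [min(up, max(lo, round(xd))) for xd in x]

def wrap_discrete(f_discrete, lo, up):
    """Wrap a discrete objective so it can be passed to a continuous PSO unchanged;
    the particles keep moving in continuous space and only the objective sees integers."""
    return lambda x: f_discrete(decode(x, lo, up))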
However, it can be noted that the equations of movement make use of operators that perform four actions:
computing the difference of two positions. The result is a velocity (more precisely a displacement)
multiplying a velocity by a numerical coefficient
adding two velocities
applying a velocity to a position
Usually a position and a velocity are represented by n real numbers, and these operators are simply -, *, +, and again +. But all these mathematical objects can be defined in a completely different way, in order to cope with binary problems (or more generally discrete ones), or even combinatorial ones. One approach is to redefine the operators based on sets.
See also
Artificial bee colony algorithm
Bees algorithm
Derivative-free optimization
Multi-swarm optimization
Particle filter
Swarm intelligence
Fish School Search
Dispersive flies optimisation
References
External links
Particle Swarm Central is a repository for information on PSO. Several source codes are freely available.
A brief video of particle swarms optimizing three benchmark functions.
Simulation of PSO convergence in a two-dimensional space (Matlab).
Applications of PSO.
Links to PSO source code
Nature-inspired metaheuristics
Optimization algorithms and methods
Multi-agent systems | Particle swarm optimization | [
"Engineering"
] | 3,504 | [
"Artificial intelligence engineering",
"Multi-agent systems"
] |
337,124 | https://en.wikipedia.org/wiki/SIGCOMM | SIGCOMM is the Association for Computing Machinery's Special Interest Group on Data Communications, which specializes in the field of communication and computer networks. It is also the name of an annual 'flagship' conference, organized by SIGCOMM, which is considered to be the leading conference in data communications and networking in the world. Known to have an extremely low acceptance rate (~10%), many of the landmark works in Networking and Communications have been published through it.
In recent years, a number of workshops related to networking have also been co-located with the SIGCOMM conference. These include the Workshop on Challenged Networks (CHANTS), Internet Network Management (INM), Large Scale Attack Defense (LSAD) and Mining Network Data (MineNet).
SIGCOMM also produces a quarterly magazine, Computer Communication Review, with both peer-reviewed and editorial (non-peer reviewed) content, and a bi-monthly refereed journal IEEE/ACM Transactions on Networking, co-sponsored with IEEE.
SIGCOMM presents the following awards on an annual basis:
The SIGCOMM Award, for outstanding lifetime technical achievement in the fields of data and computer communications
The Rising Star Award, for a young researcher under the age of 35 who has made outstanding contributions during the early part of their career.
The Test of Time Award recognizes papers published 10 to 12 years in the past in a SIGCOMM sponsored or co-sponsored venue whose contents still represent a vibrant, useful contribution.
Best Paper Award and the Best Student Paper Award at that year's conference.
The SIGCOMM Doctoral Dissertation Award recognizes excellent thesis research by doctoral candidates in the field of computer networking and data communication.
The SIGCOMM Networking Systems Award recognizes the development of a networking system that has had a significant impact on the world of computer networking.
References
Association for Computing Machinery Special Interest Groups | SIGCOMM | [
"Technology"
] | 382 | [
"Computer network stubs",
"Computing stubs",
"Computer conference stubs"
] |
337,153 | https://en.wikipedia.org/wiki/Gap%20junction | Gap junctions are membrane channels between adjacent cells that allow the direct exchange of cytoplasmic substances. Substances exchanged include small molecules, substrates, and metabolites.
Gap junctions were first described as close appositions, as were tight junctions, but following electron microscopy studies in 1967 they were renamed gap junctions to distinguish them from tight junctions. They bridge a 2–4 nm gap between cell membranes.
Gap junctions use protein complexes known as connexons to connect one cell to another. The proteins are called connexins. Gap junction proteins include the more than 26 types of connexin, and at least 12 non-connexin components that make up the gap junction complex or nexus. These components include the tight junction protein ZO-1 (a protein that holds membrane content together and adds structural clarity to a cell), sodium channels, and aquaporin.
More gap junction proteins have become known due to the development of next-generation sequencing. Connexins were found to be structurally homologous between vertebrates and invertebrates but different in sequence. As a result, the term innexin is used to differentiate invertebrate connexins. There are more than 20 known innexins, along with unnexins in parasites and vinnexins in viruses.
An electrical synapse is a gap junction that can transmit action potentials between neurons. Such synapses create bidirectional continuous-time electrical coupling between neurons. Connexon pairs act as generalized regulated gates for ions and smaller molecules between cells. Hemichannel connexons form channels to the extracellular environment.
A gap junction or macula communicans is different from an ephaptic coupling that involves electrical signals external to the cells.
Structure
In vertebrates, gap junction hemichannels are primarily homo- or hetero-hexamers of connexin proteins. Hetero-hexamers at gap junction plaques help form a uniform intercellular space of 2–4 nm. In this way hemichannels in the membrane of each cell are aligned with one another, forming an intercellular communication path.
Invertebrate gap junctions comprise proteins from the innexin family. Innexins have no significant sequence homology with connexins. Though differing in sequence to connexins, innexins are similar enough to connexins to form gap junctions in vivo in the same way connexins do.
The more recently characterized pannexin family, which was originally thought to form intercellular channels (with an amino acid sequence similar to innexins) in fact functions as a single-membrane channel that communicates with the extracellular environment and has been shown to pass calcium and ATP. This has led to the idea that pannexins may not form intercellular junctions in the same way connexins and innexins do and therefore should not use the same hemi-channel/channel naming. Others have presented evidence based on genetic sequencing and overall functioning in tissues, that pannexins should still be considered part of the gap junction family of proteins despite structural differences. These researchers also note that there are still more groups of connexin orthologs to be discovered.
Gap junction channels formed from two identical hemichannels are called homotypic, while those with differing hemichannels are heterotypic. In turn, hemichannels of uniform protein composition are called homomeric, while those with differing proteins are heteromeric. Channel composition influences the function of gap junction channels, and different connexins will not necessarily form heterotypic with all others.
Before innexins and connexins were well characterized, the genes coding for the connexin gap junction channels were classified in one of three groups (A, B and C), based on gene mapping and sequence similarity. However, connexin genes do not code directly for the expression of gap junction channels; genes can produce only the proteins that make up gap junction channels. An alternative naming system based on the protein's molecular weight is the most widely used (for example, connexin43=GJA1, connexin30.3=GJB4).
Levels of organization
In vertebrates, two pairs of six connexin proteins form a connexon. In invertebrates, six innexin proteins form an innexon. Otherwise, the structures are similar.
The connexin genes (DNA) are transcribed to RNA, which is then translated to produce a connexin.
One connexin protein has four transmembrane domains
Six connexin proteins create one connexon channel, also called a hemichannel. When identical connexin proteins join to form one connexon, it is called a homomeric connexon. When different connexin proteins join to form one connexon, it is called a heteromeric connexon.
Two connexons, joined across a cell membrane, comprise a gap junction channel. When two identical connexons come together to form a gap junction channel, it is called a homotypic channel. When one homomeric connexon and one heteromeric connexon come together, it is called a heterotypic gap junction channel. When two heteromeric connexons join, it is also called a heterotypic gap junction channel.
Tens to thousands of gap junction channels cluster in areas to enable connexon pairs to form. The macromolecular complex is called a gap junction plaque. Molecules other than connexins are involved in gap junction plaques including tight junction protein 1 and sodium channels.
Properties of connexon pairs
A connexon or innexon channel pair:
Allows for direct electrical communication between cells, although different hemichannel subunits can impart different single channel conductances, from about 30 pS to 500 pS.
Allows for chemical communication between cells through the transmission of small second messengers, such as inositol trisphosphate (IP3) and calcium (Ca2+), although different hemichannel subunits can impart different selectivities for particular molecules.
Generally allows transmembrane movement of molecules smaller than 485 daltons (1,100 daltons through invertebrate gap junctions), although different hemichannel subunits may impart different pore sizes and different charge selectivity. Large biomolecules, including nucleic acids and proteins, are precluded from cytoplasmic transfer between cells through gap junction hemichannel pairs.
Ensures that molecules and current passing through the gap junction do not leak into the intercellular space.
Properties of connexons as hemichannels
Unpaired connexons or innexons can act as hemichannels in a single membrane, allowing the cell to exchange molecules directly with the exterior of the cell. It has been shown that connexons would be available to do this prior to being incorporated into the gap junction plaques. Some of the properties of these unpaired connexons are listed below:
Pore or transmembrane channel size is highly variable, in the range of approximately 8-20Å in diameter.
They connect the cytoplasm of the cell to the cell exterior and are thought to be in a closed state by default in order to prevent leakage from the cell.
Some connexons respond to external factors by opening up. Mechanical shear and various diseases can cause this to happen.
Establishing further properties of unpaired connexons, distinct from those of connexon pairs, proves difficult because their effects are hard to separate experimentally in organisms.
Occurrence and distribution
Gap junctions have been observed in various animal organs and tissues where cells contact each other. From the 1950s to the 1970s they were detected in:
Human islet of Langerhans, myometrium, and eye lens
Rat pancreas, liver, adrenal cortex, epididymis, duodenum, muscle, and seminiferous tubules
Rabbit cornea, ovary, and skin
Monkey retina
Chick embryos
Frog embryos
Fish blastoderm
Crayfish nerves
Lamprey and Tunicate heart
Goldfish and hamster pressure-sensing acoustico-vestibular receptors
Daphnia hepatic caecum
Cephalopod digestive epithelium
Hydra muscle
Cockroach hemocyte capsules
Reaggregated cells
Gap junctions have continued to be found in nearly all healthy animal cells that touch each other. Techniques such as confocal microscopy allow more rapid surveys of large areas of tissue. Tissues that were traditionally considered to have isolated cells, such as bone, were shown to have cells that were still connected with gap junctions, however tenuously. Exceptions to this are cells not normally in contact with neighboring cells, such as blood cells suspended in blood plasma. Adult skeletal muscle is a possible exception to the rule, though the large size of its cells makes it difficult to be certain. An argument against gap junctions in skeletal muscle is that, if they were present, they might propagate contractions in an arbitrary way through the cells making up the muscle; however, other muscle types do have gap junctions that do not cause arbitrary contractions. The number of gap junctions is sometimes reduced, or they are absent, in diseased tissues such as cancers, or during the aging process.
Since the discovery of innexins, pannexins and unnexins, gaps in our knowledge of intercellular communication are becoming more defined. Innexins look and behave similarly to connexins and can be seen to fill a similar role to connexins in invertebrates. Pannexins also look individually similar to connexins though they do not appear to easily form gap junctions. Of the over 20 metazoan groups connexins have been found only in vertebrata and tunicata. Innexins and pannexins are far more widespread including innexin homologues in vertebrates. The unicellular Trypanosomatidae parasites presumably have unnexin genes to aid in their infection of animals including humans. The even smaller adenovirus has its own vinnexin, apparently derived from an innexin, to aid its transmission between the virus's insect hosts.
The term gap junction cannot be defined by a single protein or family of proteins with a specific function. For example, gap junction structures are found in sponges, despite the absence of pannexins. While we are still at an early stage of understanding the nervous system of a sponge, the gap junctions of sponges may yet indicate intercellular communication pathways.
Functions
At least five discrete functions have been ascribed to gap junction proteins:
Electrical and metabolic coupling between cells
Electrical and metabolic exchange through hemichannels
Tumor suppressor genes (Cx43, Cx32 and Cx36)
Adhesive function independent of conductive gap junction channel (neural migration in neocortex)
Role of carboxyl-terminal in signaling cytoplasmic pathways (Cx43)
In a more general sense, gap junctions may be seen to function at the simplest level as a direct cell to cell pathway for electrical currents, small molecules and ions. The control of this communication allows complex downstream effects on multicellular organisms.
Embryonic, organ and tissue development
In the 1980s, more subtle roles of gap junctions in communication were investigated. It was discovered that gap junction communication could be disrupted by adding anti-connexin antibodies into embryonic cells. Embryos with areas of blocked gap junctions failed to develop normally. The mechanism by which antibodies blocked the gap junctions was unclear; systematic studies were undertaken to elucidate the mechanism. Refinement of these studies suggested that gap junctions were key in the development of cell polarity and the left-right symmetry in animals. While signaling that determines the position of body organs appears to rely on gap junctions, so does the more fundamental differentiation of cells at later stages of embryonic development.
Gap junctions were found to be responsible for the transmission of signals required for drugs to have an effect. Conversely, some drugs were shown to block gap junction channels.
The bystander effect and disease
Cell death
The bystander effect has its connotations of the innocent bystander being killed. When cells are dying or compromised due to disease or injury, messages are transmitted to neighboring cells by gap junctions. This can cause otherwise healthy bystander cells to also die.
The bystander effect was later researched with regard to cells damaged by radiation or mechanical injury and in turn wound healing. Disease seems to have an effect on the ability of gap junctions to fulfill their roles in wound healing. The oral administration of gap junction blockers to reduce the symptoms of disease in remote parts of the body is slowly becoming a reality.
Tissue restructuring
While there has been a tendency to focus on the bystander effect in disease due to the possibility of therapeutic avenues, there is evidence that there is a more central role in normal development of tissues. Death of some cells and their surrounding matrix may be required for a tissue to reach its final configuration; gap junctions appear essential to this process. There are also more complex studies that try to combine our understanding of the simultaneous roles of gap junctions in both wound healing and tissue development.
Disease
Mutations in connexins have been associated with many diseases in humans, including deafness, heart atrial fibrillation (standstill) and cataracts. The study of these mutations has helped clarify some of the functions of connexins.
Hemichannels are thought to play a general role in the progression and severity of many diseases; this is in part due to hemichannels being an open door to the outside of each cell.
Areas of electrical coupling
Gap junctions electrically couple cells throughout the body of most animals. Electrical coupling can be relatively fast acting and can be used over short distances within an organism. Tissues in this section have well known functions observed to be coordinated by gap junctions, with intercellular signaling happening in time frames of microseconds or less.
Heart
Gap junctions are particularly important in cardiac muscle: the signal to contract is passed efficiently through gap junctions, allowing the heart muscle cells to contract in unison. The importance is emphasized by a secondary ephaptic pathway for the signal to contract also being associated with the gap junction plaques. This redundancy in signal transmission associated with gap junction plaques is the first to be described and involves sodium channels rather than connexins.
Eye lens
Precise control of light refraction, structural dimensions and transparency are key aspects of the eye lens structure that allow focusing by the eye. Transparency is aided by the absence of nerves and blood vessels from the lens, so gap junctions are left with a larger loading of intercellular communication than in other tissues reflected in large numbers of gap junctions. The crystallinity of the lens also means the cells and gap junctions are well ordered for systematic mapping of where the gap junction plaques are. As no cells are lost from the lens interior during the life of the animal, a complete map of the gap junctions is possible.
The associated figure shows how the size, shape, and frequency of gap junction plaques change with cell growth. With growth, fiber cells are progressively isolated from more direct metabolite exchange with the aqueous humor through the capsule and lens epithelium. The isolation correlates with the classical circular shape of larger plaques shown in the yellow zone being disrupted. Changing the fiber cells' morphology requires the movements of vesicles through the gap junction plaques at higher frequencies in this area.
Neurons
A gap junction located between neurons is often referred to as an electrical synapse. The electrical synapse was discovered using electrical measurements before the gap junction structure was described. Electrical synapses are present throughout the central nervous system and have been studied specifically in the neocortex, hippocampus, vestibular nucleus, thalamic reticular nucleus, locus coeruleus, inferior olivary nucleus, mesencephalic nucleus of the trigeminal nerve, ventral tegmental area, olfactory bulb, retina and spinal cord of vertebrates.
There has been some observation of weak coupling between neurons and glial cells in the locus coeruleus, and in the cerebellum between Purkinje neurons and Bergmann glial cells. It appears that astrocytes are coupled by gap junctions, both to other astrocytes and to oligodendrocytes. Moreover, mutations in the gap junction genes Cx43 and Cx56.6 cause white matter degeneration similar to that observed in Pelizaeus–Merzbacher disease and multiple sclerosis.
Connexin proteins expressed in neuronal gap junctions include mCX36, mCX57, and mCX45, with mRNAs for at least five other connexins (mCx26, mCx30.2, mCx32, mCx43, mCx47) detected but without immunocytochemical evidence for the corresponding protein within ultrastructurally-defined gap junctions. Those mRNAs appear to be downregulated or destroyed by micro interfering RNAs (miRNAs) that are cell-type and cell-lineage specific.
Astrocytes
An important feature of astrocytes is their high expression levels of the gap junction proteins connexin 30 (Cx30) and connexin 43 (Cx43). These proteins play crucial roles in regulating brain homeostasis through potassium buffering, intercellular communication, and nutrient transport. Connexins typically form gap junction channels that allow direct intercellular communication between astrocytes. However, they can also form hemichannels that facilitate the exchange of ions and molecules with the extracellular space.
Studies have highlighted channel-independent functions of connexins, involving intracellular signaling, protein interactions, and cell adhesion. Specifically, Cx30 has been shown to regulate the insertion of astroglial processes into synaptic clefts, which controls the efficacy of glutamate clearance. This, in turn, affects the synaptic strength and long-term plasticity of excitatory terminals, indicating a significant role in modulating synaptic transmission. Levels of Cx30 regulate synaptic glutamate concentration, hippocampal excitatory synaptic strength, plasticity, and memory. Astroglial networks have a physiologically optimized size to appropriately regulate neuronal functions.
Cx30 is not limited to regulating excitatory synaptic transmission but also plays a crucial role in inhibitory synaptic regulation and broader neuronal network activities. This highlights the importance of connexins in maintaining the intricate balance required for proper brain function.
Retina
Neurons within the retina show extensive coupling, both within populations of one cell type and between different cell types.
Uterus
The uterine muscle (myometrium) remains in a quiescent relaxed state during pregnancy to maintain fetal development. Immediately preceding labor, the myometrium transforms into an activated contractile unit by increasing expression of connexin-43 (CX43, a.k.a. Gap Junction Alpha-1 protein, GJA1), facilitating gap junction (GJ) formation between individual myometrial cells. Importantly, the formation of GJs promotes communication between neighbouring myocytes, which facilitates the transfer of small molecules such as secondary messengers, metabolites, and small ions for electrical coupling. Across species, uterine myometrial contractions propagate from spontaneous action potentials as a result of sudden changes in plasma membrane permeability. This leads to an increase of intracellular Ca²⁺ concentration, facilitating action potential propagation through electrically coupled cells. It has more recently been discovered that uterine macrophages directly physically couple with uterine myocytes through CX43, transferring Ca²⁺, to promote uterine muscle contraction and excitation during labor onset.
Hemichannel function
Hemichannels contribute to a cellular network of gap junctions and allow the release of adenosine triphosphate, glutamate, nicotinamide adenine dinucleotide, and prostaglandin E2 from cells, which can all act as messengers to cells otherwise disconnected from such messaging. In this sense, a gap junction plaque forms a one-to-one relationship with the neighboring cell, daisy-chaining many cells together. Hemichannels form a one-to-many relationship with the surrounding tissue.
On a larger scale, the one-to-many communication of cells is typically carried out by the vascular and nervous systems. This makes detecting the contribution of hemichannels to extracellular communication more difficult in whole organisms. With the eye lens, the vascular and nervous systems are absent, making reliance on hemichannels greater and their detection easier. At the interface of the lens with the aqueous humor (where the lens exchanges metabolites), both gap junction plaques and more diffused connexon distribution can be seen in the accompanying micrographs.
Discovery
Form to function
Well before the demonstration of the gap in gap junctions, they were seen at the junction of neighboring nerve cells. The close proximity of the neighboring cell membranes at the gap junction led researchers to speculate that they had a role in intercellular communication, in particular the transmission of electrical signals. Gap junctions were also found to be electrically rectifying in the early studies and referred to as an electrical synapse but are now known to be bidirectional in general. Later, it was found that chemicals could also be transported between cells through gap junctions.
Implicit or explicit in most of the early studies is that the area of the gap junction was different in structure to the surrounding membranes in a way that made it look different. The gap junction had been shown to create a micro-environment between the two cells in the extracellular space or gap. This portion of extracellular space was somewhat isolated from the surrounding space and also bridged by what we now call connexon pairs, which form even more tightly sealed bridges that cross the gap junction gap between two cells. When viewed in the plane of the membrane by freeze-fracture techniques, higher-resolution distribution of connexons within the gap junction plaque is possible.
Connexin-free islands are observed in some junctions. The observation was largely without explanation until vesicles were shown by Peracchia, using transmission electron microscopy (TEM) thin sections, to be systematically associated with gap junction plaques. Peracchia's study was probably also the first study to describe paired connexon structures, which he called a globule. Studies showing vesicles associated with gap junctions and proposing that the vesicle contents may move across the junction plaques between two cells were rare, as most studies focused on connexons rather than vesicles. A later study using a combination of microscopy techniques confirmed the early evidence of a probable function for gap junctions in intercellular vesicle transfer. Areas of vesicle transfer were associated with connexin-free islands within gap junction plaques. Connexin 43 has been shown to be necessary for the transfer of whole mitochondria to neighboring cells, though whether the mitochondria are transferred directly through the membrane or within a vesicle has not been determined.
Electrical and chemical synapses
Because of the widespread occurrence of gap junctions in cell types other than nerve cells, the term gap junction became more generally used than terms such as electrical synapse or nexus. Another dimension in the relationship between nerve cells and gap junctions was revealed by studying chemical synapse formation and gap junction presence. By tracing nerve development in leeches with gap junction expression suppressed it was shown that the bidirectional gap junction (electrical nerve synapse) needs to form between two cells before they can grow to form a unidirectional chemical nerve synapse. The chemical nerve synapse is the synapse most often truncated to the more ambiguous term nerve synapse.
Composition
Connexins
The purification of the intercellular gap junction plaques enriched in the channel forming protein (connexin) showed a protein forming hexagonal arrays in x-ray diffraction. Because of this, the systematic study and identification of the predominant gap junction protein became possible.
Refined ultrastructural studies by TEM showed protein occurred in a complementary fashion in both cells participating in a gap junction plaque. The gap junction plaque is a relatively large area of membrane observed in TEM thin section and freeze fracture (FF) seen filled with transmembrane proteins in both tissues and more gently treated gap junction preparations. With the apparent ability for one protein alone to enable intercellular communication seen in gap junctions the term gap junction tended to become synonymous with a group of assembled connexins though this was not shown in vivo. Biochemical analysis of gap junction isolated from various tissues demonstrated a family of connexins.
The ultrastructure and biochemistry of isolated gap junctions already referenced had indicated the connexins preferentially group in gap junction plaques or domains and connexins were the best characterized constituent. It has been noted that the organisation of proteins into arrays with a gap junction plaque may be significant. It is likely this early work was already reflecting the presence of more than just connexins in gap junctions. Combining the emerging fields of freeze-fracture to see inside membranes and immunocytochemistry to label cell components (Freeze-fracture replica immunolabelling or FRIL and thin section immunolabelling) showed gap junction plaques in vivo contained the connexin protein. Later studies using immunofluorescence microscopy of larger areas of tissue clarified diversity in earlier results. Gap junction plaques were confirmed to have variable composition being home to connexon and non-connexin proteins as well making the modern usage of the terms "gap junction" and "gap junction plaque" non-interchangeable. To summarize, in early literature the term "gap junction" referred to the regular gap between membranes in vertebrates and non-vertebrates apparently bridged by "globules". The junction correlated with the cell's ability to directly couple with its neighbors through pores in their membranes. Then for a while gap junctions were only referring to a structure that contains connexins and nothing more was thought to be involved. Later, the gap junction "plaque" was also found to contain other molecules that helped define it and make it function.
The "plaque" or "formation plaque"
Early descriptions of gap junctions, connexons or innexons did not refer to them as such; many other terms were used. It is likely that synaptic disks were an accurate reference to gap junction plaques. While the detailed structure and function of the connexon was described in a limited way at the time the gross disk structure was relatively large and easily seen by various TEM techniques. Disks allowed researchers using TEM to easily locate the connexons contained within the disk like patches in vivo and in vitro. The disk or plaque appeared to have structural properties different from those imparted by the connexons/innexons alone. It was thought that if the area of membrane in the plaque transmitted signals, the area of membrane would have to be sealed in some way to prevent leakage.
Later studies showed gap junction plaques are home to non-connexin proteins, making the modern usage of the terms "gap junction" and "gap junction plaque" non-interchangeable as the area of the gap junction plaque may contain proteins other than connexins. Just as connexins do not always occupy the entire area of the plaque, the other components described in the literature may be only long-term or short-term residents.
Studies allowing views inside the plane of the membrane of gap junctions during formation indicated that a "formation plaque" formed between two cells prior to the connexins moving in. They were particle-free areas when observed by TEM FF, indicating that few or no transmembrane proteins were likely present. Little is known about what structures make up the formation plaque or how the formation plaque's structure changes when connexins and other components move in and out. One of the earlier studies of the formation of small gap junctions describes rows of particles and particle-free halos. With larger gap junctions they were described as formation plaques with connexins moving into them. The particulate gap junctions were thought to form 4–6 hours after the formation plaques appeared. How the connexins may be transported to the plaques using tubulin is becoming clearer.
The formation plaque and the non-connexin part of the classical gap junction plaque were difficult for early researchers to analyse. In TEM FF and thin section it appears to be a lipid membrane domain that can somehow form a comparatively rigid barrier to other lipids and proteins. There is indirect evidence that certain lipids are preferentially involved with the formation plaque, but this cannot be considered definitive. It is difficult to envisage breaking up the membrane to analyse membrane plaques without affecting their composition. By studying connexins still in membranes, the lipids associated with the connexins have been examined. It was found that specific connexins tend to associate preferentially with specific phospholipids. As formation plaques precede connexins, these results still give no certainty as to what is unique about the composition of the plaques themselves. Other findings show that connexins associate with protein scaffolds used in another junction, the zonula occludens protein ZO-1. While this helps explain how connexins may be moved into a gap junction formation plaque, the composition of the plaque itself remains poorly defined. Some headway on the in vivo composition of the gap junction plaque is being made using TEM FRIL.
See also
Gap junction modulation
Gap junction protein
Innexin
Vinnexin
Intercalated disc
Ion channel
Junctional complex
Tight junction
References
Further reading
External links
Cell communication
Cell signaling
Cell anatomy
Articles containing video clips | Gap junction | [
"Biology"
] | 6,180 | [
"Cell communication",
"Cellular processes"
] |
337,196 | https://en.wikipedia.org/wiki/Neuroanatomy | Neuroanatomy is the study of the structure and organization of the nervous system. In contrast to animals with radial symmetry, whose nervous system consists of a distributed network of cells, animals with bilateral symmetry have segregated, defined nervous systems. Their neuroanatomy is therefore better understood. In vertebrates, the nervous system is segregated into the internal structure of the brain and spinal cord (together called the central nervous system, or CNS) and the series of nerves that connect the CNS to the rest of the body (known as the peripheral nervous system, or PNS). Breaking down and identifying specific parts of the nervous system has been crucial for figuring out how it operates. For example, much of what neuroscientists have learned comes from observing how damage or "lesions" to specific brain areas affects behavior or other neural functions.
For information about the composition of non-human animal nervous systems, see nervous system. For information about the typical structure of the Homo sapiens nervous system, see human brain or peripheral nervous system. This article discusses information pertinent to the study of neuroanatomy.
History
The first known written record of a study of the anatomy of the human brain is an ancient Egyptian document, the Edwin Smith Papyrus. In Ancient Greece, interest in the brain began with the work of Alcmaeon, who appeared to have dissected the eye and related the brain to vision. He also suggested that the brain, not the heart, was the organ that ruled the body (what Stoics would call the hegemonikon) and that the senses were dependent on the brain.
The debate regarding the hegemonikon persisted among ancient Greek philosophers and physicians for a very long time. Those who argued for the brain often contributed to the understanding of neuroanatomy as well. Herophilus and Erasistratus of Alexandria were perhaps the most influential with their studies involving dissecting human brains, affirming the distinction between the cerebrum and the cerebellum, and identifying the ventricles and the dura mater. The Greek physician and philosopher Galen, likewise, argued strongly for the brain as the organ responsible for sensation and voluntary motion, as evidenced by his research on the neuroanatomy of oxen, Barbary apes, and other animals.
The cultural taboo on human dissection continued for several hundred years afterward, which brought no major progress in the understanding of the anatomy of the brain or of the nervous system. However, Pope Sixtus IV effectively revitalized the study of neuroanatomy by altering the papal policy and allowing human dissection. This resulted in a flush of new activity by artists and scientists of the Renaissance, such as Mondino de Luzzi, Berengario da Carpi, and Jacques Dubois, and culminating in the work of Andreas Vesalius.
In 1664, Thomas Willis, a physician and professor at Oxford University, coined the term neurology when he published his text Cerebri Anatome, which is considered the foundation of modern neuroanatomy. The subsequent three hundred and fifty-odd years have produced a great deal of documentation and study of the nervous system.
Composition
At the tissue level, the nervous system is composed of neurons, glial cells, and extracellular matrix. Both neurons and glial cells come in many types (see, for example, the nervous system section of the list of distinct cell types in the adult human body). Neurons are the information-processing cells of the nervous system: they sense our environment, communicate with each other via electrical signals and chemicals called neurotransmitters which generally act across synapses (close contacts between two neurons, or between a neuron and a muscle cell; note also extrasynaptic effects are possible, as well as release of neurotransmitters into the neural extracellular space), and produce our memories, thoughts, and movements. Glial cells maintain homeostasis, produce myelin (oligodendrocytes, Schwann cells), and provide support and protection for the brain's neurons. Some glial cells (astrocytes) can even propagate intercellular calcium waves over long distances in response to stimulation, and release gliotransmitters in response to changes in calcium concentration. Wound scars in the brain largely contain astrocytes. The extracellular matrix also provides support on the molecular level for the brain's cells, vehiculating substances to and from the blood vessels.
At the organ level, the nervous system is composed of brain regions, such as the hippocampus in mammals or the mushroom bodies of the fruit fly. These regions are often modular and serve a particular role within the general systemic pathways of the nervous system. For example, the hippocampus is critical for forming memories in connection with many other cerebral regions. The peripheral nervous system also contains afferent or efferent nerves, which are bundles of fibers that originate from the brain and spinal cord, or from sensory or motor sorts of peripheral ganglia, and branch repeatedly to innervate every part of the body. Nerves are made primarily of the axons or dendrites of neurons (axons in case of efferent motor fibres, and dendrites in case of afferent sensory fibres of the nerves), along with a variety of membranes that wrap around and segregate them into nerve fascicles.
The vertebrate nervous system is divided into the central and peripheral nervous systems. The central nervous system (CNS) consists of the brain, retina, and spinal cord, while the peripheral nervous system (PNS) is made up of all the nerves and ganglia (packets of peripheral neurons) outside of the CNS that connect it to the rest of the body. The PNS is further subdivided into the somatic and autonomic nervous systems. The somatic nervous system is made up of "afferent" neurons, which bring sensory information from the somatic (body) sense organs to the CNS, and "efferent" neurons, which carry motor instructions out to the voluntary muscles of the body. The autonomic nervous system can work with or without the control of the CNS (that's why it is called 'autonomous'), and also has two subdivisions, called sympathetic and parasympathetic, which are important for transmitting motor orders to the body's basic internal organs, thus controlling functions such as heartbeat, breathing, digestion, and salivation. Autonomic nerves, unlike somatic nerves, contain only efferent fibers. Sensory signals coming from the viscera course into the CNS through the somatic sensory nerves (e.g., visceral pain), or through some particular cranial nerves (e.g., chemosensitive or mechanic signals).
Orientation in neuroanatomy
In anatomy in general and neuroanatomy in particular, several sets of topographic terms are used to denote orientation and location, which are generally referred to the body or brain axis (see Anatomical terms of location). The axis of the CNS is often wrongly assumed to be more or less straight, but it actually shows always two ventral flexures (cervical and cephalic flexures) and a dorsal flexure (pontine flexure), all due to differential growth during embryogenesis. The pairs of terms used most commonly in neuroanatomy are:
Dorsal and ventral: Dorsal refers more or less to the top or upper side of the brain, which is symbolized by the floor plate, and ventral to the bottom or lower side. These descriptors originally were used for dorsum and ventrum – back and belly – of the body; the belly of most animals is oriented towards the ground; the erect posture of humans places our ventral aspect anteriorly, and the dorsal aspect becomes posterior. The case of the head and the brain is peculiar, since the belly does not properly extend into the head, unless we assume that the mouth represents an extended belly element. Therefore, in common use, those brain parts that lie close to the base of the cranium, and through it to the mouth cavity, are called ventral – i.e., at its bottom or lower side, as defined above – whereas dorsal parts are closer to the enclosing cranial vault. Reference to the roof and floor plates of the brain is less prone to confusion, also allow us to keep an eye on the axial flexures mentioned above. Dorsal and ventral are thus relative terms in the brain, whose exact meaning depends on the specific location.
Rostral and caudal: rostral refers in general anatomy to the front of the body (towards the nose, or rostrum in Latin), and caudal refers to the tail end of the body (towards the tail; cauda in Latin). The rostrocaudal dimension of the brain corresponds to its length axis, which runs across the cited flexures from the caudal tip of the spinal cord into a rostral end roughly at the optic chiasma. In the erect Man, the directional terms "superior" and "inferior" essentially refer to this rostrocaudal dimension, because our body and brain axes are roughly oriented vertically in the erect position. However, all vertebrates develop a very marked ventral kink in the neural tube that is still detectable in the adult central nervous system, known as the cephalic flexure. The latter bends the rostral part of the CNS at a 180-degree angle relative to the caudal part, at the transition between the forebrain (axis ending rostrally at the optic chiasma) and the brainstem and spinal cord (axis roughly vertical, but including additional minor kinks at the pontine and cervical flexures) These flexural changes in axial dimension are problematic when trying to describe relative position and sectioning planes in the brain. There is abundant literature that wrongly disregards the axial flexures and assumes a relatively straight brain axis.
Medial and lateral: medial refers to being close, or relatively closer, to the midline (the descriptor median means a position precisely at the midline). Lateral is the opposite (a position more or less separated away from the midline).
Note that such descriptors (dorsal/ventral, rostral/caudal; medial/lateral) are relative rather than absolute (e.g., a lateral structure may be said to lie medial to something else that lies even more laterally).
Commonly used terms for planes of orientation or planes of section in neuroanatomy are "sagittal", "transverse" or "coronal", and "axial" or "horizontal". Again in this case, the situation is different for swimming, creeping or quadrupedal (prone) animals than for Man, or other erect species, due to the changed position of the axis. Due to the axial brain flexures, no section plane ever achieves a complete section series in a selected plane, because some sections inevitably end up cut obliquely or even perpendicular to it, as they pass through the flexures. Experience allows one to discern the portions that are cut as desired.
A mid-sagittal plane divides the body and brain into left and right halves; sagittal sections, in general, are parallel to this median plane, moving along the medial-lateral dimension (see the image above). The term sagittal refers etymologically to the median suture between the right and left parietal bones of the cranium, known classically as sagittal suture, because it looks roughly like an arrow by its confluence with other sutures (sagitta; arrow in Latin).
A section plane orthogonal to the axis of any elongated form in principle is held to be transverse (e.g., a transverse section of a finger or of the vertebral column); if there is no length axis, there is no way to define such sections, or there are infinite possibilities. Therefore, transverse body sections in vertebrates are parallel to the ribs, which are orthogonal to the vertebral column, which represents the body axis both in animals and man. The brain also has an intrinsic longitudinal axis – that of the primordial elongated neural tube – which becomes largely vertical with the erect posture of Man, similarly to the body axis, except at its rostral end, as commented above. This explains why transverse spinal cord sections are roughly parallel to our ribs, or to the ground. However, this is only true for the spinal cord and the brainstem, since the forebrain end of the neural axis bends crook-like during early morphogenesis into the chiasmatic hypothalamus, where it ends; the orientation of true transverse sections accordingly changes, and is no longer parallel to the ribs and ground, but perpendicular to them; lack of awareness of this morphologic brain peculiarity (present in all vertebrate brains without exceptions) has caused and still causes much erroneous thinking about forebrain parts. Acknowledging the singularity of rostral transverse sections, tradition has introduced a different descriptor for them, namely coronal sections. Coronal sections divide the forebrain from rostral (front) to caudal (back), forming a series orthogonal (transverse) to the local bent axis. The concept cannot be applied meaningfully to the brainstem and spinal cord, since there the coronal sections become horizontal to the axial dimension, being parallel to the axis. In any case, the concept of 'coronal' sections is less precise than that of 'transverse', since often coronal section planes are used which are not truly orthogonal to the rostral end of the brain axis. The term is etymologically related to the coronal suture of the cranium, and this in turn to the position where crowns are worn (Latin corona means crown). It is not clear what sort of crown was meant originally (perhaps just a diadem), and this leads unfortunately to ambiguity in the section plane defined merely as coronal.
A coronal plane across the human head and brain is modernly conceived to be parallel to the face (the plane in which a king's crown sits on his head is not exactly parallel to the face, and exportation of the concept to less frontally endowed animals than us is obviously even more conflictive, but there is an implicit reference to the coronal suture of the cranium, which forms between the frontal and temporal/parietal bones, giving a sort of diadema configuration which is roughly parallel to the face). Coronal section planes thus essentially refer only to the head and brain, where a diadema makes sense, and not to the neck and body below.
Horizontal sections by definition are aligned (parallel) with the horizon. In swimming, creeping and quadrupedal animals the body axis itself is horizontal, and, thus, horizontal sections run along the length of the spinal cord, separating ventral from dorsal parts. Horizontal sections are orthogonal to both transverse and sagittal sections, and in theory, are parallel to the length axis. Due to the axial bend in the brain (forebrain), true horizontal sections in that region are orthogonal to coronal (transverse) sections (as is the horizon relative to the face).
According to these considerations, the three directions of space are represented precisely by the sagittal, transverse and horizontal planes, whereas coronal sections can be transverse, oblique or horizontal, depending on how they relate to the brain axis and its incurvations.
Tools
Modern developments in neuroanatomy are directly correlated to the technologies used to perform research. Therefore, it is necessary to discuss the various tools that are available. Many of the histological techniques used to study other tissues can be applied to the nervous system as well. However, there are some techniques that have been developed especially for the study of neuroanatomy.
Cell staining
In biological systems, staining is a technique used to enhance the contrast of particular features in microscopic images.
Nissl staining uses aniline basic dyes to intensely stain the acidic polyribosomes in the rough endoplasmic reticulum, which is abundant in neurons. This allows researchers to distinguish between different cell types (such as neurons and glia), and neuronal shapes and sizes, in various regions of the nervous system cytoarchitecture.
The classic Golgi stain uses potassium dichromate and silver nitrate to fill selectively with a silver chromate precipitate a few neural cells (neurons or glia, but in principle, any cells can react similarly). This so-called silver chromate impregnation procedure stains entirely or partially the cell bodies and neurites of some neurons -dendrites, axon- in brown and black, allowing researchers to trace their paths up to their thinnest terminal branches in a slice of nervous tissue, thanks to the transparency consequent to the lack of staining in the majority of surrounding cells. Modernly, Golgi-impregnated material has been adapted for electron-microscopic visualization of the unstained elements surrounding the stained processes and cell bodies, thus adding further resolutive power.
Histochemistry
Histochemistry uses knowledge about the biochemical reaction properties of the chemical constituents of the brain (notably enzymes) to apply selective methods of reaction to visualize where they occur in the brain and any functional or pathological changes. This applies importantly to molecules related to neurotransmitter production and metabolism, but applies likewise in many other directions; this approach is often termed chemoarchitecture, or chemical neuroanatomy.
Immunocytochemistry is a special case of histochemistry that uses selective antibodies against a variety of chemical epitopes of the nervous system to selectively stain particular cell types, axonal fascicles, neuropiles, glial processes or blood vessels, or specific intracytoplasmic or intranuclear proteins and other immunogenetic molecules, e.g., neurotransmitters. Immunoreacted transcription factor proteins reveal genomic readout in terms of translated protein. This immensely increases the capacity of researchers to distinguish between different cell types (such as neurons and glia) in various regions of the nervous system.
In situ hybridization uses synthetic RNA probes that attach (hybridize) selectively to complementary mRNA transcripts of DNA exons in the cytoplasm, to visualize genomic readout, that is, distinguish active gene expression, in terms of mRNA rather than protein. This allows identification histologically (in situ) of the cells involved in the production of genetically-coded molecules, which often represent differentiation or functional traits, as well as the molecular boundaries separating distinct brain domains or cell populations.
Genetically encoded markers
By expressing variable amounts of red, green, and blue fluorescent proteins in the brain, the so-called "brainbow" mutant mouse allows the combinatorial visualization of many different colors in neurons. This tags neurons with enough unique colors that they can often be distinguished from their neighbors with fluorescence microscopy, enabling researchers to map the local connections or mutual arrangement (tiling) between neurons.
Optogenetics uses transgenic constitutive and site-specific expression (normally in mice) of blocked markers that can be activated selectively by illumination with a light beam. This allows researchers to study axonal connectivity in the nervous system in a very discriminative way.
Non-invasive brain imaging
Magnetic resonance imaging has been used extensively to investigate brain structure and function non-invasively in healthy human subjects. An important example is diffusion tensor imaging, which relies on the restricted diffusion of water in tissue in order to produce axon images. In particular, water moves more quickly along the direction aligned with the axons, permitting the inference of their structure.
Viral-based methods
Certain viruses can replicate in brain cells and cross synapses. So, viruses modified to express markers (such as fluorescent proteins) can be used to trace connectivity between brain regions across multiple synapses. Two tracer viruses which replicate and spread transneuronal/transsynaptic are the Herpes simplex virus type1 (HSV) and the Rhabdoviruses. Herpes simplex virus was used to trace the connections between the brain and the stomach, in order to examine the brain areas involved in viscero-sensory processing. Another study injected herpes simplex virus into the eye, thus allowing the visualization of the optical pathway from the retina into the visual system. An example of a tracer virus which replicates from the synapse to the soma is the pseudorabies virus. By using pseudorabies viruses with different fluorescent reporters, dual infection models can parse complex synaptic architecture.
Dye-based methods
Axonal transport methods use a variety of dyes (horseradish peroxidase variants, fluorescent or radioactive markers, lectins, dextrans) that are more or less avidly absorbed by neurons or their processes. These molecules are selectively transported anterogradely (from soma to axon terminals) or retrogradely (from axon terminals to soma), thus providing evidence of primary and collateral connections in the brain. These 'physiologic' methods (because properties of living, unlesioned cells are used) can be combined with other procedures, and have essentially superseded the earlier procedures studying degeneration of lesioned neurons or axons. Detailed synaptic connections can be determined by correlative electron microscopy.
Connectomics
Serial section electron microscopy has been extensively developed for use in studying nervous systems. For example, the first application of serial block-face scanning electron microscopy was on rodent cortical tissue. Circuit reconstruction from data produced by this high-throughput method is challenging, and the Citizen science game EyeWire has been developed to aid research in that area.
Computational neuroanatomy
Computational neuroanatomy is a field that uses various imaging modalities and computational techniques to model and quantify the spatiotemporal dynamics of neuroanatomical structures in both normal and clinical populations.
Model systems
Aside from the human brain, there are many other animals whose brains and nervous systems have received extensive study as model systems, including mice, zebrafish, fruit flies, and a species of roundworm called C. elegans. Each of these has its own advantages and disadvantages as a model system. For example, the C. elegans nervous system is extremely stereotyped from one individual worm to the next. This has allowed researchers using electron microscopy to map the paths and connections of all of the 302 neurons in this species. The fruit fly is widely studied in part because its genetics is very well understood and easily manipulated. The mouse is used because, as a mammal, its brain is more similar in structure to our own (e.g., it has a six-layered cortex), yet its genes can be easily modified and its reproductive cycle is relatively fast.
Caenorhabditis elegans
The brain is small and simple in some species, such as the nematode worm, where the body plan is quite simple: a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The nematode Caenorhabditis elegans has been studied because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model system for studying the way that genes control development, including neuronal development. One advantage of working with this worm is that the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed every section under an electron microscope, then visually matched fibers from section to section, to map out every neuron and synapse in the entire body, to give a complete connectome of the nematode. Nothing approaching this level of detail is available for any other organism, and the information has been used to enable a multitude of studies that would not have been possible without it.
Drosophila melanogaster
Drosophila melanogaster is a popular experimental animal because it is easily cultured en masse from the wild, has a short generation time, and mutant animals are readily obtainable.
Arthropods have a central brain with three divisions and large optical lobes behind each eye for visual processing. The brain of a fruit fly contains several million synapses, compared to at least 100 billion in the human brain. Approximately two-thirds of the Drosophila brain is dedicated to visual processing.
Thomas Hunt Morgan started to work with Drosophila in 1906, and this work earned him the 1933 Nobel Prize in Medicine for identifying chromosomes as the vector of inheritance for genes. Because of the large array of tools available for studying Drosophila genetics, they have been a natural subject for studying the role of genes in the nervous system. The genome has been sequenced and published in 2000. About 75% of known human disease genes have a recognizable match in the genome of fruit flies. Drosophila is being used as a genetic model for several human neurological diseases including the neurodegenerative disorders Parkinson's, Huntington's, spinocerebellar ataxia and Alzheimer's disease. In spite of the large evolutionary distance between insects and mammals, many basic aspects of Drosophila neurogenetics have turned out to be relevant to humans. For instance, the first biological clock genes were identified by examining Drosophila mutants that showed disrupted daily activity cycles.
See also
Connectogram
Outline of the human brain
Outline of brain mapping
List of regions in the human brain
Medical image computing
Neurology
Neurodiversity
Neuroscience
Computational anatomy
Citations
Sources
External links
Neuroanatomy, an annual journal of clinical neuroanatomy
Mouse, Rat, Primate and Human Brain Atlases (UCLA Center for Computational Biology)
brainmaps.org: High-Resolution Neuroanatomically-Annotated Brain Atlases
BrainInfo for Neuroanatomy
Brain Architecture Management System, several atlases of brain anatomy
White Matter Atlas, Diffusion Tensor Imaging Atlas of the Brain's White Matter Tracts
Nervous system | Neuroanatomy | [
"Biology"
] | 5,496 | [
"Organ systems",
"Nervous system"
] |
337,279 | https://en.wikipedia.org/wiki/Self-ionization%20of%20water | The self-ionization of water (also autoionization of water, autoprotolysis of water, autodissociation of water, or simply dissociation of water) is an ionization reaction in pure water or in an aqueous solution, in which a water molecule, H2O, deprotonates (loses the nucleus of one of its hydrogen atoms) to become a hydroxide ion, OH−. The hydrogen nucleus, H+, immediately protonates another water molecule to form a hydronium cation, H3O+. It is an example of autoprotolysis, and exemplifies the amphoteric nature of water.
History and notation
The self-ionization of water was first proposed in 1884 by Svante Arrhenius as part of the theory of ionic dissociation which he proposed to explain the conductivity of electrolytes including water. Arrhenius wrote the self-ionization as H2O ⇌ H+ + OH−. At that time, nothing was yet known of atomic structure or subatomic particles, so he had no reason to consider the formation of an H+ ion from a hydrogen atom on electrolysis as any less likely than, say, the formation of a Na+ ion from a sodium atom.
In 1923 Johannes Nicolaus Brønsted and Martin Lowry proposed that the self-ionization of water actually involves two water molecules: H2O + H2O ⇌ H3O+ + OH−. By this time the electron and the nucleus had been discovered and Rutherford had shown that a nucleus is very much smaller than an atom. This implied that a bare H+ ion would simply be a proton, with no electrons at all. Brønsted and Lowry proposed that this ion does not exist free in solution, but always attaches itself to a water (or other solvent) molecule to form the hydronium ion H3O+ (or other protonated solvent).
Later spectroscopic evidence has shown that many protons are actually hydrated by more than one water molecule. The most descriptive notation for the hydrated ion is H+(aq), where aq (for aqueous) indicates an indefinite or variable number of water molecules. However the notations H+ and H3O+ are still also used extensively because of their historical importance. This article mostly represents the hydrated proton as H3O+, corresponding to hydration by a single water molecule.
Equilibrium constant
Chemically pure water has an electrical conductivity of 0.055 μS/cm. According to the theories of Svante Arrhenius, this must be due to the presence of ions. The ions are produced by the water self-ionization reaction, which applies to pure water and any aqueous solution:
H2O + H2O ⇌ H3O+ + OH−
Expressed with chemical activities a, instead of concentrations, the thermodynamic equilibrium constant for the water ionization reaction is: Keq = a(H3O+) · a(OH−) / a(H2O)^2
which is numerically equal to the more traditional thermodynamic equilibrium constant written as: Keq = a(H+) · a(OH−) / a(H2O)
under the assumption that the sum of the chemical potentials of H+ and H2O is formally equal to the chemical potential of H3O+ at the same temperature and pressure.
Because most acid–base solutions are typically very dilute, the activity of water is generally approximated as being equal to unity, which allows the ionic product of water to be expressed as: Kw = a(H3O+) · a(OH−)
In dilute aqueous solutions, the activities of solutes (dissolved species such as ions) are approximately equal to their concentrations. Thus, the ionization constant, dissociation constant, self-ionization constant, water ion-product constant or ionic product of water, symbolized by Kw, may be given by: Kw = [H3O+][OH−]
where [H3O+] is the molarity (molar concentration) of hydrogen cation or hydronium ion, and [OH−] is the concentration of hydroxide ion. When the equilibrium constant is written as a product of concentrations (as opposed to activities) it is necessary to make corrections to the value of Kw depending on ionic strength and other factors (see below).
At 24.87 °C and zero ionic strength, Kw is equal to 1.0 × 10^−14. Note that as with all equilibrium constants, the result is dimensionless because the concentration is in fact a concentration relative to the standard state, which for H+ and OH− are both defined to be 1 molal (= 1 mol/kg) when molality is used or 1 molar (= 1 mol/L) when molar concentration is used. For many practical purposes, the molality (mol solute/kg water) and molar (mol solute/L solution) concentrations can be considered as nearly equal at ambient temperature and pressure if the solution density remains close to one (i.e., sufficiently diluted solutions and negligible effect of temperature changes). The main advantage of the molal concentration unit (mol/kg water) is to result in stable and robust concentration values which are independent of the solution density and volume changes (density depending on the water salinity (ionic strength), temperature and pressure); therefore, molality is the preferred unit used in thermodynamic calculations or in precise or less-usual conditions, e.g., for seawater with a density significantly different from that of pure water, or at elevated temperatures, like those prevailing in thermal power plants.
We can also define pKw = −log10 Kw (which is approximately 14 at 25 °C). This is analogous to the notations pH and pKa for an acid dissociation constant, where the symbol p denotes a cologarithm. The logarithmic form of the equilibrium constant equation is pKw = pH + pOH.
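These logarithmic relationships can be made concrete with a minimal Python sketch. It is purely illustrative (not part of the cited sources) and assumes the 25 °C value pKw = 13.997, which is the figure tabulated later in this article:

```python
import math

# Ion product of water at 25 degrees C (pKw = 13.997, from the table of pKw values below)
pKw = 13.997
Kw = 10 ** (-pKw)          # Kw = [H3O+][OH-], about 1.0e-14

# In pure water the two ion concentrations are equal,
# so each one is the square root of Kw.
h3o = math.sqrt(Kw)        # mol/L
oh = Kw / h3o              # mol/L, equal to h3o in pure water

pH = -math.log10(h3o)
pOH = -math.log10(oh)

print(f"[H3O+] = [OH-] = {h3o:.2e} mol/L")
print(f"pH = {pH:.3f}, pOH = {pOH:.3f}, pH + pOH = {pH + pOH:.3f} = pKw")
```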
Dependence on temperature, pressure and ionic strength
The dependence of the water ionization on temperature and pressure has been investigated thoroughly. The value of pKw decreases as temperature increases from the melting point of ice to a minimum at c. 250 °C, after which it increases up to the critical point of water, c. 374 °C. It decreases with increasing pressure.
With electrolyte solutions, the value of pKw is dependent on ionic strength of the electrolyte. Values for sodium chloride are typical for a 1:1 electrolyte. With 1:2 electrolytes, MX2, pKw decreases with increasing ionic strength.
The value of Kw is usually of interest in the liquid phase. Example values for superheated steam (gas) and supercritical water fluid are given in the table.
{| class="wikitable" style="text-align:center"
|+Comparison of pKw values for liquid water, superheated steam, and supercritical water.
|-
! !! 350 °C !! 400 °C !! 450 °C !! 500 °C !! 600 °C !! 800 °C
|-
! scope="row" |0.1 MPa
||| 47.961b || 47.873b || 47.638b || 46.384b ||40.785b
|-
! scope="row" |17 MPa
|11.920 (liquid)a || || || || ||
|-
! scope="row" |25 MPa
|11.551 (liquid)c ||16.566||18.135||18.758||19.425||20.113
|-
! scope="row" |100 MPa
|10.600 (liquid)c ||10.744||11.005||11.381||12.296||13.544
|-
! scope="row" |1000 MPa
|8.311 (liquid)c ||8.178||8.084||8.019||7.952||7.957
|}
Notes to the table. The values are for supercritical fluid except those marked: a at saturation pressure corresponding to 350 °C. b superheated steam. c compressed or subcooled liquid.
Isotope effects
Heavy water, D2O, self-ionizes less than normal water, H2O:
D2O + D2O ⇌ D3O+ + OD−
This is due to the equilibrium isotope effect, a quantum mechanical effect attributed to oxygen forming a slightly stronger bond to deuterium because the larger mass of deuterium results in a lower zero-point energy.
Expressed with activities a, instead of concentrations, the thermodynamic equilibrium constant for the heavy water ionization reaction is: Keq = a(D3O+) · a(OD−) / a(D2O)^2
Assuming the activity of the D2O to be 1, and assuming that the activities of the D3O+ and OD− are closely approximated by their concentrations, the ionic product of heavy water is: Kw = [D3O+][OD−]
The following table compares the values of pKw for H2O and D2O.
{| class="wikitable" style="text-align:center"
|+pKw values for pure water
|-
! scope="row" |T/°C
|10||20|| 25||30|| 40 || 50
|-
! scope="row" |H2O
|14.535 || 14.167|| 13.997|| 13.830|| 13.535 ||13.262
|-
! scope="row" |D2O
|15.439||15.049||14.869||14.699||14.385|| 14.103
|}
Ionization equilibria in water–heavy water mixtures
In water–heavy water mixtures, several species are involved in the ionization equilibria: H2O, HDO, D2O, H3O+, D3O+, H2DO+, HD2O+, HO−, DO−.
Mechanism
The rate of reaction for the ionization reaction
2 H2O → H3O+ + OH−
depends on the activation energy, ΔE‡. According to the Boltzmann distribution, the proportion of water molecules that have sufficient energy, due to thermal population, is given by exp(−ΔE‡/kT),
where k is the Boltzmann constant and T is the absolute temperature. Thus some dissociation can occur because sufficient thermal energy is available. The following sequence of events has been proposed on the basis of electric field fluctuations in liquid water. Random fluctuations in molecular motions occasionally (about once every 10 hours per water molecule) produce an electric field strong enough to break an oxygen–hydrogen bond, resulting in a hydroxide (OH−) and hydronium ion (H3O+); the hydrogen nucleus of the hydronium ion travels along water molecules by the Grotthuss mechanism and a change in the hydrogen bond network in the solvent isolates the two ions, which are stabilized by solvation. Within 1 picosecond, however, a second reorganization of the hydrogen bond network allows rapid proton transfer down the electric potential difference and subsequent recombination of the ions. This timescale is consistent with the time it takes for hydrogen bonds to reorientate themselves in water.
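A rough numerical sketch of the Boltzmann factor above is given by the following Python fragment. The activation energy used here is a hypothetical, purely illustrative value (roughly 1 eV); the actual barrier for water dissociation is not quoted in this article:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # room temperature, K

# Hypothetical activation energy for illustration only (~1 eV = 1.6e-19 J);
# the true value for water dissociation is not given here.
delta_E = 1.6e-19    # J

fraction = math.exp(-delta_E / (k_B * T))
print(f"Boltzmann factor exp(-dE/kT) = {fraction:.2e}")
```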
The inverse recombination reaction
H3O+ + OH− → 2 H2O
is among the fastest chemical reactions known, with a second-order rate constant on the order of 10^11 L mol−1 s−1 at room temperature. Such a rapid rate is characteristic of a diffusion-controlled reaction, in which the rate is limited by the speed of molecular diffusion.
Relationship with the neutral point of water
Water molecules dissociate into equal amounts of H3O+ and OH−, so their concentrations are almost exactly 1.0 × 10^−7 mol/L at 25 °C and 0.1 MPa. A solution in which the H3O+ and OH− concentrations equal each other is considered a neutral solution. In general, the pH of the neutral point is numerically equal to one half of pKw.
Pure water is neutral, but most water samples contain impurities. If an impurity is an acid or base, this will affect the concentrations of hydronium ion and hydroxide ion. Water samples that are exposed to air will absorb some carbon dioxide to form carbonic acid (H2CO3) and the concentration of H3O+ will increase due to the reaction H2CO3 + H2O ⇌ HCO3− + H3O+. The concentration of OH− will decrease in such a way that the product [H3O+][OH−] remains constant for fixed temperature and pressure. Thus these water samples will be slightly acidic. If a pH of exactly 7.0 is required, it must be maintained with an appropriate buffer solution.
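Since the neutral pH equals pKw/2, the temperature dependence of the neutral point can be illustrated with a short Python sketch using the pKw values for pure H2O tabulated earlier in this article (an illustrative calculation, not drawn from the cited sources):

```python
# pKw values for pure H2O from the table above (key: temperature in degrees Celsius)
pKw_by_temp = {10: 14.535, 20: 14.167, 25: 13.997, 30: 13.830, 40: 13.535, 50: 13.262}

for t, pKw in pKw_by_temp.items():
    neutral_pH = pKw / 2          # at the neutral point [H3O+] = [OH-], so pH = pOH = pKw/2
    print(f"{t:>2} degrees C: pKw = {pKw:.3f}, neutral pH = {neutral_pH:.3f}")
```

The output shows that the neutral point of pure water is only approximately 7, and shifts to lower pH as the temperature rises.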
See also
Acid–base reaction
Chemical equilibrium
Molecular autoionization (of various solvents)
Standard hydrogen electrode
References
External links
General Chemistry – Autoionization of Water
Ionization
Water chemistry
Equilibrium chemistry
Water
de:Protolyse#Autoprotolyse | Self-ionization of water | [
"Physics",
"Chemistry"
] | 2,675 | [
"Ionization",
"Acid–base chemistry",
"Physical phenomena",
"Equilibrium chemistry",
"nan"
] |
337,301 | https://en.wikipedia.org/wiki/National%20Ignition%20Facility | The National Ignition Facility (NIF) is a laser-based inertial confinement fusion (ICF) research device, located at Lawrence Livermore National Laboratory in Livermore, California, United States. NIF's mission is to achieve fusion ignition with high energy gain. It achieved the first instance of scientific breakeven controlled fusion in an experiment on December 5, 2022, with an energy gain factor of 1.5. It supports nuclear weapon maintenance and design by studying the behavior of matter under the conditions found within nuclear explosions.
NIF is the largest and most powerful ICF device built to date. The basic ICF concept is to squeeze a small amount of fuel to reach the pressure and temperature necessary for fusion. NIF hosts the world's most energetic laser. The laser indirectly heats the outer layer of a small sphere. The energy is so intense that it causes the sphere to implode, squeezing the fuel inside. The implosion reaches a very high peak speed, raising the fuel density from about that of water to about 100 times that of lead. The delivery of energy and the adiabatic process during implosion raise the temperature of the fuel to hundreds of millions of degrees. At these temperatures, fusion processes occur in the tiny interval before the fuel explodes outward.
Construction on the NIF began in 1997. NIF was completed five years behind schedule and cost almost four times its original budget. Construction was certified complete on March 31, 2009, by the U.S. Department of Energy. The first large-scale experiments were performed in June 2009 and the first "integrated ignition experiments" (which tested the laser's power) were declared completed in October 2010.
From 2009 to 2012 experiments were conducted under the National Ignition Campaign, with the goal of reaching ignition just after the laser reached full power, some time in the second half of 2012. The campaign officially ended in September 2012, at about the conditions needed for ignition. Thereafter NIF has been used primarily for materials science and weapons research. In 2021, after improvements in fuel target design, NIF produced 70% of the energy of the laser, beating the record set in 1997 by the JET reactor at 67% and achieving a burning plasma. On December 5, 2022, after further technical improvements, NIF reached "ignition", or scientific breakeven, for the first time, achieving a 154% energy yield compared to the input laser energy. However, while this was scientifically a success, the experiment in practice produced less than 1% of the energy the facility consumed: 3.15 MJ of fusion energy was produced from 2.05 MJ of laser energy delivered to the target, but generating that laser pulse drew about 300 MJ of electrical energy in the facility.
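The distinction between target gain and facility-level energy balance in the December 2022 shot can be followed with a short, purely illustrative Python calculation using the figures quoted above (2.05 MJ of laser energy on target, 3.15 MJ of fusion yield, roughly 300 MJ of electrical energy drawn to fire the lasers):

```python
laser_on_target_MJ = 2.05   # UV laser energy delivered to the target
fusion_yield_MJ = 3.15      # fusion energy released by the capsule
facility_input_MJ = 300.0   # approximate electrical energy used to fire the lasers

target_gain = fusion_yield_MJ / laser_on_target_MJ
facility_ratio = fusion_yield_MJ / facility_input_MJ

print(f"Target gain Q = {target_gain:.2f}  (scientific breakeven means Q > 1)")
print(f"Fusion yield as a fraction of electrical input: {facility_ratio:.1%}")
```

The first figure is the "scientific breakeven" gain of about 1.5; the second shows why the shot nonetheless returned only on the order of one percent of the electricity the facility consumed.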
Inertial confinement fusion basics
Inertial confinement fusion (ICF) devices use intense energy to rapidly heat the outer layers of a target in order to compress it. Nuclear fission provides the energy source for thermonuclear warheads, while sources such as laser beams and particle beams are used in non-weapon devices.
The target is a small spherical pellet containing a few milligrams of fusion fuel, typically a mix of deuterium (D) and tritium (T), as this composition has the lowest ignition temperature.
The lasers can either heat the surface of the fuel pellet directly – known as direct drive – or heat the inner surface of a hollow metal cylinder around the pellet – known as indirect drive. In the indirect drive case, the cylinder, called a hohlraum (German for 'hollow room' or 'cavity'), becomes hot enough to re-emit the energy as even higher frequency X-rays. These X-rays, which are more symmetrically distributed than the original laser light, heat the surface of the pellet.
In either case, the material on the outside of the pellet is turned into a plasma, which explodes away from the surface. The rest of the pellet is driven inward on all sides, into a small volume of extremely high density. The surface explosion creates shock waves that travel inward. At the center of the fuel, a small volume is further heated and compressed. When the temperature and density are high enough, fusion reactions occur. The energy must be delivered quickly and spread extremely evenly across the target's outer surface in order to compress the fuel symmetrically.
The reactions release high-energy particles, some of which, primarily alpha particles, collide with unfused fuel and heat it further, potentially triggering additional fusion. At the same time, the fuel is also losing heat through x-ray losses and hot electrons leaving the fuel area. Thus the rate of alpha heating must be greater than the loss rate, termed bootstrapping. Given the right conditions—high enough density, temperature, and duration—bootstrapping results in a chain reaction, burning outward from the center. This is known as ignition, which fuses a significant portion of the fuel and releases large amounts of energy.
As of 1998, most ICF experiments had used laser drivers. Other drivers have been examined, such as heavy ions driven by particle accelerators.
Design
System
NIF primarily uses the indirect drive method of operation, in which the laser heats a small metal cylinder surrounding the capsule inside it, which then emits X-rays that heat the fuel pellet. Experimental systems, including the OMEGA and Nova lasers, validated this approach. The NIF's high power supports a much larger target than OMEGA or Nova; the baseline pellet design is about 2 mm in diameter. It is chilled to about 18 kelvin (−255 °C) and lined with a layer of frozen deuterium–tritium (DT) fuel. The hollow interior contains a small amount of DT gas.
In a typical experiment, the laser generates 3 MJ of infrared laser energy of a possible 4 MJ. About 1.5 MJ remains after conversion to UV, and another 15 percent is lost in the hohlraum. About 15 percent of the resulting x-rays, about 150 kJ, are absorbed by the target's outer layers. The coupling between the capsule and the x-rays is lossy, and ultimately only about 10 to 14 kJ of energy is deposited in the fuel.
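The energy cascade described in this paragraph can be tabulated stage by stage with a small illustrative Python sketch (the stage values are the approximate figures quoted above, not precise measurements):

```python
# Approximate energy cascade for a typical indirect-drive shot
# (rough stage values taken from the text above)
stages_kJ = {
    "IR laser energy generated": 3000.0,
    "UV energy after frequency conversion": 1500.0,
    "X-rays absorbed by capsule outer layers": 150.0,
    "Energy deposited in the fuel": 12.0,   # quoted as about 10 to 14 kJ
}

ir = stages_kJ["IR laser energy generated"]
for name, energy in stages_kJ.items():
    print(f"{name:<40s} {energy:>8.0f} kJ  ({energy / ir:.2%} of IR energy)")
```

The final line makes clear that well under one percent of the generated laser energy ends up in the fuel itself.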
The fuel in the center of the target is compressed to a density of about 1000 g/cm3. For comparison, lead has a density of about 11 g/cm3. The pressure is the equivalent of 300 billion atmospheres.
Before NIF was constructed, it was expected based on simulations that 10–15 MJ of fusion energy would be released, resulting in a net fusion energy gain, denoted Q, of about 5–8 (fusion energy out/UV laser energy in). Due to the design of the target chamber, the baseline design limited the maximum possible fusion energy release to 45 MJ, equivalent to about 11 kg of TNT exploding.
When NIF was built and used in 2011, the fusion energy was far lower than expected – less than 1 kJ. Performance was gradually improved until, as of 2024, the fusion energy routinely exceeded 2 MJ.
To be useful for energy production, a fusion facility must produce fusion output at least an order of magnitude more than the energy used to power the laser amplifiers – 400 MJ in the case of NIF. Commercial laser fusion systems would use much more efficient diode-pumped solid state lasers, where wall-plug efficiencies of 10 percent have been demonstrated, and efficiencies 16–18 percent were expected with advanced concepts under development in 1996.
Laser
As of 2010 NIF aimed to create a single 500 terawatt (TW) peak flash of light that reaches the target from numerous directions within a few picoseconds. The design uses 192 beamlines in a parallel system of flashlamp-pumped, neodymium-doped phosphate glass lasers.
To ensure that the output of the beamlines is uniform, the laser is amplified from a single source in the Injection Laser System (ILS). This starts with a low-power flash of 1053-nanometer (nm) infrared light generated in an ytterbium-doped optical fiber laser termed Master Oscillator. Its light is split and directed into 48 Preamplifier Modules (PAMs). Each PAM conducts a two-stage amplification process via xenon flash lamps. The first stage is a regenerative amplifier in which the pulse circulates 30 to 60 times, increasing its energy from nanojoules to tens of millijoules. The second stage sends the light four times through a circuit containing a neodymium glass amplifier similar to (but much smaller than) the ones used in the main beamlines, boosting the millijoules to about 6 joules. According to LLNL, designing the PAMs was one of the major challenges. Subsequent improvements allowed them to surpass their initial design goals.
The main amplification takes place in a series of glass amplifiers located at one end of the beamlines. Before firing, the amplifiers are first optically pumped by a total of 7,680 flash lamps. The lamps are powered by a capacitor bank that stores 400 MJ (110 kWh). When the wavefront passes through them, the amplifiers release some of the energy stored in them into the beam. The beams are sent through the main amplifier four times, using an optical switch located in a mirrored cavity. These amplifiers boost the original 6 J to a nominal 4 MJ. Given the time scale of a few nanoseconds, the peak UV power delivered to the target reaches 500 TW.
Near the center of each beamline, and taking up the majority of the total length, are spatial filters. These consist of long tubes with small telescopes at the end that focus the beam to a tiny point in the center of the tube, where a mask cuts off any stray light outside the focal point. The filters ensure that the beam image is extremely uniform. Spatial filters were a major step forward. They were introduced in the Cyclops laser, an earlier LLNL experiment.
The path the laser beam travels from end to end, including switches, spans the length of the building. The various optical elements in the beamlines are generally packaged into Line Replaceable Units (LRUs), standardized boxes about the size of a vending machine that can be dropped out of the beamline for replacement from below.
After amplification is complete the light is switched back into the beamline, where it runs to the far end of the building to the target chamber. The target chamber is a massive multi-piece steel sphere. Just before reaching the target chamber, the light is reflected off mirrors in the switchyard and target area in order to hit the target from different directions. Since the path length from the Master Oscillator to the target is different for each beamline, optics are used to delay the light in order to ensure that they all reach the center within a few picoseconds of each other.
One of the last steps before reaching the target chamber is to convert the infrared (IR) light at 1053 nm into the ultraviolet (UV) at 351 nm in a device known as a frequency converter. These are made of thin sheets (about 1 cm thick) cut from a single crystal of potassium dihydrogen phosphate. When the 1053 nm (IR) light passes through the first of two of these sheets, frequency doubling converts a large fraction of the light into 527 nm light (green). On passing through the second sheet, sum-frequency mixing combines much of the 527 nm light with the remaining 1053 nm light to produce 351 nm (UV) light. Infrared (IR) light is much less effective than UV at heating the targets, because IR couples more strongly with hot electrons that absorb a considerable amount of energy and interfere with compression. The conversion process can reach peak efficiencies of about 80 percent for a laser pulse that has a flat temporal shape, but the temporal shape needed for ignition varies significantly over the duration of the pulse. The actual conversion process is about 50 percent efficient, reducing the delivered energy to a nominal 1.8 MJ.
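The ~500 TW peak power quoted earlier follows, to order of magnitude, from dividing the delivered UV energy by the pulse duration. The Python check below is illustrative only; the ~3.6 ns effective duration is an assumed figure consistent with the "few nanoseconds" mentioned in the text, not a measured value, and a real shaped pulse has a peak power higher than its average:

```python
# Order-of-magnitude check: power ~ energy / pulse duration.
energy_J = 1.8e6          # ~1.8 MJ of UV light delivered to the target (nominal value above)
duration_s = 3.6e-9       # assumed effective pulse duration (a few nanoseconds)

power_W = energy_J / duration_s
print(f"Average power over the pulse: {power_W:.1e} W (~{power_W / 1e12:.0f} TW)")
```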
As of 2010, one important aspect of any ICF research project was ensuring that experiments could be carried out on a timely basis. Previous devices generally had to cool down for many hours to allow the flashlamps and laser glass to regain their shapes after firing (due to thermal expansion), limiting their use to one or fewer firings per day. One of the goals for NIF has been to reduce this time to less than four hours, in order to allow 700 firings a year.
Other concepts
NIF is also exploring new types of targets. Previous experiments generally used plastic ablators, typically polystyrene (CH). NIF targets are constructed by coating a plastic form with a layer of sputtered beryllium or beryllium–copper alloy, and then oxidizing the plastic out of the center. Beryllium targets offer higher implosion efficiencies from x-ray inputs.
Although NIF was primarily designed as an indirect drive device, the energy in the laser as of 2008 was high enough to be used as a direct drive system, where the laser shines directly on the target without conversion to x-rays. The power delivered by NIF UV rays was estimated to be more than enough to cause ignition, allowing fusion energy gains of about 40x, somewhat higher than the indirect drive system.
As of 2005, scaled implosions on the OMEGA laser and computer simulations showed NIF to be capable of ignition using a polar direct drive (PDD) configuration where the target was irradiated directly by the laser only from the top and bottom, without changes to the NIF beamline layout.
As of 2005, other targets, called Saturn targets, were specifically designed to reduce the anisotropy and improve the implosion. They feature a small plastic ring around the "equator" of the target, which becomes a plasma when hit by the laser. Some of the laser light is refracted through this plasma back towards the equator of the target, evening out the heating. NIF ignition with gains of just over 35 is thought to be possible with these targets, producing results almost as good as those of the fully symmetric direct drive approach.
History
Impetus, 1957
The history of ICF at Lawrence Livermore National Laboratory in Livermore, California, began with physicist John Nuckolls, who started considering the problem after a 1957 meeting arranged by Edward Teller there. During these meetings, the idea later known as PACER emerged. PACER envisioned the explosion of small hydrogen bombs in large caverns to generate steam that would be converted into electrical power. After identifying problems with this approach, Nuckolls wondered how small a bomb could be made that would still generate net positive power.
A typical hydrogen bomb has two parts: a plutonium-based fission bomb known as the primary, and a cylindrical arrangement of fusion fuels known as the secondary. The primary releases x-rays, which are trapped within the bomb casing. They heat and compress the secondary until it ignites. The secondary consists of lithium deuteride (LiD) fuel, which requires an external neutron source. This is normally in the form of a small plutonium "spark plug" in the center of the fuel. Nuckolls's idea was to explore how small the secondary could be made, and what effects this would have on the energy needed from the primary to cause ignition. The simplest change is to replace the LiD fuel with DT gas, removing the need for the spark plug. This allows secondaries of any size – as the secondary shrinks, so does the amount of energy needed for ignition. At the milligram level, the energy levels started to approach those available through several known devices.
By the early 1960s, Nuckolls and several other weapons designers had developed ICF's outlines. The DT fuel would be placed in a small capsule, designed to rapidly ablate when heated and thereby maximize compression and shock wave formation. This capsule would be placed within an engineered shell, the hohlraum, which acts like the bomb casing. The hohlraum did not have to be heated by x-rays; any source of energy could be used as long as it delivered enough energy to heat the hohlraum and produce x-rays. Ideally the energy source would be located some distance away, to mechanically isolate both ends of the reaction. A small atomic bomb could be used as the energy source, as in a hydrogen bomb, but ideally smaller energy sources would be used. Using computer simulations, the teams estimated that about 5 MJ of energy would be needed from the primary, generating a 1 MJ beam. To put this in perspective, a small (0.5 kt) fission primary releases about 2 TJ.
ICF program, 1970s
While Nuckolls and LLNL were working on hohlraum-based concepts, UCSD physicist Keith Brueckner was independently working on direct drive. In the early 1970s, Brueckner formed KMS Fusion to commercialize this concept. This sparked an intense rivalry between KMS and the weapons labs. Formerly ignored, ICF became a hot topic and most of the labs started ICF work. LLNL decided to concentrate on glass lasers, while other facilities studied gas lasers using carbon dioxide (e.g. ANTARES, Los Alamos National Laboratory) or KrF (e.g. Nike laser, Naval Research Laboratory).
Throughout these early stages, much of the understanding of the fusion process was the result of computer simulations, primarily LASNEX. LASNEX simplified the reaction to a 2-dimensional approximation, which was all that was possible with the available computing power. LASNEX estimated that laser drivers in the kJ range could reach low gain, which was just within the state of the art. This led to the Shiva laser project, which was completed in 1977. Shiva fell far short of its goals. The densities reached were thousands of times smaller than predicted. This was traced to issues with the way the laser delivered heat to the target. Most of its energy went into heating electrons rather than the fuel mass as a whole. Further experiments and simulations demonstrated that this process could be dramatically improved by using shorter wavelengths.
Further upgrades to the simulation programs, accounting for these effects, predicted that a different design would reach ignition. This system took the form of the 20-beam 200 kJ Nova laser. During the construction phase, Nuckolls found an error in his calculations, and an October 1979 review chaired by former LLNL director John S. Foster Jr. confirmed that Nova would not reach ignition. It was modified into a smaller 10-beam design that converted the light to 351 nm to increase coupling efficiency. Nova was able to deliver about 30 kJ of UV laser energy, about half of what was expected, primarily due to optical damage to the final focusing optics. Even at those levels, it was clear that the predictions for fusion production were wrong; even at the limited powers available, fusion yields were far below predictions.
Halite and Centurion, 1978
Each experiment showed that the energy needed to reach ignition continued to be underestimated. The Department of Energy (DOE) decided that direct experimentation was the best way to settle the issue, and in 1978 they started a series of underground experiments at the Nevada Test Site that used small nuclear bombs to illuminate ICF targets. The tests were known as Halite (LLNL) and Centurion (LANL).
The basic concept behind the tests had been developed in the 1960s as a way to develop anti-ballistic missile warheads. It was found that bombs that exploded outside the atmosphere gave off bursts of X-rays that could damage an enemy warhead at long range. To test the effectiveness of this system, and to develop countermeasures to protect US warheads, the Defense Atomic Support Agency (now the Defense Threat Reduction Agency) developed a system that placed the targets at the end of long tunnels behind fast-shutting doors. The doors were timed to shut in the brief period between the arrival of the X-rays and the subsequent blast. This saved the reentry vehicle (RV) from blast damage and allowed them to be inspected.
ICF tests used the same system, replacing the RVs by hohlraums. Each test simultaneously illuminated many targets, each at a different distance from the bomb, to test the effect of varying illumination. Another question was how large the fuel assembly had to be in order for the fuel to self-heat from the fusion reactions and thus reach ignition. Initial data were available by mid-1984, and the testing ceased in 1988. Ignition was achieved for the first time during these tests. The amount of energy and the size of the fuel targets needed to reach ignition were far higher than predicted. During this same period, experiments began on Nova using similar targets to understand their behavior under laser illumination, allowing direct comparison against the bomb tests.
This data suggested that about 10 MJ of X-ray energy would be needed to reach ignition, far beyond what had earlier been calculated. If those X-rays are created by beaming an IR laser to a hohlraum, as in Nova or NIF, then dramatically more laser energy would be required, on the order of 100 MJ.
This triggered a debate in the ICF community. One group suggested an attempt to build a laser of this power; Leonardo Mascheroni and Claude Phipps designed a new type of hydrogen fluoride laser pumped by high-energy electrons that could reach the 100 MJ threshold. Others used the same data and new versions of their computer simulations to suggest that careful shaping of the laser pulse and more beams spread more evenly could achieve ignition with a laser of between 5 and 10 MJ. (John Lindl, Development of the Indirect-Drive Approach to Inertial Confinement Fusion and the Target Physics Basis for Ignition and Gain, Physics of Plasmas Vol. 2, No. 11, November 1995; pp. 3933–4024)
These results prompted the DOE to request a custom military ICF facility named the "Laboratory Microfusion Facility" (LMF). LMF would use a driver on the order of 10 MJ, delivering fusion yields of between 100 and 1,000 MJ. A 1989–1990 review of this concept by the National Academy of Sciences suggested that LMF was too ambitious, and that fundamental physics needed to be further explored. They recommended further experiments before attempting to move to a 10 MJ system. Nevertheless, the authors noted, "Indeed, if it did turn out that a 100 MJ driver were required for ignition and gain, one would have to rethink the entire approach to, and rationale for, ICF".
Laboratory Microfusion Facility and Nova Upgrade, 1990
As of 1992, the Laboratory Microfusion Facility was estimated to cost about $1 billion. LLNL initially submitted a design with a 5 MJ 350 nm (UV) driver that would be able to reach about 200 MJ yield, which was enough to attain the majority of the LMF goals. That program was estimated to cost about $600 million in FY 1989 dollars. An additional $250 million would pay to upgrade it to a full 1,000 MJ. The total would surpass $1 billion to meet all of the goals requested by the DOE.
The NAS review led to a reevaluation of these plans, and in July 1990, LLNL responded with the Nova Upgrade, which would reuse most of Nova, along with the adjacent Shiva facility. The resulting system would be much lower power than the LMF concept, with a driver of about 1 MJ. The new design included features that advanced the state of the art in the driver section, including multi-pass in the main amplifiers, and 18 beamlines (up from 10) that were split into 288 "beamlets" as they entered the target area. The plans called for the installation of two main banks of beamlines, one in the existing Nova beamline room, and the other in the older Shiva building next door, extending through its laser bay and target area into an upgraded Nova target area. The lasers would deliver about 500 TW in a 4 ns pulse. The upgrades were expected to produce fusion yields of between 2 and 10 MJ. Initial estimates from 1992 put construction costs at around $400 million, with construction taking place from 1995 to 1999.
NIF, 1994
Throughout this period, the ending of the Cold War led to dramatic changes in defense funding and priorities. The political support for nuclear weapons declined and arms agreements led to a reduction in warhead count and less design work. The US was faced with the prospect of losing a generation of nuclear weapon designers able to maintain existing stockpiles, or design new weapons. At the same time, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was signed in 1996, which would ban all criticality testing and made the development of newer generations of nuclear weapons more difficult.
Out of these changes came the Stockpile Stewardship and Management Program (SSMP), which, among other things, included funds for the development of methods to design and build nuclear weapons without having to test them explosively. In a series of meetings that started in 1995, an agreement formed between the labs to divide up SSMP efforts. An important part of this would be confirmation of computer models using low-yield ICF experiments. The Nova Upgrade was too small to use for these experiments. A redesign matured into NIF in 1994. The estimated cost of the project remained almost $1 billion, with completion in 2002.
In spite of the agreement, the large project cost combined with the ending of similar projects at other labs resulted in critical comments by scientists at other labs, Sandia National Laboratories in particular. In May 1997, Sandia fusion scientist Rick Spielman publicly stated that NIF had "virtually no internal peer review on the technical issues" and that "Livermore essentially picked the panel to review themselves". A retired Sandia manager, Bob Puerifoy, was even more blunt than Spielman: "NIF is worthless ... it can't be used to maintain the stockpile, period". Ray Kidder, one of the original developers of the ICF concept at LLNL, was also highly critical. He stated in 1997 that its primary purpose was to "recruit and maintain a staff of theorists and experimentalists" and that while some of the experimental data would prove useful for weapons design, differences in the experimental setup limit their relevance. "Some of the physics is the same; but the details, 'wherein the devil lies,' are quite different. It would therefore also be wrong to assume that NIF will be able to support for the long term a staff of weapons designers and engineers with detailed design competence comparable to that of those now working at the weapons design laboratories."
In 1997, Victor Reis, assistant secretary for Defense Programs within DOE and SSMP chief architect defended the program telling the U.S. House Armed Services Committee that NIF was "designed to produce, for the first time in a laboratory setting, conditions of temperature and density of matter close to those that occur in the detonation of nuclear weapons. The ability to study the behavior of matter and the transfer of energy and radiation under these conditions is key to understanding the basic physics of nuclear weapons and predicting their performance without underground nuclear testing." In 1998, two JASON panels, composed of scientific and technical experts, stated that NIF is the most scientifically valuable of all programs proposed for science-based stockpile stewardship.
Despite the initial criticism, Sandia, as well as Los Alamos, supported the development of many NIF technologies, and both laboratories later became partners with NIF in the National Ignition Campaign.
Construction of first unit, 1994–1998
Work on the NIF started with a single beamline demonstrator, Beamlet. Beamlet successfully operated between 1994 and 1997. It was then sent to Sandia National Laboratories as a light source in their Z machine. A full-sized demonstrator then followed, in AMPLAB, which started operations in 1997. The official groundbreaking on the main NIF site was on May 29, 1997.
At the time, the DOE was estimating that the NIF would cost approximately $1.1 billion and another $1 billion for related research, and would be complete as early as 2002. Later in 1997 the DOE approved an additional $100 million in funding and pushed the operational date back to 2004. As late as 1998 LLNL's public documents stated the overall price was $1.2 billion, with the first eight lasers coming online in 2001 and full completion in 2003.
The facility's physical scale alone made the construction project challenging. By the time the "conventional facility" (the shell for the laser) was complete in 2001, more than 210,000 cubic yards of soil had been excavated, more than 73,000 cubic yards of concrete had been poured, 7,600 tons of reinforcing steel rebar had been placed, and more than 5,000 tons of structural steel had been erected. To isolate the laser system from vibration, the foundation of each laser bay was made independent of the rest of the structure. Three-foot-thick, 420-foot-long and 80-foot-wide slabs required continuous concrete pours to achieve their specifications.
In November 1997, an El Niño storm dumped two inches of rain in two hours, flooding the NIF site with 200,000 gallons of water just three days before the scheduled foundation pour. The earth was so soaked that the framing for the retaining wall sank six inches, forcing the crew to disassemble and reassemble it. Construction was halted in December 1997, when 16,000-year-old mammoth bones were discovered. Paleontologists were called in to remove and preserve the bones, delaying construction by four days.
A variety of research and development, technology and engineering challenges arose, such as creating an optics fabrication capability to supply the laser glass for NIF's 7,500 meter-sized optics. State-of-the-art optics measurement, coating and finishing techniques were developed to withstand NIF's high-energy lasers, as were methods for amplifying the laser beams to the needed energy levels. Continuous-pour glass, rapid-growth crystals, innovative optical switches, and deformable mirrors were among NIF's technology innovations developed.
Sandia, with extensive experience in pulsed power delivery, designed the capacitor banks used to feed the flashlamps, completing the first unit in October 1998. To everyone's surprise, the Pulsed Power Conditioning Modules (PCMs) suffered capacitor failures that led to explosions. This required a redesign of the module to contain the debris, but since the concrete had already been poured, this left the new modules so tightly packed that in-place maintenance was impossible. Another redesign followed, this time allowing the modules to be removed from the bays for servicing. Continuing problems further delayed operations, and in September 1999, an updated DOE report stated that NIF required up to $350 million more and that completion would occur only in 2006.
Re-baseline and GAO report, 1999–2000
Throughout this period the problems with NIF were not reported up the management chain. In 1999 then Secretary of Energy Bill Richardson reported to Congress that NIF was on time and budget, as project leaders had reported. In August that year it was revealed that neither claim was close to the truth. As the Government Accountability Office (GAO) would later note, "Furthermore, the Laboratory's former laser director, who oversaw NIF and all other laser activities, assured Laboratory managers, DOE, the university, and the Congress that the NIF project was adequately funded and staffed and was continuing on cost and schedule, even while he was briefed on clear and growing evidence that NIF had serious problems". A DOE Task Force reported to Richardson in January 2000 that "organizations of the NIF project failed to implement program and project management procedures and processes commensurate with a major research and development project... [and that] ...no one gets a passing grade on NIF Management: not the DOE's office of Defense Programs, not the Lawrence Livermore National Laboratory and not the University of California".
Given the budget problems, the US Congress requested an independent GAO review. They returned a critical report in August 2000 estimating that the cost was likely to be $3.9 billion, including R&D, and that the facility was unlikely to be completed anywhere near on time. (GAO Report Cites New NIF Cost Estimate, FYI, American Institute of Physics, Number 101: August 30, 2000. Retrieved on May 7, 2008.) The report blamed management problems for the overruns, and criticized the program for failing to budget money for target fabrication, including it in operational costs instead of development.
In 2000, the DOE began a comprehensive "rebaseline review" because of the technical delays and project management issues, and adjusted the schedule and budget accordingly. John Gordon, National Nuclear Security Administrator, stated "We have prepared a detailed bottom-up cost and schedule to complete the NIF project... The independent review supports our position that the NIF management team has made significant progress and resolved earlier problems". The report revised their budget estimate to $2.25 billion, not including related R&D, which pushed it to $3.3 billion total, and pushed back the completion date to 2006 with the first lines coming online in 2004. (More on New NIF Cost and Schedule, FYI, American Institute of Physics, Number 65, June 15, 2000. Retrieved on May 7, 2008.) A follow-up report the next year pushed the budget to $4.2 billion, and the completion date to 2008.
The project got a new management team in September 1999 (Campbell Investigation Triggers Livermore Management Changes, Fusion Power Report, Sep 1, 1999, http://www.thefreelibrary.com/Campbell+Investigation+Triggers+Livermore+Management+Changes.-a063375944, retrieved July 13, 2012), headed by George Miller, who was named acting associate director for lasers. Ed Moses, former head of the Atomic Vapor Laser Isotope Separation (AVLIS) program at LLNL, became NIF project manager. Thereafter, NIF management received many positive reviews and the project met the budgets and schedules approved by Congress. In October 2010, the project was named "Project of the Year" by the Project Management Institute, which cited NIF as a "stellar example of how properly applied project management excellence can bring together global teams to deliver a project of this scale and importance efficiently."
Tests and construction completion, 2003–2009
In May 2003, the NIF achieved "first light" on a bundle of four beams, producing a 10.4 kJ IR pulse in a single beamline. In 2005 the first eight beams produced 153 kJ of IR, eclipsing OMEGA as the planet's highest energy laser (per pulse). By January 2007 all of the LRUs in the Master Oscillator Room (MOOR) were complete and the computer room had been installed. By August 2007, 96 laser lines were completed and commissioned, and "A total infrared energy of more than 2.5 megajoules has now been fired. This is more than 40 times what the Nova laser typically operated at the time it was the world's largest laser".
In 2005, an independent review by the JASON Defense Advisory Group was generally positive, but concluded that "The scientific and technical challenges in such a complex activity suggest that success in the early attempts at ignition in 2010, while possible, is unlikely". On January 26, 2009, the final line replaceable unit (LRU) was installed, unofficially completing construction. On February 26, 2009, NIF fired all 192 laser beams into the target chamber. On March 10, 2009, NIF became the first laser to break the megajoule barrier, delivering 1.1 MJ of UV light, known as 3ω (from third-harmonic generation), to the target chamber center in a shaped ignition pulse. The main laser delivered 1.952 MJ of IR.
Operations, 2009–2012
On May 29, 2009, the NIF was dedicated in a ceremony attended by thousands. The first laser shots into a hohlraum target were fired in late June.
Buildup to main experiments, 2010
On January 28, 2010, NIF reported the delivery of a 669 kJ pulse to a gold hohlraum, breaking records for laser power delivery, and analysis suggested that suspected interference by generated plasma would not be a problem in igniting a fusion reaction. Due to the size of the test hohlraums, laser/plasma interactions produced plasma-optics gratings, acting like tiny prisms, which produced symmetric X-ray drive on the capsule inside the hohlraum.
After gradually altering the wavelength of the laser, scientists compressed a spherical capsule evenly and heated it to 3.3 million kelvins (285 eV). The capsule contained cryogenically cooled gas, acting as a substitute for the deuterium and tritium fuel capsules to be used later. Plasma Physics Group Leader Siegfried Glenzer said that they could maintain the precise fuel layers needed in the lab, but not yet within the laser system.
As of January 2010, the NIF reached 1.8 megajoules. The target chamber then needed to be equipped with shields to block neutrons.
National Ignition Campaign 2010–2012
With the main construction complete, NIF started its National Ignition Campaign (NIC) to reach ignition. At the time, articles appeared in science magazines stating that ignition was imminent. Scientific American opened a 2010 review article with the statement "Ignition is close now. Within a year or two..."
The first test was carried out on October 8, 2010, at slightly over 1 MJ. However, problems slowed the drive toward ignition-level laser energies in the 1.4–1.5 MJ range.
One problem was the potential for damage from overheating due to a greater concentration of energy on optical components. Other issues included problems layering the fuel inside the target, and minute quantities of dust on the capsule surface.
The power level continued to increase and targets became more sophisticated. Then minute amounts of water vapor appeared in the target chamber and froze to the windows on the ends of the hohlraums, causing an asymmetric implosion. This was solved by adding a second layer of glass on either end, in effect creating a storm window.
Shots halted from February to April 2011, to conduct SSMP materials experiments. Then, NIF was upgraded, improving diagnostic and measurement instruments. The Advanced Radiographic Capability (ARC) system was added, which uses 4 of the NIF's 192 beams as a backlight for imaging the implosion sequence. ARC is essentially a petawatt-class laser with peak power exceeding a quadrillion (1015) watts. It is designed to produce brighter, more penetrating, higher-energy x rays. ARC became the world's highest-energy short-pulse laser, capable of creating picosecond-duration laser pulses to produce energetic x rays in the range of 50–100 keV.
NIC runs restarted in May 2011 with the goal of more precisely timing the four laser shock waves that compress the fusion target.
In January 2012, Mike Dunne, director of NIF's laser fusion energy program, predicted that ignition would be achieved at NIF by October. In the same month, the NIF fired a record high 57 shots. On March 15 NIF produced a laser pulse with 411 TW of peak power. On July 5, it produced a shorter pulse of 1.85 MJ and increased power of 500 TW.
DOE Report, July 19, 2012
NIC was periodically reviewed. The 6th review, was published on July 19, 2012. The report praised the quality of the installation: lasers, optics, targets, diagnostics, and operations. However:
The integrated conclusion based on this extensive period of experimentation, however, is that considerable hurdles must be overcome to reach ignition or the goal of observing unequivocal alpha heating. Indeed the reviewers note that given the unknowns with the present 'semi-empirical' approach, the probability of ignition before the end of December is extremely low and even the goal of demonstrating unambiguous alpha heating is challenging.
Further, the report expressed deep concerns that the gaps between observed performance and simulation codes implied that the current codes were of limited utility. Specifically, they found a lack of predictive ability of the radiation drive to the capsule and inadequately modeled laser–plasma interactions. Pressure was reaching only one half to one third of that required for ignition, far below the predicted values. The memo discussed the mixing of ablator material and capsule fuel, likely due to hydrodynamic instabilities in the ablator's outer surface.
The report suggested using a thicker ablator, although this would increase its inertia. To keep the required implosion speed, they proposed that the NIF energy be increased to 2 MJ. It questioned whether or not the energy was sufficient to compress a large enough capsule to avoid the mix limit and reach ignition. The report concluded that ignition within the calendar year 2012 was 'highly unlikely'.
NIC officially ended on September 30, 2012. Media reports suggested that NIF would shift its focus toward materials research.
In 2008, LLNL began the Laser Inertial Fusion Energy program (LIFE) to explore ways to use NIF technologies as the basis for a commercial power plant design. The focus was on pure fusion devices, incorporating technologies developed in parallel with NIF that would greatly improve the performance of the design. In April 2014, LIFE ended.
Fuel gain breakeven, 2013
A NIF fusion shot on September 27, 2013, produced more energy than was absorbed by the deuterium–tritium fuel. This has been confused with having reached "scientific breakeven", defined as the fusion energy exceeding the laser input energy. Using this definition gives 14.4 kJ out and 1.8 MJ in, a ratio of 0.008.
Stockpile experiments, 2013–2015
In 2013, NIF shifted focus to materials and weapons research. Experiments beginning in FY 2015 used plutonium targets. Plutonium shots simulate the compression of the primary in a nuclear bomb by high explosives, which had not seen direct testing since the CTBT was signed. Plutonium use ranged from less than a milligram to 10 milligrams.
In FY 2014, NIF performed 191 shots, slightly more than one every two days. As of April 2015 NIF was on track to meet its goal of 300 laser shots in FY 2015.
Back to fusion, 2016–present
On January 28, 2016, NIF successfully executed its first gas pipe experiment intended to study the absorption of large amounts of laser light within long targets relevant to high-gain magnetized liner inertial fusion (MagLIF). In order to investigate key aspects of the propagation, stability, and efficiency of laser energy coupling at full scale for high-gain MagLIF target designs, a single quad of NIF was used to deliver 30 kJ of energy to a target during a 13 nanosecond shaped pulse. Data return was favorable.
In 2018, improved control of compression asymmetry was demonstrated in a shot with an output of 1.9 × 10^16 neutrons, corresponding to 0.054 MJ of fusion energy released by a 1.5 MJ laser pulse.
Burning plasma achieved, 2021
Experiments in 2020 and 2021 yielded the world's first burning plasmas, in which most of the plasma heating came from nuclear fusion reactions. This result was followed on August 8, 2021 by the world's first ignited plasma, in which the fusion heating was sufficient to sustain the thermonuclear reaction. It produced excess neutrons consistent with a short-lived chain reaction of around 100 trillionths of a second.
The fusion energy yield of the 2021 experiment was estimated to be 70% of the laser energy incident on the plasma. This result slightly beat the former record of 67% set by the JET torus in 1997. Taking the energy efficiency of the laser itself into account, the experiment used about 477 MJ of electrical energy to get 1.8 MJ of energy into the target to create 1.3 MJ of fusion energy.
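The same kind of energy accounting can be sketched for the August 2021 shot using the figures in this paragraph (about 477 MJ of electrical energy, 1.8 MJ delivered to the target, 1.3 MJ of fusion yield); the Python fragment below is purely illustrative:

```python
electrical_MJ = 477.0   # electrical energy used to fire the laser
on_target_MJ = 1.8      # laser energy delivered to the target
fusion_MJ = 1.3         # fusion energy produced

print(f"Target gain:              {fusion_MJ / on_target_MJ:.2f}  (about 70% of laser energy)")
print(f"Laser wall-plug ratio:    {on_target_MJ / electrical_MJ:.2%}")
print(f"Overall energy balance:   {fusion_MJ / electrical_MJ:.2%} of electrical input")
```

The low wall-plug ratio of the flashlamp-pumped laser is the main reason the overall balance remains far below unity even when the target gain approaches one.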
Several design changes enabled this result. The material of the capsule shell was changed to diamond to increase the absorbance of secondary x-rays created by the laser burst, thus increasing the efficacy of the collapse, and its surface was further smoothed. The size of the hole in the capsule used to inject fuel was reduced. The holes in the gold cylinder surrounding the capsule were shrunk to reduce energy loss. The laser pulse was extended.
Scientific breakeven achieved, 2022
The NIF became the first fusion experiment to achieve scientific breakeven on December 5, 2022, with an experiment producing 3.15 megajoules of energy from a 2.05 megajoule input of laser light for an energy gain of about 1.5. Charging the laser consumed "well above 400 megajoules". In a public announcement on December 13, Secretary of Energy Jennifer Granholm announced the facility had achieved ignition. While this was often characterized as a "net energy gain" from fusion, this was only true with respect to the energy delivered by the laser; reports sometimes omitted the roughly 300 MJ of electrical energy required to fire the laser.
The feat required the use of a slightly thicker and smoother capsule surrounding the fuel and a 2.05 MJ laser (up from 1.9 MJ in 2021), yielding 3.15 MJ, a 54% surplus. They also redistributed the energy among the split laser beams, which produced a more symmetrical (spherical) implosion.
The NIF achieved breakeven for a second time on July 30, 2023, yielding 3.88 MJ, an 89% surplus. At least four of six shots performed after the first successful one in December 2022 achieved breakeven. These successes led the DOE to fund three additional research centers. Lawrence Livermore planned to raise laser energy to 2.2 MJ per shot through upgraded optics and lasers, a level first reached in an experiment on October 30, 2023.
Similar projects
Some similar experimental ICF projects are:
Laser Mégajoule (LMJ)
Nike laser
High Power laser Energy Research facility (HiPER)
Laboratory for Laser Energetics (LLE)
Magnetized liner inertial fusion (MagLIF)
Shenguang-II High Power Laser
In popular culture
The NIF was used as the set for the starship Enterprise's warp core in the 2013 movie Star Trek Into Darkness.
See also
Z Pulsed Power Facility
Chain reaction
HiPER
Inertial confinement fusion
ITER
Laser Mégajoule
Nuclear fusion
Nuclear reactor
Notes
References
External links
Nuclear research institutes
Lawrence Livermore National Laboratory
Laboratories in California
Research institutes in the San Francisco Bay Area
United States Department of Energy facilities
Engineering projects
Inertial confinement fusion research lasers
Nuclear stockpile stewardship
Articles containing video clips | National Ignition Facility | [
"Engineering"
] | 9,717 | [
"Nuclear research institutes",
"Nuclear organizations",
"nan"
] |
337,307 | https://en.wikipedia.org/wiki/Object%20file | An object file is a file that contains machine code or bytecode, as well as other data and metadata, generated by a compiler or assembler from source code during the compilation or assembly process. The machine code that is generated is known as object code.
The object code is usually relocatable, and not usually directly executable. There are various formats for object files, and the same machine code can be packaged in different object file formats. An object file may also work like a shared library.
The metadata that object files may include can be used for linking or debugging; it includes information to resolve symbolic cross-references between different modules, relocation information, stack unwinding information, comments, program symbols, and debugging or profiling information. Other metadata may include the date and time of compilation, the compiler name and version, and other identifying information.
The term "object program" dates from at least the 1950s:
A linker is used to combine the object code into one executable program or library pulling in precompiled system libraries as needed.
Object file formats
There are many different object file formats; originally each type of computer had its own unique format, but with the advent of Unix and other portable operating systems, some formats, such as ELF and COFF, have been defined and used on different kinds of systems.
Some systems make a distinction between formats which are directly executable and formats which require processing by the linker. For example, OS/360 and successors call the first format a load module and the second an object module. In this case the files have entirely different formats. DOS and Windows also have different file formats for executable files and object files, such as Portable Executable for executables and COFF for object files in 32-bit and 64-bit Windows.
Unix and Unix-like systems have used the same format for executable and object files, starting with the original a.out format. Some formats can contain machine code for different processors, with the correct one chosen by the operating system when the program is loaded.
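Because the container format is independent of the machine code inside it, tools typically identify an object or executable file by its leading "magic" bytes rather than by its name. Below is a minimal sketch of such a check; the magic values are the well-known ones for ELF, the DOS/Windows MZ stub, and Mach-O, while the helper function name is an illustrative choice.

```python
# Minimal sketch: identify a few common object/executable container formats by
# their leading magic bytes. Real tools (e.g. file(1), objdump) check far more.
import struct
import sys

def identify(path):
    with open(path, "rb") as f:
        head = f.read(20)

    if head.startswith(b"\x7fELF"):
        # e_machine is a 16-bit field at offset 18; EI_DATA (byte 5 of e_ident)
        # says whether the rest of the header is little- or big-endian.
        endian = "<" if head[5] == 1 else ">"
        (e_machine,) = struct.unpack_from(endian + "H", head, 18)
        return f"ELF (e_machine={e_machine}; e.g. 62 = x86-64, 183 = AArch64)"
    if head.startswith(b"MZ"):
        return "DOS/Windows executable (MZ stub; a PE header usually follows)"
    if head[:4] in (b"\xfe\xed\xfa\xce", b"\xce\xfa\xed\xfe",
                    b"\xfe\xed\xfa\xcf", b"\xcf\xfa\xed\xfe"):
        return "Mach-O (32- or 64-bit, either byte order)"
    if head[:4] == b"\xca\xfe\xba\xbe":
        return "Mach-O fat/universal binary (same magic as Java class files)"
    return "unknown"

if __name__ == "__main__":
    print(identify(sys.argv[1]))
```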
The design and/or choice of an object file format is a key part of overall system design. It affects the performance of the linker and thus programmer turnaround while a program is being developed. If the format is used for executables, the design also affects the time programs take to begin running, and thus the responsiveness for users.
The GNU Project's Binary File Descriptor library (BFD library) provides a common API for the manipulation of object files in a variety of formats.
Absolute files
Many early computers and small microcomputers support only an absolute object format. Programs are not relocatable; they need to be assembled or compiled to execute at specific, predefined addresses. The file contains no relocation or linkage information. These files can be loaded into read/write memory, or stored in read-only memory. For example, the Motorola 6800 MIKBUG monitor contains a routine to read an absolute object file (SREC format) from paper tape. DOS COM files are a more recent example of absolute object files.
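Absolute formats of this kind are simple enough to decode by hand. Below is a minimal sketch of a parser for a Motorola S-record (SREC) "S1" data record; the example record is constructed for illustration rather than taken from an actual MIKBUG tape.

```python
# Minimal sketch: decode a Motorola S-record "S1" data record (16-bit address).
# Layout: 'S1' + byte count + 2-byte address + data bytes + checksum, all in hex.
# The checksum is the one's complement of the low byte of the sum of the count,
# address and data bytes.

def parse_s1(record):
    assert record.startswith("S1"), "only S1 (16-bit address) records handled here"
    raw = bytes.fromhex(record[2:])
    count, body, checksum = raw[0], raw[1:-1], raw[-1]
    assert count == len(raw) - 1, "byte count mismatch"
    assert (~(count + sum(body))) & 0xFF == checksum, "checksum mismatch"
    address = int.from_bytes(body[:2], "big")
    return address, body[2:]

# Constructed example: five data bytes ("Hello") loaded at address 0x0000.
addr, data = parse_s1("S108000048656C6C6F03")
print(hex(addr), data)   # 0x0 b'Hello'
```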
Segmentation
Most object file formats are structured as separate sections of data, each section containing a certain type of data. These sections are known as "segments" due to the term "memory segment", which was previously a common form of memory management. When a program is loaded into memory by a loader, the loader allocates various regions of memory to the program. Some of these regions correspond to sections of the object file, and thus are usually known by the same names. Others, such as the stack, only exist at run time. In some cases, relocation is done by the loader (or linker) to specify the actual memory addresses. However, for many programs or architectures, relocation is not necessary, due to being handled by the memory management unit or by position-independent code. On some systems the segments of the object file can then be copied (paged) into memory and executed, without needing further processing. On these systems, this may be done lazily, that is, only when the segments are referenced during execution, for example via a memory-mapped file backed by the object file.
Types of data supported by typical object file formats:
Header (descriptive and control information)
Code segment ("text segment", executable code)
Data segment (initialized static variables)
Read-only data segment (rodata, initialized static constants)
BSS segment (uninitialized static data, both variables and constants)
External definitions and references for linking
Relocation information
Dynamic linking information
Debugging information
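These sections can be inspected directly in any ELF object file, because the format fixes the layout of the file header and of each section header. The following sketch, restricted for brevity to 64-bit little-endian ELF files, walks the section header table using only the standard struct module; real tools such as readelf or objdump handle every variant and far more detail.

```python
# Minimal sketch: list the sections of a 64-bit little-endian ELF object file.
# Only this common case is handled; readelf/objdump cover 32-bit, big-endian
# and many special cases.
import struct
import sys

def list_sections(path):
    with open(path, "rb") as f:
        data = f.read()

    assert data[:4] == b"\x7fELF" and data[4] == 2 and data[5] == 1, \
        "not a 64-bit little-endian ELF file"

    # ELF64 header fields after the 16-byte e_ident block.
    (e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags,
     e_ehsize, e_phentsize, e_phnum, e_shentsize, e_shnum,
     e_shstrndx) = struct.unpack_from("<HHIQQQIHHHHHH", data, 16)

    # Each Elf64_Shdr is e_shentsize bytes; parse them all.
    sections = []
    for i in range(e_shnum):
        off = e_shoff + i * e_shentsize
        (sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size,
         sh_link, sh_info, sh_addralign, sh_entsize) = struct.unpack_from(
            "<IIQQQQIIQQ", data, off)
        sections.append((sh_name, sh_type, sh_offset, sh_size))

    # Section names live in the section-header string table (index e_shstrndx).
    _, _, strtab_off, strtab_size = sections[e_shstrndx]
    strtab = data[strtab_off:strtab_off + strtab_size]

    for sh_name, sh_type, _, sh_size in sections:
        name = strtab[sh_name:strtab.index(b"\x00", sh_name)].decode()
        print(f"{name or '<null>':<20} type={sh_type:<3} size={sh_size}")

if __name__ == "__main__":
    list_sections(sys.argv[1])
```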
Segments in different object files may be combined by the linker according to rules specified when the segments are defined. Conventions exist for segments shared between object files; for instance, in DOS there are different memory models that specify the names of special segments and whether or not they may be combined.
The debugging data format of debugging information may either be an integral part of the object file format, as in COFF, or a semi-independent format which may be used with several object formats, such as stabs or DWARF.
See also
OS/360 Object File Format
Intel hexadecimal object file format (typically with file extension .HEX, but sometimes also with .OBJ)
Object Module Format (ICL) (OMF for ICL VME)
Object Module Format (Intel) (OMF for Intel 8080/8085, OBJ for Intel 8086)
Mach-O
References
Executable file formats
Compiler construction
Computer libraries
Programming language implementation | Object file | [
"Technology"
] | 1,250 | [
"IT infrastructure",
"Computer libraries"
] |
337,353 | https://en.wikipedia.org/wiki/Safety%20data%20sheet | A safety data sheet (SDS), material safety data sheet (MSDS), or product safety data sheet (PSDS) is a document that lists information relating to occupational safety and health for the use of various substances and products. SDSs are a widely used type of fact sheet used to catalogue information on chemical species including chemical compounds and chemical mixtures. SDS information may include instructions for the safe use and potential hazards associated with a particular material or product, along with spill-handling procedures. The older MSDS formats could vary from source to source within a country depending on national requirements; however, the newer SDS format is internationally standardized.
An SDS for a substance is not primarily intended for use by the general consumer, focusing instead on the hazards of working with the material in an occupational setting. There is also a duty to properly label substances on the basis of physico-chemical, health, or environmental risk. Labels often include hazard symbols such as the European Union standard symbols. The same product (e.g. paints sold under identical brand names by the same company) can have different formulations in different countries. The formulation and hazards of a product using a generic name may vary between manufacturers in the same country.
Globally Harmonized System
The Globally Harmonized System of Classification and Labelling of Chemicals contains a standard specification for safety data sheets. The SDS follows an internationally agreed 16-section format; for substances in particular, the SDS should be accompanied by an annex containing the exposure scenarios of that substance. The 16 sections are:
SECTION 1: Identification of the substance/mixture and of the company/undertaking
1.1. Product identifier
1.2. Relevant identified uses of the substance or mixture and uses advised against
1.3. Details of the supplier of the safety data sheet
1.4. Emergency telephone number
SECTION 2: Hazards identification
2.1. Classification of the substance or mixture
2.2. Label elements
2.3. Other hazards
SECTION 3: Composition/information on ingredients
3.1. Substances
3.2. Mixtures
SECTION 4: First aid measures
4.1. Description of first aid measures
4.2. Most important symptoms and effects, both acute and delayed
4.3. Indication of any immediate medical attention and special treatment needed
SECTION 5: Firefighting measures
5.1. Extinguishing media
5.2. Special hazards arising from the substance or mixture
5.3. Advice for firefighters
SECTION 6: Accidental release measures
6.1. Personal precautions, protective equipment and emergency procedures
6.2. Environmental precautions
6.3. Methods and material for containment and cleaning up
6.4. Reference to other sections
SECTION 7: Handling and storage
7.1. Precautions for safe handling
7.2. Conditions for safe storage, including any incompatibilities
7.3. Specific end use(s)
SECTION 8: Exposure controls/personal protection
8.1. Control parameters
8.2. Exposure controls
SECTION 9: Physical and chemical properties
9.1. Information on basic physical and chemical properties
9.2. Other information
SECTION 10: Stability and reactivity
10.1. Reactivity
10.2. Chemical stability
10.3. Possibility of hazardous reactions
10.4. Conditions to avoid
10.5. Incompatible materials
10.6. Hazardous decomposition products
SECTION 11: Toxicological information
11.1. Information on toxicological effects
SECTION 12: Ecological information
12.1. Toxicity
12.2. Persistence and degradability
12.3. Bioaccumulative potential
12.4. Mobility in soil
12.5. Results of PBT and vPvB assessment
12.6. Other adverse effects
SECTION 13: Disposal considerations
13.1. Waste treatment methods
SECTION 14: Transport information
14.1. UN number
14.2. UN proper shipping name
14.3. Transport hazard class(es)
14.4. Packing group
14.5. Environmental hazards
14.6. Special precautions for user
14.7. Transport in bulk according to Annex II of MARPOL and the IBC Code
SECTION 15: Regulatory information
15.1. Safety, health and environmental regulations/legislation specific for the substance or mixture
15.2. Chemical safety assessment
SECTION 16: Other information
16.2. Date of the latest revision of the SDS
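Because the sixteen top-level headings above are fixed by the GHS format, the completeness of an SDS can be given a first-pass automated check. The sketch below is a minimal illustration: it uses abbreviated top-level titles and naive substring matching, whereas real SDS-authoring software performs far more thorough validation (sub-section checks, jurisdiction-specific phrases, exposure-scenario annexes).

```python
# Minimal sketch: check that the extracted text of an SDS mentions all 16
# top-level GHS/REACH section headings (abbreviated titles, naive matching).

GHS_SECTIONS = [
    "Identification", "Hazards identification",
    "Composition/information on ingredients", "First aid measures",
    "Firefighting measures", "Accidental release measures",
    "Handling and storage", "Exposure controls/personal protection",
    "Physical and chemical properties", "Stability and reactivity",
    "Toxicological information", "Ecological information",
    "Disposal considerations", "Transport information",
    "Regulatory information", "Other information",
]

def missing_sections(sds_text):
    """Return the GHS section titles not found in the SDS text (case-insensitive)."""
    text = sds_text.lower()
    return [title for title in GHS_SECTIONS if title.lower() not in text]

# Example with a deliberately incomplete document:
gaps = missing_sections("SECTION 1: Identification ... SECTION 2: Hazards identification ...")
print(f"{len(gaps)} of 16 sections missing, e.g. {gaps[:3]}")
```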
National and international requirements
Canada
In Canada, the program known as the Workplace Hazardous Materials Information System (WHMIS) establishes the requirements for SDSs in workplaces and is administered federally by Health Canada under the Hazardous Products Act, Part II, and the Controlled Products Regulations.
European Union
Safety data sheets have been made an integral part of the system of Regulation (EC) No 1907/2006 (REACH). The original requirements of REACH for SDSs have been further adapted to take into account the rules for safety data sheets of the Global Harmonised System (GHS) and the implementation of other elements of the GHS into EU legislation that were introduced by Regulation (EC) No 1272/2008 (CLP) via an update to Annex II of REACH.
The SDS must be supplied in an official language of the Member State(s) where the substance or mixture is placed on the market, unless the Member State(s) concerned provide(s) otherwise (Article 31(5) of REACH).
The European Chemicals Agency (ECHA) has published a guidance document on the compilation of safety data sheets.
Germany
In Germany, safety data sheets must be compiled in accordance with REACH Regulation No. 1907/2006. The requirements concerning national aspects are defined in the Technical Rule for Hazardous Substances (TRGS) 220, "National aspects when compiling safety data sheets". One example of a national measure mentioned in SDS section 15 is the water hazard class (WGK), which is based on the regulations governing systems for handling substances hazardous to waters (AwSV).
The Netherlands
Dutch safety data sheets are known as veiligheidsinformatiebladen or Chemiekaarten. The Chemiekaarten is a collection of safety data sheets for the most widely used chemicals. The Chemiekaarten boek is commercially available, but is also made available through educational institutions, such as the website offered by the University of Groningen.
South Africa
This section outlines the regulations governing SDSs within the South African framework. As regulations may change, the validity of the regulations mentioned here should be verified against current sources.
As globalisation increased and countries engaged in cross-border trade, the quantity of hazardous material crossing international borders grew. Recognising the detrimental effects of hazardous trade, the United Nations established a committee of experts specialising in the transportation of hazardous goods. The committee provides best practices governing the conveyance of hazardous materials and goods by land (road and rail), air and sea. These best practices are constantly updated to remain current and relevant.
Various other international bodies provide greater detail and guidance for specific modes of transportation: the International Maritime Organisation (IMO) through the International Maritime Dangerous Goods (IMDG) Code, the International Civil Aviation Organisation (ICAO) through its Technical Instructions for the safe transport of dangerous goods by air, and the International Air Transport Association (IATA), which publishes its own regulations for the transport of dangerous goods.
These guidelines prescribed by the international authorities apply to the South African land, sea and air transportation of hazardous materials and goods. In addition to these international rules and best practices, South Africa also applies common law, which is law based on custom and practice. Common law is a vital part of maintaining public order and forms the basis of case law. Case law, applying the principles of common law, consists of interpretations and decisions of statutes made by the courts. Acts of parliament are determinations and regulations made by parliament which form the foundation of statutory law; statutory laws are published in the government gazette or on the official website. Lastly, subordinate legislation consists of the bylaws issued by local authorities and authorised by parliament.
Statutory law gives effect to the Occupational Health and Safety Act of 1993 and the National Road Traffic Act of 1996. The Occupational Health and Safety Act details the necessary provisions for the safe handling and storage of hazardous materials and goods, whilst the transport act details the necessary provisions for the transportation of hazardous goods.
Relevant South African legislation includes the Hazardous Chemicals Agent regulations of 2021 under the Occupational Health and Safety Act of 1993, the Chemical Substance Act 15 of 1973, the National Road Traffic Act of 1996, and the Standards Act of 2008.
There has been selective incorporation of aspects of the Globally Harmonised System (GHS) of Classification and Labelling of Chemicals into South African legislation. At each point of the chemical value chain, there is a responsibility to manage chemicals in a safe and responsible manner. SDSs are therefore required by law: an SDS is included in the requirements of the Occupational Health and Safety Act, 1993 (Act No. 85 of 1993), Regulation 1179, dated 25 August 1995.
The categories of information supplied in the SDS are listed in SANS 11014:2010, Dangerous goods standards – Classification and information. SANS 11014:2010 supersedes the first edition, SANS 11014-1:1994, and is an identical implementation of ISO 11014:2009.
United Kingdom
In the U.K., the Chemicals (Hazard Information and Packaging for Supply) Regulations 2002 - known as CHIP Regulations - impose duties upon suppliers, and importers into the EU, of hazardous materials.
NOTE: Safety data sheets (SDS) are no longer covered by the CHIP regulations. The laws that require a SDS to be provided have been transferred to the European REACH Regulations.
The Control of Substances Hazardous to Health (COSHH) Regulations govern the use of hazardous substances in the workplace in the UK and specifically require an assessment of the use of a substance. Regulation 12 requires that an employer provides employees with information, instruction and training for people exposed to hazardous substances. This duty would be very nearly impossible without the data sheet as a starting point. It is important for employers therefore to insist on receiving a data sheet from a supplier of a substance.
The duty to supply information is not confined to informing only business users of products. SDSs for retail products sold by large DIY shops are usually obtainable on those companies' web sites.
Web sites of manufacturers and large suppliers do not always include them, even if the information is obtainable from retailers, but written or telephone requests for paper copies will usually be responded to favourably.
United Nations
The United Nations (UN) defines certain details used in SDSs such as the UN numbers used to identify some hazardous materials in a standard form while in international transit.
United States
In the U.S., the Occupational Safety and Health Administration requires that SDSs be readily available to all employees for potentially harmful substances handled in the workplace under the Hazard Communication Standard. The SDS is also required to be made available to local fire departments and local and state emergency planning officials under Section 311 of the Emergency Planning and Community Right-to-Know Act. The American Chemical Society defines Chemical Abstracts Service Registry Numbers (CAS numbers) which provide a unique number for each chemical and are also used internationally in SDSs.
Reviews of material safety data sheets by the U.S. Chemical Safety and Hazard Investigation Board have detected dangerous deficiencies.
The board's Combustible Dust Hazard Study analyzed 140 data sheets of substances capable of producing combustible dusts. None of the SDSs contained all the information the board said was needed to work with the material safely, and 41 percent failed to even mention that the substance was combustible.
As part of its study of an explosion and fire that destroyed the Barton Solvents facility in Valley Center, Kansas, in 2007, the safety board reviewed 62 material safety data sheets for commonly used nonconductive flammable liquids. As in the combustible dust study, the board found all the data sheets inadequate.
In 2012, the US adopted the 16 section Safety Data Sheet to replace Material Safety Data Sheets. This became effective on 1 December 2013. These new Safety Data Sheets comply with the Globally Harmonized System of Classification and Labeling of Chemicals (GHS). By 1 June 2015, employers were required to have their workplace labeling and hazard communication programs updated as necessary – including all MSDSs replaced with SDS-formatted documents.
SDS authoring
Many companies offer the service of collecting, or writing and revising, data sheets to ensure they are up to date and available for their subscribers or users. Some jurisdictions impose an explicit duty of care that each SDS be regularly updated, usually every three to five years. However, when new information becomes available, the SDS must be revised without delay. If a full SDS is not feasible, then a reduced workplace label should be authored.
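The three-to-five-year rule of thumb translates into a simple staleness check against the revision date recorded in section 16 of an SDS. In the sketch below, the five-year threshold and the function name are illustrative choices, not values mandated by any particular regulation.

```python
# Minimal sketch: flag SDS documents whose recorded revision date (section 16)
# is older than a chosen review interval. The 5-year threshold is illustrative.
from datetime import date

REVIEW_INTERVAL_YEARS = 5  # illustrative threshold, not a regulatory value

def needs_review(last_revision, today=None):
    """Return True if the SDS revision date is older than the review interval."""
    today = today or date.today()
    return (today - last_revision).days > REVIEW_INTERVAL_YEARS * 365

print(needs_review(date(2017, 3, 1), today=date(2024, 1, 1)))   # True  (~6.8 years old)
print(needs_review(date(2022, 6, 15), today=date(2024, 1, 1)))  # False (~1.5 years old)
```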
See also
Occupational exposure banding
References
Chemical safety
Documents
Environmental law
Industrial hygiene
Materials
Occupational safety and health
Regulation of chemicals in the European Union
Safety engineering
Toxicology | Safety data sheet | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,622 | [
"Systems engineering",
"Chemical accident",
"Regulation of chemicals in the European Union",
"Toxicology",
"Safety engineering",
"Regulation of chemicals",
"Materials",
"nan",
"Chemical safety",
"Matter"
] |
337,411 | https://en.wikipedia.org/wiki/Gertrude%20B.%20Elion | Gertrude "Trudy" Belle Elion (January 23, 1918 – February 21, 1999) was an American biochemist and pharmacologist, who shared the 1988 Nobel Prize in Physiology or Medicine with George H. Hitchings and Sir James Black for their use of innovative methods of rational drug design for the development of new drugs. This new method focused on understanding the target of the drug rather than simply using trial-and-error. Her work led to the creation of the anti-retroviral drug AZT, which was the first drug widely used against AIDS. Her well known works also include the development of the first immunosuppressive drug, azathioprine, used to fight rejection in organ transplants, and the first successful antiviral drug, acyclovir (ACV), used in the treatment of herpes infection.
Early life and education
Elion was born in New York City on January 23, 1918, to parents Robert Elion, a Lithuanian Jewish immigrant and a dentist, and Bertha Cohen, a Polish Jewish immigrant. Her family lost their wealth after the Wall Street Crash of 1929. Elion was an excellent student who graduated from Walton High School at the age of 15. When she was 15, her grandfather died of stomach cancer, and being with him during his last moments inspired Elion to pursue a career in science and medicine in college. She was Phi Beta Kappa at Hunter College, which she was able to attend for free due to her grades, graduating summa cum laude in 1937 with a degree in chemistry. Unable to find a paying research job after graduating because she was a woman, Elion worked as a secretary and high school teacher before working in an unpaid position at a chemistry lab. Eventually, she saved up enough money to attend New York University, where she earned her M.Sc. in 1941 while working as a high school teacher during the day. In an interview after receiving her Nobel Prize, she stated that she believed the sole reason she was able to further her education as a young woman was because she was able to attend Hunter College for free. Her fifteen financial aid applications for graduate school were turned down due to gender bias, so she enrolled in a secretarial school, which she attended for only six weeks before finding a job.
Unable to obtain a graduate research position, she worked as a food quality supervisor at A&P supermarkets and for a food lab in New York, testing the acidity of pickles and the color of egg yolk going into mayonnaise. She moved to a position at Johnson & Johnson that she hoped would be more promising, but that ultimately involved testing the strength of sutures. In 1944, she left to work as an assistant to George H. Hitchings at the Burroughs-Wellcome pharmaceutical company (now GlaxoSmithKline) in Tuckahoe, New York. Hitchings was using a new way of developing drugs, by intentionally imitating natural compounds instead of through trial and error. Specifically, he was interested in synthesizing antagonists to nucleic acid derivatives, with the goal that these antagonists would integrate into biological pathways. He believed that if he could trick cancer cells into accepting artificial compounds for their growth, they could be destroyed without also destroying normal cells. Elion synthesized anti-metabolites of purines, and in 1950, she developed the anti-cancer drugs tioguanine and mercaptopurine.
She pursued graduate studies at night school at New York University Tandon School of Engineering (then Brooklyn Polytechnic Institute), but after several years of long-range commuting, she was informed that she would no longer be able to continue her doctorate on a part-time basis, but would need to give up her job and go to school full-time. Elion made a critical decision: she stayed with her job and gave up the pursuit of her doctorate. She never obtained a formal Ph.D., but was later awarded an honorary Ph.D. from New York University Tandon School of Engineering (then Polytechnic University of New York) in 1989 and an honorary S.D. degree from Harvard University in 1998.
Personal life
Soon after graduating from Hunter College, Elion met Leonard Canter, an outstanding statistics student at City College of New York (CCNY). They planned to marry, but Leonard became ill. On June 25, 1941, he died from bacterial endocarditis, an infection of his heart valves. In her Nobel interview, she stated that this furthered her drive to become a research scientist and pharmacologist.
Elion never married or had children. However, her brother, whom she was close with, married and had three sons and a daughter that she took pride in being able to watch grow. She listed her hobbies as photography, travel, opera and ballet, and listening to music. After Burroughs Wellcome moved to Research Triangle Park in North Carolina, Elion moved to nearby Chapel Hill. She retired in 1983 from Burroughs Wellcome to spend more time traveling and attending the opera. She continued to make important scientific contributions after her retirement. One of her passions during this time was encouraging other women to pursue careers in science.
Gertrude Elion died in North Carolina in 1999, aged 81.
Career and research
In addition to the many jobs she held to support herself and put herself through school, Elion worked for the National Cancer Institute, the American Association for Cancer Research, and the World Health Organization, among other organizations. From 1967 to 1983, she was the head of the department of experimental therapy at Burroughs Wellcome. She officially retired from Burroughs Wellcome in 1983.
She was affiliated with Duke University as adjunct professor of pharmacology and of experimental medicine from 1971 to 1983 and research professor from 1983 to 1999. During her time at Duke, she focused on mentoring medical and graduate students. She published more than 25 papers with the students she mentored at Duke.
Even after her retirement from Burroughs Wellcome, Elion continued to work almost full-time at the lab. She played a significant role in the development of AZT, one of the first drugs used to treat HIV and AIDS. She was also crucial in the development of nelarabine, which she worked on until her death in 1999.
Rather than relying on trial and error, Elion and Hitchings discovered new drugs using rational drug design, which used the differences in biochemistry and metabolism between normal human cells and pathogens (disease-causing agents such as cancer cells, protozoa, bacteria, and viruses) to design drugs that could kill or inhibit the reproduction of particular pathogens without harming human cells. The drugs they developed are used to treat a variety of maladies, such as leukemia, malaria, lupus, hepatitis, arthritis, gout, organ transplant rejection (azathioprine), as well as herpes (acyclovir, which was the first selective and effective drug of its kind). Most of Elion's early work came from the use and development of purine derivatives.
Elion's research contributed to the development of:
Mercaptopurine (Purinethol), the first treatment for leukemia, also used in organ transplantation.
Azathioprine (Imuran), the first immuno-suppressive agent, used for organ transplants.
Allopurinol (Zyloprim), for gout.
Pyrimethamine (Daraprim), for malaria.
Trimethoprim (Proloprim, Monoprim, others), for meningitis, sepsis, and bacterial infections of the urinary and respiratory tracts.
Acyclovir (Zovirax), for viral herpes.
Nelarabine for cancer treatment.
Selected works by Gertrude B. Elion
“Interaction of Anticancer Drugs with Enzymes.” In Pharmacological Basis of Cancer Chemotherapy (1975).
Awards and honors
In 1988 Elion received the Nobel Prize in Physiology or Medicine, together with Hitchings and Sir James Black, for discoveries of "important new principles of drug treatment". Elion was the fifth woman Nobel laureate in Medicine and the ninth in science in general, and one of only a handful of laureates without a doctoral degree. She was the only woman honored with a Nobel Prize that year. She was elected a member of the National Academy of Sciences in 1990, a member of the Institute of Medicine in 1991, and a Fellow of the American Academy of Arts and Sciences, also in 1991.
Her awards include the Garvan-Olin Medal (1968), the Sloan-Kettering Institute Judd Award (1983), the American Chemical Society Distinguished Chemist Award (1985), the American Academy of Achievement's Golden Plate Award (1989), the American Association for Cancer Research Cain Award (1985), the American Cancer Society Medal of Honor (1990), the National Medal of Science (1991), and the Lemelson-MIT Lifetime Achievement Award (1997). In 1991 Elion became the first woman to be inducted into the National Inventors Hall of Fame. She was inducted into the National Women's Hall of Fame also in 1991. In 1992, she was elected to the Engineering and Science Hall of Fame. She was elected a Foreign Member of the Royal Society (ForMemRS) in 1995.
See also
Timeline of women in science
List of Jewish Nobel laureates
References
Further reading
Nobel Prize Women in Science by Sharon Bertsch McGrayne
Mary Ellen Avery, "Gertrude Elion", National Academy of Sciences Biographical Memoirs, Volume 78, National Academy Press, Washington, D.C., 2000.
Royal Society biographical memoir, Volume 54, 2008.
External links
1918 births
1999 deaths
Nobel laureates in Physiology or Medicine
Women Nobel laureates
American Nobel laureates
American women biochemists
American pharmacologists
Women pharmacologists
American people of Polish-Jewish descent
American people of Lithuanian-Jewish descent
American women inventors
Jewish American scientists
Jewish women scientists
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences
Members of the National Academy of Medicine
National Medal of Science laureates
Recipients of the Garvan–Olin Medal
Foreign members of the Royal Society
Duke University faculty
Lemelson–MIT Prize
Hunter College alumni
Polytechnic Institute of New York University alumni
Jewish chemists
20th-century American chemists
20th-century American inventors
20th-century American women scientists
Scientists from the Bronx | Gertrude B. Elion | [
"Technology"
] | 2,127 | [
"Women Nobel laureates",
"Women in science and technology"
] |