Dataset schema (one record per article; observed minimum and maximum per field):
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
9,416,468
https://en.wikipedia.org/wiki/Leratiomyces%20ceres
Leratiomyces ceres, commonly known as the chip cherry or redlead roundhead, is a mushroom which has a bright red to orange cap and a dark purple-brown spore deposit. It is usually found growing gregariously on wood chips and is one of the most common and most distinctive mushrooms found in that habitat. It is common on wood chips and lawns in North America, Europe, Australia, New Zealand and elsewhere. The name Stropharia aurantiaca has been used extensively but incorrectly for this mushroom (together with a number of similar synonyms). Description L. ceres may be described as follows. Cap: 2 to 6 cm in diameter, with thin flesh and a bright red to orange top which is convex to plane in age. Has white partial veil remnants when young. The cap surface is usually dry, but can be slightly viscid when moist. Gills: White to pale grey at first, later darker purple/brown or purplish grey with whitish edges. Attached (adnexed to adnate) and often notched. Stipe: Whitish, often with dark orange stains in age (most evident around the base), 3–6 cm long and 0.5 to 1 cm wide, equal to slightly larger at the base, which often has mycelium attached. The veil is thin and leaves a fragile, indistinct ring, sometimes missing with age. The stalk is smooth above the ring zone and fluffy with tiny scales below, which often wash off in rain. Spores: Dark purple/brown, 10–13.5 × 6–8.5 μm, elliptical and smooth. Other microscopic features: Chrysocystidia are present both on the edges and on the faces of the gills. Naming There has been some confusion between L. ceres, which has a fairly thick white stem, and L. squamosus var. thraustus, which has a slender stem and prominent scales below the ring zone (although the two taxa are quite easy to distinguish by sight). Around 1885 Mordecai Cubitt Cooke originated the names Agaricus squamosus f. aurantiacus and Agaricus thraustus var. aurantiacus, and these later gave rise to the name Stropharia aurantiaca. This name is defined by Cooke's illustration in his Handbook of British Fungi, and in 2004 Richard Fortey discovered that this illustration was not of L. ceres, as had generally been assumed, but of L. squamosus var. thraustus. Thus the name aurantiaca is best avoided, being wrong when applied to L. ceres. The name Agaricus ceres was created in 1888 by Cooke and Massee for the white-stemmed species; it was reclassified as Psilocybe ceres (in 1891) and Leratiomyces ceres (in 2008). Similar species Similar species include L. squamosus, Agrocybe putaminum, Gymnopilus sapineus, Psathyrella corrugis, Stropharia squamosa, S. thrausta, and Tubaria furfuracea. In psilocybin mushroom hunting communities in Australia and New Zealand, L. ceres (commonly nicknamed "Larrys") are scorned as lookalikes and imposters of Psilocybe species on wood chips. Prolific growth in the same habitats and a similar appearance from afar can give false hope of a large bounty, but on closer inspection the species are not particularly alike. References External links Mykoweb - Leratiomyces ceres Mushroom Expert - Leratiomyces ceres Strophariaceae Fungi of Europe Fungi of North America Fungi native to Australia Fungus species
Leratiomyces ceres
[ "Biology" ]
780
[ "Fungi", "Fungus species" ]
9,417,783
https://en.wikipedia.org/wiki/Phragmosome
The phragmosome is a sheet of cytoplasm forming in highly vacuolated plant cells in preparation for mitosis. In contrast to animal cells, plant cells often contain large central vacuoles occupying up to 90% of the total cell volume and pushing the nucleus against the cell wall. In order for mitosis to occur, the nucleus has to move into the center of the cell. This happens during G2 phase of the cell cycle. Initially, cytoplasmic strands form that penetrate the central vacuole and provide pathways for nuclear migration. Actin filaments along these cytoplasmic strands pull the nucleus into the center of the cell. These cytoplasmic strands fuse into a transverse sheet of cytoplasm along the plane of future cell division, forming the phragmosome. Phragmosome formation is only clearly visible in dividing plant cells that are highly vacuolated. Just before mitosis, a dense band of microtubules appears around the phragmosome and the future division plane just below the plasma membrane. This preprophase band marks the equatorial plane of the future mitotic spindle as well as the future fusion sites for the new cell plate with the existing cell wall. It disappears as soon as the nuclear envelope breaks down and the mitotic spindle forms. When mitosis is completed, the cell plate and new cell wall form starting from the center along the plane occupied by the phragmosome. The cell plate grows outwards until it fuses with the cell wall of the dividing cell at exactly the spots predicted by the preprophase band. References Further reading Cell cycle Mitosis Plant cells Cell anatomy
Phragmosome
[ "Biology" ]
345
[ "Cell cycle", "Cellular processes", "Mitosis" ]
9,418,169
https://en.wikipedia.org/wiki/Piwi
Piwi (or PIWI) genes were identified as regulatory proteins responsible for stem cell and germ cell differentiation. Piwi is an abbreviation of P-element Induced WImpy testis in Drosophila. Piwi proteins are highly conserved RNA-binding proteins and are present in both plants and animals. Piwi proteins belong to the Argonaute/Piwi family and have been classified as nuclear proteins. Studies on Drosophila have also indicated that Piwi proteins have no slicer activity conferred by the presence of the Piwi domain. In addition, Piwi associates with heterochromatin protein 1, an epigenetic modifier, and with piRNA-complementary sequences. These are indications of the role Piwi plays in epigenetic regulation. Piwi proteins are also thought to control the biogenesis of piRNA, as many Piwi-like proteins contain slicer activity which would allow Piwi proteins to process precursor piRNA into mature piRNA. Protein structure and function The structures of several Piwi and Argonaute (Ago) proteins have been solved. Piwi proteins are RNA-binding proteins with 2 or 3 domains: the N-terminal PAZ domain binds the 3'-end of the guide RNA; the middle MID domain binds the 5'-phosphate of RNA; and the C-terminal PIWI domain acts as an RNase H endonuclease that can cleave RNA. The small RNA partners of Ago proteins are microRNAs (miRNAs). Ago proteins utilize miRNAs to silence genes post-transcriptionally or use small-interfering RNAs (siRNAs) in both transcriptional and post-transcriptional silencing mechanisms. Piwi proteins interact with piRNAs (28–33 nucleotides) that are longer than miRNAs and siRNAs (~20 nucleotides), suggesting that their functions are distinct from those of Ago proteins. Human Piwi proteins Presently there are four known human Piwi proteins: PIWI-like protein 1, PIWI-like protein 2, PIWI-like protein 3 and PIWI-like protein 4. Human Piwi proteins all contain two RNA-binding domains, PAZ and Piwi. The four PIWI-like proteins have a spacious binding site within the PAZ domain which allows them to bind the bulky 2'-OCH3 at the 3' end of piwi-interacting RNA. One of the major human homologues, whose upregulation is implicated in the formation of tumours such as seminomas, is called hiwi (for human piwi). Homologous proteins in mice have been called miwi (for mouse piwi). Role in germline cells PIWI proteins play a crucial role in fertility and germline development across animals and ciliates. Recently identified as a polar granule component, PIWI proteins appear to control germ cell formation, so much so that in their absence there is a significant decrease in germ cell formation. Similar observations were made with the mouse homologs of PIWI: MILI, MIWI and MIWI2. These homologs are known to be expressed during spermatogenesis. Miwi is expressed in various stages of spermatocyte formation and spermatid elongation, whereas Miwi2 is expressed in Sertoli cells. Mice deficient in either Mili or Miwi-2 exhibit spermatogenic stem cell arrest, and those lacking Miwi-2 undergo a degradation of spermatogonia. The effects of piwi proteins in human and mouse germlines seem to stem from their involvement in translational control, as Piwi and the small noncoding RNA, piwi-interacting RNA (piRNA), are known to co-fractionate with polysomes. The piwi-piRNA pathway also induces heterochromatin formation at centromeres, thus affecting transcription. The piwi-piRNA pathway also appears to protect the genome.
First observed in Drosophila, mutations in the piwi-piRNA pathway led to a direct increase in dsDNA breaks in ovarian germ cells. The role of the piwi-piRNA pathway in transposon silencing may be responsible for the reduction in dsDNA breaks in germ cells. Role in RNA interference The piwi domain is a protein domain found in piwi proteins and a large number of related nucleic acid-binding proteins, especially those that bind and cleave RNA. The function of the domain, determined in the argonaute family of related proteins, is double-stranded-RNA-guided hydrolysis of single-stranded RNA. Argonautes, the most well-studied family of nucleic acid-binding proteins, are RNase H-like enzymes that carry out the catalytic functions of the RNA-induced silencing complex (RISC). In the well-known cellular process of RNA interference, the argonaute protein in the RISC complex can bind both small interfering RNA (siRNA) generated from exogenous double-stranded RNA and microRNA (miRNA) generated from endogenous non-coding RNA, both produced by the ribonuclease Dicer, to form an RNA-RISC complex. This complex binds and cleaves complementary messenger RNA, destroying it and preventing its translation into protein. Crystallised piwi domains have a conserved basic binding site for the 5' end of bound RNA; in the case of argonaute proteins binding siRNA strands, the last unpaired nucleotide base of the siRNA is also stabilised by base-stacking interactions between the base and neighbouring tyrosine residues. Recent evidence suggests that the functional role of piwi proteins in germ-line determination is due to their capacity to interact with miRNAs. Components of the miRNA pathway appear to be present in pole plasm and to play a key role in the early development and morphogenesis of Drosophila melanogaster embryos, in which germ-line maintenance has been extensively studied. piRNAs and transposon silencing A novel class of longer-than-average miRNAs known as Piwi-interacting RNAs (piRNAs) has been defined in mammalian cells, about 26–31 nucleotides long as compared to the more typical miRNA or siRNA of about 21 nucleotides. These piRNAs are expressed mainly in spermatogenic cells in the testes of mammals, but studies have reported that piRNA expression can also be found in ovarian somatic cells and neurons in invertebrates, as well as in many other mammalian somatic cells. piRNAs have been identified in the genomes of mice, rats, and humans, with an unusual "clustered" genomic organization that may originate from repetitive regions of the genome such as retrotransposons or regions normally organized into heterochromatin, and they are normally derived exclusively from the antisense strand of double-stranded RNA. piRNAs have thus been classified as repeat-associated small interfering RNAs (rasiRNAs). Although their biogenesis is not yet well understood, piRNAs and Piwi proteins are thought to form an endogenous system for silencing the expression of selfish genetic elements such as retrotransposons, thus preventing the gene products of such sequences from interfering with germ cell formation. Footnotes References External links Piwi domain in SCOP Piwi domain in PROSITE UNIPROT Piwi Piwi domains Proteins Protein domains
Piwi
[ "Chemistry", "Biology" ]
1,555
[ "Biomolecules by chemical classification", "Protein classification", "Protein domains", "Molecular biology", "Proteins" ]
9,418,352
https://en.wikipedia.org/wiki/Spongin
Spongin, a modified type of collagen protein, forms the fibrous skeleton of most organisms among the phylum Porifera, the sponges. It is secreted by sponge cells known as spongocytes. Spongin gives a sponge its flexibility. True spongin is found only in members of the class Demospongiae. Research directions Use in the removal of phenolic compounds from wastewater Researchers have found spongin to be useful in the photocatalytic degradation and removal of bisphenols (such as BPA) in wastewater. A heterogeneous catalyst consisting of a spongin scaffold for iron phthalocyanine (SFe), in conjunction with peroxide and UV radiation, has been shown to remove phenolic wastes more quickly and efficiently than conventional methods. Other research using spongin scaffolds for the immobilization of Trametes versicolor laccase has shown similar results in phenol degradation. References Marine biology Collagens
Spongin
[ "Chemistry", "Biology" ]
215
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry", "Marine biology" ]
9,418,538
https://en.wikipedia.org/wiki/Diagnostic%20board
In electronic systems a diagnostic board is a specialized device with diagnostic circuitry on a printed circuit board that connects to a computer or other electronic equipment, replacing an existing module or plugging into an expansion card slot. A multi-board electronic system such as a computer comprises multiple printed circuit boards or cards connected via connectors. When a fault occurs in the system, it is sometimes possible to isolate or identify the fault by replacing one of the boards with a diagnostic board. A diagnostic board can range from extremely simple to extremely sophisticated. Simple standard diagnostic plug-in boards for computers are available that display numeric codes to assist in identifying issues detected during the power-on self-test executed automatically during system startup. Dummy board A dummy board provides a minimal interface. This type of diagnostic board is intended to confirm that the interface is correctly implemented. For example, a PC motherboard manufacturer can test the PCI functionality of a motherboard by connecting a dummy PCI board into each PCI slot on the motherboard. Extender board An extender board (or board extender, card extender, extender card) is a simple circuit board that interposes between a card cage backplane and the circuit board of interest to physically 'extend' that board out from the card cage, allowing access to both sides of the circuit board to connect diagnostic equipment such as an oscilloscope or systems analyzer. For example, a PCI extender board can be plugged into a PCI slot on a computer motherboard, and then a PCI card connected to the extender board to 'extend' the board into free space for access. This approach was common in the 1970s and 1980s, particularly on S-100 bus systems. The concept can become unworkable when signal timing is affected by the length of the signal paths on the diagnostic board, and the extender can also introduce radio frequency interference (RFI) into the circuit under test because of a lack of adequate shielding. The use of extender boards is declining because of the wider use of multilayer flexible circuit boards and overall cheaper components, particularly in the consumer end of the electronics market. Sources Vector Electronics & Technology in North Hollywood, Calif. is one of the few companies still making these legacy boards. Electronic engineering
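The numeric codes mentioned above are, on PC compatibles, checkpoint bytes that the BIOS writes to I/O port 0x80 during the power-on self-test; a plug-in diagnostic card latches the last byte written and shows it as two hexadecimal digits. The Python sketch below illustrates only that decoding logic; the code table is hypothetical, since real code meanings differ between BIOS vendors and are listed in each board's manual.

```python
# Minimal sketch of what a POST diagnostic card does with checkpoint
# bytes: the BIOS writes one byte per checkpoint to I/O port 0x80,
# and the card latches it onto a two-digit hex display.
# The table below is purely illustrative; real meanings are
# BIOS-vendor-specific.

ILLUSTRATIVE_CODES = {
    0x00: "latch cleared / boot not started",
    0x55: "memory test (hypothetical example code)",
    0xFF: "end of POST, or hung before the first checkpoint",
}

def render(byte: int) -> str:
    """Format one latched checkpoint byte the way the card shows it."""
    hint = ILLUSTRATIVE_CODES.get(byte, "see the BIOS vendor's code table")
    return f"display: {byte:02X}  ({hint})"

# The card only ever shows the most recent write, so after a hang the
# display holds the code of the last checkpoint that was reached.
for checkpoint in (0x00, 0x55, 0xFF):
    print(render(checkpoint))
```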
Diagnostic board
[ "Technology", "Engineering" ]
457
[ "Electrical engineering", "Electronic engineering", "Computer engineering" ]
9,418,557
https://en.wikipedia.org/wiki/Preprophase
Preprophase is an additional phase during mitosis in plant cells that does not occur in other eukaryotes such as animals or fungi. It precedes prophase and is characterized by two distinct events: The formation of the preprophase band, a dense microtubule ring underneath the plasma membrane. The initiation of microtubule nucleation at the nuclear envelope. Function of preprophase in the cell cycle Plant cells are fixed with regards to their neighbor cells within the tissues they are growing in. In contrast to animals where certain cells can migrate within the embryo to form new tissues, the seedlings of higher plants grow entirely based on the orientation of cell division and subsequent elongation and differentiation of cells within their cell walls. Therefore, the accurate control of cell division planes and placement of the future cell wall in plant cells is crucial for the correct architecture of plant tissues and organs. The preprophase stage of somatic plant cell mitosis serves to establish the precise location of the division plane and future cell wall before the cell enters prophase. This is achieved through the formation of a transient microtubule structure, the preprophase band, and a so far unknown mechanism by which the cell is able to "memorize" the position of the preprophase band to guide the new cell wall growing during cytokinesis to the correct location. In gametophyte tissues during the reproductive phase of the plant life cycle, cell division planes may be established without the use of a preprophase band. In highly vacuolated plant cells, preprophase may be preceded by the formation of a phragmosome. The function of the phragmosome is to suspend the cell nucleus in the center of the cell in preparation for mitosis. If a phragmosome is visible, the preprophase band will appear at its outer edge. Preprophase band formation At the beginning of preprophase, the cortical microtubules of a plant cell disappear and aggregate into a dense ring underneath the plasma membrane. This preprophase band runs around the equatorial plane of the future mitotic spindle and marks the plane of cell division and future fusion site for the cell plate. It consists of microtubules and microfilaments (actin) and persists into prophase. Spindle formation occurs during prophase with the axis perpendicular to the plane surrounded by the preprophase band. Microtubule nucleation In contrast to animal cells, plant cells do not possess centrosomes to organize their mitotic spindles. Instead, the nuclear envelope acts as a microtubule organizing center (MTOC) for spindle formation during preprophase. The first sign is a clear, actin-free zone appearing around the nuclear envelope. This zone fills with microtubules nucleating on the surface of the nucleus. The preprophase spindle forms by self-assembly of these microtubules in the cytoplasm surrounding the nuclear envelope. It is reinforced through chromosome (kinetochore)-mediated spindle assembly after the nuclear envelope breaks down at the beginning of prometaphase. Transition into prophase During progression from preprophase into prophase, the randomly oriented microtubules align parallel along the nuclear surface according to the spindle axis. This structure is called the prophase spindle. Triggered by nuclear membrane breakdown at the beginning of prometaphase, the preprophase band disappears and the prophase spindle matures into the metaphase spindle occupying the space of the former nucleus. 
Experiments with drugs that destroy microfilaments indicate that actin may play a role in keeping the cellular "memory" of the position of the division plane after the preprophase band breaks down, directing cytokinesis in telophase. Notes and references Bibliography P.H. Raven, R.F. Evert, S.E. Eichhorn (2005): Biology of Plants, 7th Edition, W.H. Freeman and Company Publishers, New York. Cell cycle Mitosis Plant cells
Preprophase
[ "Biology" ]
857
[ "Cell cycle", "Cellular processes", "Mitosis" ]
9,419,642
https://en.wikipedia.org/wiki/Implication%20graph
In mathematical logic and graph theory, an implication graph is a skew-symmetric, directed graph G = (V, E) composed of a vertex set V and a directed edge set E. Each vertex in V represents the truth status of a Boolean literal, and each directed edge from a vertex u to a vertex v represents the material implication "If the literal u is true then the literal v is also true". Implication graphs were originally used for analyzing complex Boolean expressions. Applications A 2-satisfiability instance in conjunctive normal form can be transformed into an implication graph by replacing each of its disjunctions by a pair of implications. For example, the disjunction (x ∨ y) can be rewritten as the pair (¬x → y), (¬y → x). An instance is satisfiable if and only if no literal and its negation belong to the same strongly connected component of its implication graph; this characterization can be used to solve instances in linear time. In CDCL SAT-solvers, unit propagation can be naturally associated with an implication graph that captures all possible ways of deriving all implied literals from decision literals, which is then used for clause learning. References Boolean algebra Application-specific graphs Directed graphs Graph families
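The strongly-connected-component test described above translates directly into code. The following is a minimal Python sketch (an illustration, not taken from any reference): literals are encoded as integers, each disjunction contributes its two implications to the graph, Kosaraju's algorithm labels the components in topological order, and the instance is unsatisfiable exactly when some variable shares a component with its negation.

```python
def solve_2sat(num_vars, clauses):
    """clauses: iterable of (a, b); literal 2*v encodes x_v,
    2*v + 1 encodes NOT x_v. Returns a list of booleans or None."""
    n = 2 * num_vars
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for a, b in clauses:
        # Each disjunction (a OR b) becomes the two implications
        # (NOT a -> b) and (NOT b -> a); x ^ 1 negates a literal.
        adj[a ^ 1].append(b); radj[b].append(a ^ 1)
        adj[b ^ 1].append(a); radj[a].append(b ^ 1)

    # Kosaraju pass 1: record vertices in order of DFS finish time.
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v)
                stack.pop()

    # Pass 2: label components on the reversed graph, in reverse
    # finish order; labels then increase along the condensation's
    # topological order.
    comp, label = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = label
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = label
                    stack.append(w)
        label += 1

    # Unsatisfiable iff some x_v and NOT x_v share a component;
    # otherwise set x_v true iff its component comes later topologically.
    result = []
    for v in range(num_vars):
        if comp[2 * v] == comp[2 * v + 1]:
            return None
        result.append(comp[2 * v] > comp[2 * v + 1])
    return result

# (x0 OR x1) AND (NOT x0 OR x1): satisfiable, x1 must be true.
print(solve_2sat(2, [(0, 2), (1, 2)]))   # -> e.g. [True, True]
```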
Implication graph
[ "Mathematics" ]
227
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic" ]
9,420,449
https://en.wikipedia.org/wiki/495%20%28number%29
495 (four hundred [and] ninety-five) is the natural number following 494 and preceding 496. Mathematics Kaprekar's routine is defined as follows for three-digit numbers: Take any three-digit number, other than repdigits such as 111. Leading zeros are allowed. Arrange the digits in descending and then in ascending order to get two three-digit numbers, adding leading zeros if necessary. Subtract the smaller number from the bigger number. Go back to step 2 and repeat. Repeating this process will always reach 495 in a few steps. Once 495 is reached, the process stops because 954 – 459 = 495. The number 6174 has the same property for four-digit numbers, albeit with a much greater percentage of workable starting numbers. See also Collatz conjecture, an iterative sequence (without digit rearrangement) conjectured to always end with the number 1. References Integers
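A minimal Python sketch of the routine just described, using zero-padded three-digit strings so that leading zeros are kept:

```python
def kaprekar_step(n):
    """One step of the routine: sort the digits descending and
    ascending (always keeping three digits) and subtract."""
    digits = f"{n:03d}"                      # pad with leading zeros
    hi = int("".join(sorted(digits, reverse=True)))
    lo = int("".join(sorted(digits)))
    return hi - lo

def steps_to_495(n):
    """Number of steps until 495 is reached; None for repdigits,
    which collapse to 0 instead of reaching 495."""
    count = 0
    while n != 495:
        n = kaprekar_step(n)
        if n == 0:          # repdigit input, e.g. 111 - 111 = 0
            return None
        count += 1
    return count

print(steps_to_495(208))  # 820 - 028 = 792, 972 - 279 = 693, ... -> 4 steps
```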
495 (number)
[ "Mathematics" ]
212
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
9,420,905
https://en.wikipedia.org/wiki/Mobile%20Brigade%20Corps
The Mobile Brigade Corps (Korps Brigade Mobil), abbreviated Brimob, is the special operations, paramilitary, and tactical unit of the Indonesian National Police (Polri). It is one of the oldest existing units within Polri. Some of its main duties are counter-terrorism, riot control, high-risk law enforcement where the use of firearms is present, search and rescue, hostage rescue, and bomb disposal operations. The Mobile Brigade Corps is a large component of the Indonesian National Police trained for counter-separatist and counter-insurgency duties, often in conjunction with military operations. The Mobile Brigade Corps consists of two branches, namely Gegana and Pelopor. Gegana is tasked with carrying out more specific special police operations such as bomb disposal, CBR handling (chemical, biological, and radiological), counter-terrorism, and intelligence. Meanwhile, the Pelopor are tasked with carrying out broader paramilitary operations, such as riot control, search and rescue (SAR), security of vital installations, and guerrilla operations. Brimob is classified as a Police Tactical Unit (PTU) and is operationally a police Special Weapons and Tactics (SWAT) unit (including Densus 88 and Gegana Brimob) which supports other general police units. Each regional police force (Polda) in Indonesia has its own Brimob unit. History Formed in late 1945 as a special police corps named Pasukan Polisi Istimewa (Special Police Troops) with the task of disarming remnants of the Japanese Imperial Army and protecting the chief of state and the capital city. Under the Japanese, it was called . It fought in the revolution and was the first military unit to engage in the Battle of Surabaya under the command of Police Inspector Moehammad Jasin. On 14 November 1946, Prime Minister Sutan Sjahrir reorganised all Polisi Istimewa, Barisan Polisi Istimewa and Pasukan Polisi Istimewa, merging them into the Mobile Brigade (Mobrig). This day is celebrated as the anniversary of this Blue Beret Corps. This Corps was reconstituted to suppress military and police conflicts and even coups d'état. On 1 December 1947, Mobrig was militarized and later deployed in various conflicts and confrontations such as the PKI rebellion in Madiun, the Darul Islam rebellion (1947), the APRA coup d'état and proclamation of the Republic of South Maluku (1950), the PRRI rebellion (1953), and the Permesta rebellion (1958). On 14 November 1961, Mobrig changed its name to Korps Brigade Mobil (Brimob), and its troops took part in the military confrontation with Malaysia in the early 1960s and in the conflict in East Timor in the mid-1970s. After that, Brimob was placed under the command of the Indonesian National Police. The Mobile Brigade, which began forming in late 1946 and was used during the anti-Dutch Revolution, started sending students for US Army SF training on Okinawa in January 1959. In April 1960 a second contingent arrived for two months of Ranger training. By the mid-1960s the three-battalion Mobile Brigade, commonly known as Brimob, had been converted into an elite shock force. A Brimob airborne training centre was established in Bandung. Following the 1965 coup attempt, one Brimob battalion was used during anti-Communist operations in West Kalimantan. In December 1975 a Brimob battalion was used during the East Timor operation. During the late 1970s, Brimob assumed VIP security and urban anti-terrorist duties. In 1989, Brimob still contained airborne-qualified elements.
Pelopor ('Ranger') and airborne training take place in Bandung and at a training camp outside Jakarta. Historically, Brimob wore the Indonesian spot camouflage pattern as its uniform during the early 1960s. In 1981, the Mobile Brigade spawned a new unit called "Jihandak" (Penjinak Bahan Peledak), an explosive ordnance disposal (EOD) unit. Task The deployment and mobilization of the Brimob Corps is to cope with high-intensity disturbances of public order, mainly mass riots, organized crime armed with firearms, search and rescue, and explosive, chemical, biological and radioactive threats, together with other police operational elements, in order to realize legal order and peace of society throughout the jurisdiction of Indonesia, along with other tasks assigned to the corps. Qualifications The Pelopor qualifications, which are the basic capabilities of every Brimob member, are the following basic skills: ability to navigate with map and compass; intelligence; anti-terror (counter-terrorism); riot control; guerrilla warfare and close/urban combat tactics; bomb disposal; handling high-intensity crimes where the use of firearms is present; search and rescue; surveillance, disguise and pursuit; and other individual and unit capabilities. Function The Mobile Brigade Corps functions as Polri's principal operating unit with specific capabilities (riot control, combat countermeasures, mobile detective work, counter-terrorism, bomb disposal, and search and rescue) in the framework of high-level domestic security, with well-trained search and rescue personnel, solid leadership, and equipment and supplies with modern technology. Role The role of Brimob, together with other police functions, is to act against high-level crime, mainly mass riots, organized crime with firearms, and bomb, chemical, biological and radioactive threats, in order to realize legal order and peace of society in all jurisdictional areas of Indonesia. Roles undertaken include: helping other police functions; complementing territorial police operations carried out in conjunction with other police functions; protecting members of other police units as well as civilians who are under threat; strengthening other police functions in the implementation of regional operational tasks; and replacing and handling territorial police duties if the situation or task objective has already led to high-grade crime. Organisation In 1992 the Mobile Brigade was essentially a paramilitary organisation trained and organised along military lines. It had a strength of about 12,000. The brigade was used primarily as an elite unit for emergencies and supporting police operations as a rapid response unit. The unit was mainly deployed for domestic security and defense operations, but has since gained many specialties in the scope of policing duties such as SWAT operations, search and rescue operations, riot control and CBR (chemical, biological and radiological) defense. Brimob units are also usually sent on domestic security operations with the TNI. Since the May 1998 upheaval, PHH (Pasukan Anti Huru-Hara, Anti Riot Unit) personnel have received special anti-riot training. Elements of the unit are cross-trained for airborne and search and rescue operations.
In each Police HQ representing a province (known as a Polda) in Indonesia, there is an organized Brimob force consisting of a command headquarters, several detachments of Pelopor police personnel organized into a regiment, and usually one or two detachments of Gegana. The Chief of the Indonesian National Police, known as Kapolri, has the highest command in each police operation including Brimob; orders are delivered by the police chief and then executed by his Operational Assistant, with further notification to the Corps Commandant and then to the concerned regional commanders. National Level Units Corps HQ and HQ Services; Brimob Corps Training School; Gegana Battalion; HHC Pelopor Brigade; Brigade HQ and HQ Company; I (1st) Pelopor Regiment; II (2nd) Pelopor Regiment; III (3rd) Pelopor Regiment; IV (4th) Pelopor Regiment; Intelligence Unit; Training Command. Pelopor Pelopor (lit. "Pioneer") is the main reaction force of the Mobile Brigade Corps; it acts as a troop formation and has the roles of mainly riot control and conducting paramilitary operations assigned to the corps to cope with high-level threats of societal disturbance. It also specializes in the fields of guerrilla warfare and search and rescue (SAR) operations. There are today four national regiments of Pelopor in the Brimob corps: the I, II, III and IV Pelopor Regiments. Historically, this unit was called the "Brimob Rangers" during the post-independence era. In 1959, during its first formation, Brimob Rangers troops conducted a test mission in the area of Cibeber, Ciawi and Cikatomas, which borders Tasikmalaya-Garut in West Java. It was the baptism of fire of the Rangers, in which the newly acquired skills of guerrilla warfare and counter-insurgency operations were applied against remnants of Darul Islam in these communities. The actions against the Islamic Army of Indonesia (TII) units in the province weakened the DI even further, leading to the total collapse of the local DI provincial chapter in 1962 and ending a decade-long period of violence there. The first official forward deployment of the Brimob Rangers was the Fourth Military Operations Movement in South Sumatra, West Sumatra and North Sumatra (in response to the Permesta rebellion of 1958). Brimob Rangers troops became part of the Bangka Belitung Infantry Battalion led by Lieutenant Colonel (Inf) Dani Effendi. The Rangers were tasked to capture a prison in Sumatra's forests held by PRRI remnants led by Major Malik, which was then in rebel hands. In 1961, under the express orders of then Chief of Police General Soekarno Djoyonegoro, Brimob Rangers troops were officially renamed the Pelopor Troops of the Mobile Brigade. This was in accordance with the wishes of President Sukarno, who wanted Indonesian names for units within both the TNI (Indonesian National Armed Forces) and Polri (Indonesian National Police). At this time the Pelopor constables and NCOs also received many brand-new weapons for police and counter-insurgency operations, including the more famous AR-15 assault rifles. The subsequent assignment of this force was to infiltrate West Irian at Fak-Fak in May 1962 and engage in combat with servicemen of the Royal Netherlands Army during Operation Trikora. The troops were also involved in the Confrontation with Malaysia in 1964, during which the Brimob Rangers (by then Pelopor) faced the British Special Air Service.
Pelopor Troops serve as a troop formation unit and remain active in Brimob's operational system. Aside from the national regiments, each police region has a Pelopor regiment of two to four battalions. Gegana Gegana is a special branch detachment within the Brimob corps which has special capabilities mainly in bomb disposal and counter-terrorist operations. It also specializes in the fields of hostage rescue, intelligence and CBR (chemical, biological and radiological) defense. The national Gegana unit is organized into a battalion headquarters company and five detachments: an Intelligence Detachment, a Bomb Disposal Detachment, an Anti-Terror Detachment, an Anti-Anarchist Detachment, and a CBR (chemical, biological and radiological) Detachment. This unit was formed in 1976 as a detachment. At first, it was meant to deal with aircraft hijacking. Later, in 1995, with the expansion of Brimob, the Gegana Detachment was expanded to become the 2nd Brimob Regiment. However, only a select few specialists are highly skilled in these specialties. Gegana does not have battalions or companies; the regiment is broken down into several detachments. Each detachment is split into sub-detachments (sub-den), and each sub-den is further subdivided into several units. Each unit usually consists of 10 personnel; one sub-den consists of 40 personnel, and one detachment consists of about 280 personnel. One operation is usually assigned to one unit. Of the 10 people in that unit, six are required to have special skills: two for EOD (explosive ordnance disposal), two for search and rescue operations, and two for counter-terrorist operations. In any operation, two experts are designated Operators One and Two while the rest of the unit members form the Support Team. For example, in counter-terrorism operations, the designated Operators must have sharpshooting skills, the ability to negotiate, and expertise in storm-and-arrest procedures. These skills and operations are not meant to be lethal, because the main goal of every Gegana operation is to arrest suspects and bring them to court; unless the situation is compromised, Gegana avoids the use of lethal force. In search and rescue operations, the personnel are required to have basic capabilities in diving, rappelling, shooting, and first aid. In bomb disposal operations, the Operators have to be experts in their respective fields. Each Gegana member has been introduced to various types of bombs in general, including the risks of handling them. There are specific procedures for handling each bomb, including the required timing. Currently, Gegana's national battalion has three explosive ordnance disposal (EOD) tactical vehicles. Gegana battalions or companies are present in each provincial police unit. Unit composition Alongside the national units, regional formations of the Mobile Brigade are present in all 38 Regional Police Forces (Polda) in Indonesia, each representing a province. In each Brimob unit of a provincial police HQ (Polda), there are several detachments of Pelopor units (organized into a regiment) and usually one or two detachments of Gegana (small battalions or companies).
A Brimob unit of a regional police headquarters consists of the following: Regional Mobile Brigade HQ Section (Si-yanma); Planning and Administration Section (Subbagrenmin); Intelligence Section (Si-intel); Operational Section (Si-ops); Provost (internal affairs) Section (Si-provos); Communications Technology Section (Si-tekkom); Medical and Fitness Section (Si-kesjas); Search and Rescue (SAR) Unit; a Pelopor Regiment composed of the Regional Pelopor Regimental HQ, A Detachment (Den-A), B Detachment (Den-B), C Detachment (Den-C), D Detachment (Den-D) (large departments only) and support units; and a Gegana Detachment (Den Gegana) with a Detachment HQ and 1-3 subdetachments/platoons. For some regional police headquarters, the Pelopor regiment runs only up to a "C" Detachment (three battalions each), but for bigger regional police HQs such as the Jakarta Regional Metropolitan Police (Polda Metro Jaya), it runs up to a "D" Detachment, a total of four detachments. Each Pelopor Detachment consists of four companies, and each company consists of three platoons. The Gegana detachment is organized as a company in most police regions, but in larger ones it is organized as a full battalion of two detachments and a headquarters company. In the 2020s, the regional organization was amended with regional divisional commands (Pasukan Brigade Mobil), to which each provincial brigade reports directly. The Brimob divisions are led by Police Brigadier Generals. Controversies 2020s In the Kanjuruhan Stadium disaster on 1 October 2022, the police, especially Brimob as the crowd control unit, deployed tear gas, which triggered a stampede of people in the stadium trying to escape its effects. A crush formed at an exit, resulting in fans being asphyxiated. The disaster claimed 135 lives, including two police officers and dozens of children under the age of 17. Several officers who operated the tear gas launchers were questioned. Only three officers, two high-ranking officers (non-Brimob) and one Brimob commander, were accused in the trial. Of the three, only the Mobile Brigade commander was sentenced: one year and six months for violating Article 359, Article 360 paragraph 1, and Article 360 paragraph 2 of the Criminal Code (KUHP), namely negligence causing the death of or injury to another person. On 16 January 2023, Mobile Brigade members "intimidated" and disrupted the trial by chanting in the courtroom. Gallery See also Detachment 88 or Densus 88, Indonesian special counter-terrorism squad; Mobile Police Command, a Vietnamese equivalent of Brimob. References External links Video about Brimob 51 Tahun Si Baret Biru February 1962 – Summer 1963: In to Action Gegana Operators In Action on Instagram 1946 establishments in Indonesia Specialist law enforcement agencies of Indonesia Non-military counterterrorist organizations Bomb disposal Non-military counterinsurgency organizations Military units and formations of Indonesia in the Indonesian War of Independence
Mobile Brigade Corps
[ "Chemistry" ]
3,428
[ "Explosion protection", "Bomb disposal" ]
9,421,020
https://en.wikipedia.org/wiki/Copper%20silicide
Copper silicide can refer to more than one compound of copper and silicon; the best known is pentacopper silicide, Cu5Si. Pentacopper silicide is a binary compound of silicon with copper. It is an intermetallic compound, meaning that it has properties intermediate between those of an ionic compound and an alloy. It is a silvery crystalline solid that is insoluble in water. It forms upon heating mixtures of copper and silicon. Applications Copper silicide thin film is used for passivation of copper interconnects, where it serves to suppress diffusion and electromigration and acts as a diffusion barrier. Copper silicides are invoked in the Direct process, the industrial route to organosilicon compounds. In this process, copper, in the form of its silicide, catalyses the addition of methyl chloride to silicon. An illustrative reaction affords the industrially useful dimethyldichlorosilane: 2 CH3Cl + Si → (CH3)2SiCl2 References Copper compounds Transition metal silicides
Copper silicide
[ "Chemistry" ]
212
[ "Inorganic compounds", "Inorganic compound stubs" ]
9,421,045
https://en.wikipedia.org/wiki/Jimm
Jimm is an alternative open-source instant messaging client for the ICQ network. It is written in Java ME and should work on most mobile devices that follow the MIDP specification. Jimm is licensed under the terms of the GNU General Public License. History The creator of Jimm is Manuel Linsmayer. In 2003 he released a client called Mobicq, which allowed users to view a contact list and exchange messages over the OSCAR protocol (ICQ v8). In 2004 AOL banned the use of the name "Mobicq" because it contained part of the company trademark "ICQ". At that time, the client was able to display status, display information about users, play sounds and display messages in a chat view. It was decided to rename Mobicq to Jimm. The name "Jimm" stands for "Java Instant Mobile Messenger". Jimm development team Manuel Linsmayer (founder of the Jimm project), Andreas "Rossi" Rossbacher, Denis "ArtDen" Artemov, Ivan "Rad1st" Mikitevich. External links Jimm Website 2004 software Free instant messaging clients Free software programmed in Java (programming language)
Jimm
[ "Technology" ]
238
[ "Computing stubs", "Software stubs" ]
9,421,082
https://en.wikipedia.org/wiki/Order-5%20square%20tiling
In geometry, the order-5 square tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {4,5}. Related polyhedra and tiling This tiling is topologically related as part of a sequence of regular polyhedra and tilings with vertex figure (4^n). This hyperbolic tiling is related to a semiregular infinite skew polyhedron with the same vertex figure in Euclidean 3-space. References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Square tiling Uniform tilings in hyperbolic plane List of regular polytopes Medial rhombic triacontahedron External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Isohedral tilings Order-5 tilings Regular tilings Square tilings
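That {4,5} belongs to the hyperbolic plane can be checked from the Schläfli symbol alone, using the standard angle criterion for regular tilings (a well-known fact, worked out here for illustration rather than taken from the article):

```latex
% A regular tiling {p,q} is spherical, Euclidean, or hyperbolic
% according to whether (p-2)(q-2) is less than, equal to, or
% greater than 4. For the order-5 square tiling {4,5}:
\[
(p-2)(q-2) = (4-2)(5-2) = 6 > 4.
\]
% Equivalently, five squares around one vertex would require
% 5 * 90 = 450 degrees in the Euclidean plane, exceeding 360,
% so the tiling only closes up in the hyperbolic plane.
```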
Order-5 square tiling
[ "Physics" ]
228
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Isohedral tilings", "Symmetry" ]
9,421,111
https://en.wikipedia.org/wiki/Order-4%20pentagonal%20tiling
In geometry, the order-4 pentagonal tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {5,4}. It can also be called a pentapentagonal tiling in a bicolored quasiregular form. Symmetry This tiling represents a hyperbolic kaleidoscope of 5 mirrors meeting as edges of a regular pentagon. This symmetry is called *22222 in orbifold notation, with 5 order-2 mirror intersections. In Coxeter notation it can be represented as [5*,4], removing two of three mirrors (passing through the pentagon center) in the [5,4] symmetry. The kaleidoscopic domains can be seen as bicolored pentagons, representing mirror images of the fundamental domain. This coloring represents the uniform tiling t1{5,5} and as a quasiregular tiling is called a pentapentagonal tiling. Related polyhedra and tiling This tiling is topologically related as part of a sequence of regular polyhedra and tilings with pentagonal faces, starting with the dodecahedron, with Schläfli symbol {5,n}, progressing to infinity. This tiling is also topologically related as part of a sequence of regular polyhedra and tilings with four faces per vertex, starting with the octahedron, with Schläfli symbol {n,4}, with n progressing to infinity. This tiling is topologically related as part of a sequence of regular polyhedra and tilings with vertex figure (4n). References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) H.S.M. Coxeter, Regular honeycombs in hyperbolic space, invited lecture, ICM, Amsterdam, 1954. See also Square tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Isohedral tilings Order-4 tilings Pentagonal tilings Regular tilings
Order-4 pentagonal tiling
[ "Physics" ]
483
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Isohedral tilings", "Symmetry" ]
9,421,333
https://en.wikipedia.org/wiki/Truncated%20heptagonal%20tiling
In geometry, the truncated heptagonal tiling is a semiregular tiling of the hyperbolic plane. Each vertex is surrounded by one triangle and two tetradecagons. It has Schläfli symbol of t{7,3}. The tiling has a vertex configuration of 3.14.14. Dual tiling The dual tiling is called an order-7 triakis triangular tiling, seen as an order-7 triangular tiling with each triangle divided into three by a center point. Related polyhedra and tilings This hyperbolic tiling is topologically related as part of a sequence of uniform truncated polyhedra with vertex configurations (3.2n.2n) and [n,3] Coxeter group symmetry. From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling. Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are eight forms. See also Truncated hexagonal tiling Heptagonal tiling Tilings of regular polygons List of uniform tilings References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Heptagonal tilings Hyperbolic tilings Isogonal tilings Semiregular tilings Truncated tilings
Truncated heptagonal tiling
[ "Physics" ]
343
[ "Semiregular tilings", "Truncated tilings", "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Symmetry" ]
9,421,365
https://en.wikipedia.org/wiki/Rhombitriheptagonal%20tiling
In geometry, the rhombitriheptagonal tiling is a semiregular tiling of the hyperbolic plane. At each vertex of the tiling there is one triangle and one heptagon, alternating between two squares. The tiling has Schläfli symbol rr{7,3}. It can be constructed as a rectified triheptagonal tiling, r{7,3}, and can also be seen as an expanded heptagonal tiling or expanded order-7 triangular tiling. Dual tiling The dual tiling is called a deltoidal triheptagonal tiling, and consists of congruent kites. It is formed by overlaying an order-3 heptagonal tiling and an order-7 triangular tiling. Related polyhedra and tilings From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling. Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are eight forms. Symmetry mutations This tiling is topologically related as part of a sequence of cantellated polyhedra with vertex figure (3.4.n.4), and continues as tilings of the hyperbolic plane. These vertex-transitive figures have (*n32) reflectional symmetry. See also Rhombitrihexagonal tiling Order-3 heptagonal tiling Tilings of regular polygons List of uniform tilings Kagome lattice References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Semiregular tilings
Rhombitriheptagonal tiling
[ "Physics" ]
416
[ "Semiregular tilings", "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Symmetry" ]
9,421,390
https://en.wikipedia.org/wiki/Snub%20triheptagonal%20tiling
In geometry, the order-3 snub heptagonal tiling is a semiregular tiling of the hyperbolic plane. There are four triangles and one heptagon on each vertex. It has Schläfli symbol of sr{7,3}. The snub tetraheptagonal tiling is another related hyperbolic tiling, with Schläfli symbol sr{7,4}. Images Drawn in chiral pairs, with edges missing between black triangles: Dual tiling The dual tiling is called an order-7-3 floret pentagonal tiling, and is related to the floret pentagonal tiling. Related polyhedra and tilings This semiregular tiling is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n). These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for n=6 and in the hyperbolic plane for any higher n. The series can be considered to begin with n=2, with one set of faces degenerated into digons. From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling. Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are eight forms. References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Snub hexagonal tiling Floret pentagonal tiling Order-3 heptagonal tiling Tilings of regular polygons List of uniform planar tilings Kagome lattice External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Chiral figures Hyperbolic tilings Isogonal tilings Semiregular tilings Snub tilings
Snub triheptagonal tiling
[ "Physics", "Chemistry" ]
449
[ "Snub tilings", "Semiregular tilings", "Isogonal tilings", "Tessellation", "Stereochemistry", "Chirality", "Hyperbolic tilings", "Stereochemistry stubs", "Chiral figures", "Symmetry" ]
9,421,457
https://en.wikipedia.org/wiki/Truncated%20order-7%20triangular%20tiling
In geometry, the order-7 truncated triangular tiling, sometimes called the hyperbolic soccerball, is a semiregular tiling of the hyperbolic plane. There are two hexagons and one heptagon on each vertex, forming a pattern similar to a conventional soccer ball (truncated icosahedron) with heptagons in place of pentagons. It has Schläfli symbol of t{3,7}. Hyperbolic soccerball (football) This tiling is called a hyperbolic soccerball (football) for its similarity to the truncated icosahedron pattern used on soccer balls. Small portions of it can be constructed as a hyperbolic surface in 3-space. Dual tiling The dual tiling is called a heptakis heptagonal tiling, named for being constructible as a heptagonal tiling with every heptagon divided into seven triangles by the center point. Related tilings This hyperbolic tiling is topologically related as part of a sequence of uniform truncated polyhedra with vertex configurations (n.6.6) and [n,3] Coxeter group symmetry. From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling. Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are eight forms. In popular culture This tiling features prominently in HyperRogue. See also Triangular tiling Order-3 heptagonal tiling Order-7 triangular tiling Tilings of regular polygons List of uniform tilings References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Geometric explorations on the hyperbolic football by Frank Sottile Hyperbolic tilings Isogonal tilings Order-7 tilings Semiregular tilings Triangular tilings Truncated tilings
Truncated order-7 triangular tiling
[ "Physics" ]
451
[ "Semiregular tilings", "Truncated tilings", "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Symmetry" ]
9,421,673
https://en.wikipedia.org/wiki/Parts%20cleaning
Parts cleaning is a step in various industrial processes, either as preparation for surface finishing or to safeguard delicate components. One such process, electroplating, is particularly sensitive to part cleanliness, as even thin layers of oil can hinder coating adhesion. Cleaning methods encompass solvent cleaning, hot alkaline detergent cleaning, electro-cleaning, and acid etching. In industrial settings, the water-break test is a common practice for assessing surface cleanliness. This test involves thoroughly rinsing the surface and holding it vertically. Hydrophobic contaminants, like oils, cause the water to bead and break, leading to rapid drainage. In contrast, perfectly clean metal surfaces are hydrophilic and retain an unbroken sheet of water without beading or draining off. It is important to note that this test may not detect hydrophilic contaminants, but these can be displaced during the water-based electroplating process. Surfactants like soap can reduce the test's sensitivity and should be thoroughly rinsed off. Definitions and classifications For the activities described here, the following terms are often found: metal cleaning, metal surface cleaning, component cleaning, degreasing, parts washing, and parts cleaning. These are well established in technical usage, but they have their shortcomings. Metal cleaning can easily be confused with the refinement of unpurified metals. Metal surface cleaning and metal cleaning do not account for the increasing use of plastics and composite materials in this sector. The term component cleaning leaves out the cleaning of steel sections and sheets, and finally, degreasing describes only a part of the topic, as in most cases chips, fines, particles, salts, etc. also have to be removed. The terms "commercial and industrial parts cleaning", "parts cleaning in craft and industry", or "commercial parts cleaning" probably best describe this field of activity. Some specialists prefer the term "industrial parts cleaning" because they want to exclude maintenance of buildings, rooms, areas, windows, floors, tanks, machinery, hygiene, hand washing, showers, and other non-commercial objects. Elements and their interactions Cleaning activities in this sector can only be characterized sufficiently by a description of several factors. Parts and materials to be cleaned First, consider the parts to be cleaned. They may comprise unprocessed or barely processed sections, sheets and wires, but also machined parts or assembled components needing cleaning. They may therefore be composed of different metals or different combinations of metals. Plastics and composite materials are frequently found and indeed are on the increase, because the automobile industry, among others, is using more and lighter materials. Mass can be very important for the selection of cleaning methods. For example, big shafts for ships are usually cleaned manually, whereas tiny shafts for electrical appliances are often cleaned in bulk in highly automated plants. Similarly important is the geometry of the parts. Long, thin, branching, threaded holes, which can contain jammed chips, feature among the greatest challenges in this technical field. High-pressure and power-wash processes are one way to remove these chips, as are robots programmed to flush the drilled holes precisely under high pressure. Contaminations The parts are usually covered by unwanted substances, contaminants, or soiling.
What counts as contamination depends on the definition used. In certain cases, these coverings may be desired: e.g. one may not wish to remove a paint layer but only the material on top of it. In other cases, where crack inspection is necessary, the paint layer has to be removed, as it is regarded as an unwanted substance. The classification of soiling follows the layer structure, starting from the base material: deformed boundary layer, > 1 μm; reaction layer, 1–10 nm; sorption layer, 1–10 nm; contamination layer, > 1 μm (see illustration 2: structure of a metallic surface). The closer a layer is to the substrate surface, the more energy is needed to remove it. Correspondingly, the cleaning itself can be structured according to the type of energy input: mechanical–abrasive (blasting, grinding); mechanical–non-abrasive (stirring, mixing, ultrasound, spraying); thermal–reactive (heat treatment well above 100 °C in reactive gases); thermal–non-reactive (temperatures below 100 °C, increased bath temperature, vapor degreasing); chemical–abrasive/reactive (pickling in liquids, plasma-assisted cleaning, sputter-cleaning, electropolishing); chemical–non-reactive (organic solvents, aqueous solutions, supercritical CO2). The contamination layer may then be further classified according to origin; composition (e.g. cooling lubricants may be composed differently, and single components may account for big problems, especially for job-shop cleaners, who have no control over prior processes and thus do not know the contaminants; for example, silicates may obstruct nitriding); state of aggregation; and chemical and physical properties. The American Society for Testing and Materials (ASTM) presents six groups of contaminants in its manual "Choosing a cleaning process" and relates them to the most common cleaning methods, discussing the suitability of each method for removing a given contaminant. In addition, it lists exemplary cleaning processes for different typical applications. Since very many different aspects have to be considered when choosing a process, this can only serve as a first orientation. The groups of contaminants are: pigmented drawing compounds; unpigmented oil and grease; chips and cutting fluids; polishing and buffing compounds; rust and scale; and others. Charging In order to select suitable equipment and media, the amount and throughput to be handled should also be known. In larger factories, small amounts can hardly ever be cleaned economically. Additionally, the pricing method needs to be determined. Sensitive parts sometimes need to be fixed in boxes. When dealing with large amounts, bulk charging can be used, but it is difficult to achieve a sufficient level of cleanliness with flat pieces clinging together. Drying can also be difficult in these cases. Place of cleaning Another consideration is the place of cleaning. Cleaning in a workshop calls for different methods than cleaning that has to be done on site, which can be the case with maintenance and repair work. Usually, the cleaning takes place in a workshop. Several common methods include solvent degreasing, vapor degreasing, and the use of an aqueous parts washer. Companies often want the charging, loading and unloading to be integrated into the production line, which is much more demanding as regards the size and throughput capability of the cleaning system. Such cleaning systems often exactly match the requirements regarding parts, contaminants and charging methods (special-purpose construction).
Central cleaning equipment, often built as multi-task systems, is commonly used. These systems can suit different cleaning requirements. Typical examples are the wash stands or the small cleaning machines found in many industrial plants.

Cleaning equipment and procedure
First, one can differentiate among the following techniques (ordered from least to most technologically advanced):

Manual
Mechanical
Automatic
Robot-supported

The process may be performed in one step, which is especially true for manual cleaning, but typically it requires several steps. It is therefore not uncommon to find 10 to 20 steps in large plants, e.g. for the medical and optical industry. Such plants can be especially complex because non-cleaning steps, such as the application of corrosion protection layers or phosphating, may be integrated into them. Cleaning can also be simple: the cleaning processes are integrated into other processes, as is the case with electroplating or galvanising, where cleaning usually serves as a pre-treatment step. The following procedure is quite common:

Pre-cleaning
Main cleaning
Rinsing
Rinsing with deionised water
Rinsing with corrosion protection
Drying

Each of these steps may take place in its own bath, chamber or, in the case of spray cleaning, its own zone (line or multi-chamber equipment). Often, however, the steps share a single chamber into which the respective media are pumped (single-chamber plant). The cleaning medium plays an important role, as it removes the contaminants from the substrate. For liquid media, the following cleaners can be used: aqueous agents, semi-aqueous agents (an emulsion of solvents and water), hydrocarbon-based solvents, and halogenated solvents. The latter are usually referred to as chlorinated agents, but brominated and fluorinated substances can also be used. The traditionally used chlorinated agents TCE and PCE, which are hazardous, are now only applied in airtight plants, and modern volume shift systems limit any emissions. In the group of hydrocarbon-based solvents, there are some newly developed agents like fatty acid esters made of natural fats and oils, modified alcohols and dibasic esters. Aqueous cleaners are mostly a combination of various substances like alkaline builders, surfactants, and sequestering agents. With ferrous metal cleaning, rust inhibitors are added to the aqueous cleaner to prevent flash rusting after washing. Their use is on the rise, as their results have proven in most cases to be as good as or better than those of hydrocarbon cleaners. The waste generated is less hazardous, which reduces disposal costs. Aqueous cleaners have advantages as regards particle and polar contaminants but require higher inputs of mechanical and thermal energy to be effective, whereas solvents remove oils and greases more easily but carry health and environmental risks. In addition, most solvents are flammable, creating fire and explosion hazards. Nowadays, with proper industrial parts washer equipment, it is accepted that aqueous cleaners remove oil and grease as easily as solvents. Another approach uses solid cleaning media (blasting), which includes the CO2 dry ice process: for tougher requirements, pellets are used, while for more sensitive materials or components CO2 is applied in the form of snow. One drawback is the high energy consumption required to make dry ice. Last but not least, there are processes with no media at all, like vibration, laser, brushing and blow/exhaust systems.
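For illustration, such a step sequence can be captured as plain data. The sketch below, in Python, encodes the six-step procedure above; the media, temperatures, dwell times and agitation entries are hypothetical placeholders, not process recommendations.

# A minimal sketch of a multi-step cleaning line as data. Step names
# follow the procedure above; media, temperatures and dwell times are
# invented placeholder values.
from dataclasses import dataclass

@dataclass
class CleaningStep:
    name: str
    medium: str
    temperature_c: float   # bath temperature
    dwell_s: int           # time the parts spend in the step
    agitation: str         # mechanical impact, e.g. ultrasonic, spray

line = [
    CleaningStep("pre-cleaning", "aqueous alkaline", 60.0, 120, "spray"),
    CleaningStep("main cleaning", "aqueous alkaline", 70.0, 300, "ultrasonic"),
    CleaningStep("rinsing", "tap water", 40.0, 60, "immersion"),
    CleaningStep("rinsing", "deionised water", 40.0, 60, "immersion"),
    CleaningStep("corrosion protection", "rust inhibitor solution", 40.0, 30, "immersion"),
    CleaningStep("drying", "hot air", 90.0, 240, "convection"),
]

for step in line:
    print(f"{step.name:>20}: {step.medium}, {step.temperature_c} °C, "
          f"{step.dwell_s} s, {step.agitation}")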
All cleaning steps are characterized by their media and applied temperatures and by the individual agitation/application (mechanical impact). There is a wide range of different methods and combinations of these methods:

Blasting
Boiling under pressure
Carbon dioxide cleaning
Circulation of bath
Flooding
Gas or air injection into bath
Hydroson
Injection flooding
Megasonic, see megasonic cleaning
Movement of parts (turning, oscillating, pivoting)
Power wash process
Pressure flooding
Spraying
Sprinkling
Ultrasonic, see ultrasonic cleaning

Finally, every cleaning step is described by the time the parts to be cleaned spend in the respective zone, bath, or chamber, during which medium, temperature, and agitation can act on the contamination. Every item of cleaning equipment needs a so-called periphery. This term describes measures and equipment intended, on the one hand, to maintain and control the baths and, on the other, to protect human beings and the environment. In most plants, the cleaning agents are circulated until their cleaning power has eventually decreased and the maximum tolerable contaminant level has been reached. In order to delay the bath exchange as long as possible, sophisticated treatment attachments are used to remove contaminants and spent agents from the system. Fresh cleaning agents, or parts thereof, have to be replenished, which requires bath monitoring. The latter is increasingly carried out online and thus allows a computer-aided adjustment of the bath. With the help of oil separators, demulsifying agents and evaporators, aqueous processes can be conducted 'wastewater free'. A complete exchange of baths then becomes necessary only every 3 to 12 months. When using organic solvents, the preferred method to achieve a long operating bath life is distillation, an especially effective method to separate contaminants and agents. The periphery also includes measures to protect the workers, like encapsulation, automatic shutoff of power supply, automatic refill and sharpening of media (e.g., gas shuttle technique), explosion prevention measures, exhaust ventilation etc., as well as measures to protect the environment, e.g. capturing of volatile solvents, impounding basins, and extraction, treatment and disposal of resulting wastes. Solvent-based cleaning processes have the advantage that the dirt and the cleaning agent can be separated relatively easily, whereas in aqueous processes this separation is more complex. In processes without cleaning media, like laser ablation and vibration cleaning, only the removed dirt has to be disposed of, as there is no cleaning agent. Very little waste is generated in processes like CO2 blasting and automatic brush cleaning, at the expense of higher energy costs.

Quality requirements
A standardization of the quality requirements for cleaned surfaces with regard to the following process (e.g. coating, heat treatment) or from the point of view of technical functionality is difficult. However, it is possible to use general classifications. In Germany, an attempt was made to define cleaning as a subcategory of metal treatment (DIN 8592: cleaning as a subcategory of cutting processes), but this does not cope with all the complexities of cleaning. The rather general rules include the classification into intermediate cleaning, final cleaning, precision cleaning and critical cleaning (see table), in practice regarded only as a general guideline.
(1) Related to the total dirt; (2) related only to carbon

Thus, the rule of thumb is still followed, stating that the quality requirements are met if the subsequent process (see below) does not cause any problems; for example, a paint coating does not flake off before the guarantee period ends. Where this is not sufficient, especially in the case of external orders and because standards are missing, there are often specific customer requirements regarding remaining contamination, corrosion protection, spots, gloss level, etc. Measuring methods to ensure quality therefore do not play a major role in the workshops, although there is a broad range of different methods, from visual control over simple testing methods (water-break test, wipe test, measurement of contact angle, test inks, tape test, among others) to complex analysis methods (gravimetric test, particle counting, infrared spectroscopy, glow discharge spectroscopy, energy-dispersive X-ray analysis, scanning electron microscopy and electrochemical methods, among others). There are only a few methods which can be applied directly in the line and which offer reproducible and comparable results. It was not until recently that bigger advances in this area were made. The general situation has changed, meanwhile, because of dramatically rising cleanliness requirements for certain components in the automotive industry. For example, brake systems and fuel-injection systems need to be fitted with increasingly smaller diameters, and they have to withstand increasingly higher pressures. Therefore, even very minor particle contamination may lead to big problems. Because of the rising speed of innovation, the industry cannot afford to identify possible failures at a relatively late stage. Therefore, the standard VDA 19/ISO 16232, 'Road Vehicles – Cleanliness of Components of Fluid Circuits', was developed, which describes methods to verify compliance with the cleanliness requirements.

Subsequent process
When choosing cleaning techniques, cleaning agents and cleaning processes, the subsequent processes, i.e. the further processing of the cleaned parts, are of special interest. The classification basically follows metalworking theory:

Machining
Cutting
Joining
Coating
Heat treatment
Assembling
Measuring, testing
Repairing, maintenance

Over time, empirical values were established for how efficient the cleaning has to be to assure the subsequent processes for the particular guarantee period and beyond. Choosing the cleaning method often starts from here.

Challenges and trends
The details above illustrate how extremely complex this specific field is. Small variations in requirements can call for completely different processes. It is becoming more and more important to achieve the required cleanliness as cost-effectively as possible and with continuously minimized health and environmental risks, because cleaning has become of central importance for the supply chain in manufacturing. Companies usually rely on their suppliers, who—drawing on a large experience base—suggest suitable equipment and processes, which are then adapted to the detailed requirements in test stations at the supplier's premises. However, suppliers are limited to their own scope of technology.
To put practitioners in a position to consider all relevant possibilities meeting their requirements, some institutes have developed different tools:

SAGE: Unfortunately no longer in operation, this comprehensive expert system for parts cleaning and degreasing provided a graded list of relatively general solvent and process alternatives. Developed by the Surface Cleaning Program at the Research Triangle Institute, Raleigh, North Carolina, USA, in cooperation with the U.S. EPA (used to be available under: http://clean.rti.org/).
Cleantool: A 'best practice' database in seven languages with comprehensive and specific processes, recorded directly in companies. It furthermore contains an integrated evaluation tool, which covers the areas of technology, quality, health and safety at work, environmental protection and costs. Also included is a comprehensive glossary (seven languages, link see below).
Bauteilreinigung: A selection system for component cleaning developed by the University of Dortmund, which assists users in analyzing their cleaning tasks with regard to suitable cleaning processes and cleaning agents (German only, link see below).
TURI, Toxics Use Reduction Institute: A department of the University of Massachusetts Lowell (USA). TURI's laboratory has been conducting evaluations of alternative cleaning products since 1993. A majority of these products were designed for metal surface cleaning. The results are available online through the Institute's laboratory database.

See also
Acoustic cleaning
Brake cleaner
Parts washer
Solvent degreasing
Sonication
Ultrasonic cleaning
Vapor degreasing

References

Further reading
John B. Durkee: Management of Industrial Cleaning Technology and Processes. 2006, Elsevier, Oxford, United Kingdom.
Carole A. LeBlanc: The search for safer and greener chemical solvents in surface cleaning: a proposed tool to support environmental decision-making. 2001, Erasmus University Centre for Environmental Studies, Rotterdam, the Netherlands.
David S. Peterson: Practical guide to industrial metal cleaning. 1997, Hanser Gardner Publications, Cincinnati, Ohio, USA.
Barbara Kanegsberg (ed.): Handbook for critical cleaning. 2001, CRC Press, Boca Raton, Florida, USA.
Malcolm C. McLaughlin et al.: The aqueous cleaning handbook: a guide to critical-cleaning procedures, techniques, and validation. 2000, The Morris-Lee Publishing Group, Rosemont, New Jersey, USA.
Karen Thomas, John Laplante, Alan Buckley: Guidebook of part cleaning alternatives: making cleaning greener in Massachusetts. 1997, Toxics Use Reduction Institute, University of Massachusetts, Lowell, Massachusetts, USA.
ASM International: Choosing a cleaning process. 1996, ASM International, Materials Park, Ohio, USA.
ASM International: Guide to acid, alkaline, emulsion, and ultrasonic cleaning. 1997, ASM International, Materials Park, Ohio, USA.
ASM International: Guide to vapour degreasing and solvent cold cleaning. 1996, ASM International, Materials Park, Ohio, USA.
ASM International: Guide to mechanical cleaning systems. 1996, ASM International, Materials Park, Ohio, USA.
ASM International: Guide to pickling and descaling, and molten salt bath cleaning. 1996, ASM International, Materials Park, Ohio, USA.
Klaus-Peter Müller: Praktische Oberflächentechnik. Edition 2003.XII, vieweg, Braunschweig/Wiesbaden.
Thomas W. Jelinek: Reinigen und Entfetten in der Metallindustrie. 1. Edition 1999, Leuze Verlag, Saulgau.
Brigitte Haase: Wie sauber muß eine Oberfläche sein? In: Journal Oberflächentechnik. Nr. 4, 1997.
Brigitte Haase: Reinigen oder Vorbehandeln? Oberflächenzustand und Nitrierergebnis, Bauteilreinigung, Prozesskontrolle und -analytik. Hochschule Bremerhaven.
Bernd Künne: Online Fachbuch für industrielle Reinigung. In: bauteilreinigung.de. Universität Dortmund, Fachgebiet Maschinenelemente.
Reiner Grün: Reinigen und Vorbehandeln – Stand und Perspektiven. In: Galvanotechnik. 90, 1999, Nr. 7, S. 1836–1844.
Günter Kreisel et al.: Ganzheitliche Bilanzierung/Bewertung von Reinigungs-/Vorbehandlungstechnologien in der Oberflächenbehandlung. 1998, Jena, Institut für Technische Chemie der FSU.

Cleaning Industrial processes Metalworking
Parts cleaning
[ "Chemistry" ]
4,303
[ "Cleaning", "Surface science" ]
9,421,870
https://en.wikipedia.org/wiki/Diagonal%20relationship
In chemistry, a diagonal relationship is said to exist between certain pairs of diagonally adjacent elements in the second and third periods (first 20 elements) of the periodic table. These pairs (lithium (Li) and magnesium (Mg), beryllium (Be) and aluminium (Al), boron (B) and silicon (Si), etc.) exhibit similar properties; for example, boron and silicon are both semiconductors, forming halides that are hydrolysed in water and having acidic oxides. Further diagonal similarities have also been suggested for carbon-phosphorus and nitrogen-sulfur, along with extending the Li-Mg and Be-Al relationships down into the transition elements (such as scandium).

The organization of elements on the periodic table into horizontal rows and vertical columns makes certain relationships more apparent (periodic law). Moving rightward along a period and descending a group have opposite effects on the atomic radii of isolated atoms: moving rightward across a period decreases the atomic radii, while moving down a group increases them. Similarly, on moving rightward across a period, the elements become progressively more covalent, less basic and more electronegative, whereas on moving down a group the elements become more ionic, more basic and less electronegative. Thus, on both descending a group and crossing a period by one element, the changes "cancel" each other out, and elements with similar properties and similar chemistry are often found – the atomic radius, electronegativity, properties of compounds (and so forth) of the diagonal members are similar.

The reasons for the existence of diagonal relationships are not fully understood, but charge density is a factor. For example, Li+ is a small cation with a +1 charge and Mg2+ is somewhat larger with a +2 charge, so the ionic potential of each of the two ions is roughly the same. Examination shows that the charge density of lithium is much closer to that of magnesium than to those of the other alkali metals.

Using the Li–Mg pair (under room temperature and pressure):

When combined with oxygen under standard conditions, Li and Mg form only normal oxides, whereas Na forms a peroxide and the metals below Na, in addition, form superoxides.
Li is the only group 1 element which forms a stable nitride, Li3N. Mg, as well as the other group 2 elements, also forms a nitride.
Lithium carbonate, phosphate and fluoride are sparingly soluble in water. The corresponding group 2 salts are insoluble (consider lattice and solvation energies).
Both Li and Mg form covalent organometallic compounds. LiMe and MgMe2 (cf. Grignard reagents) are both valuable synthetic reagents. The other group 1 and group 2 analogues are ionic and extremely reactive (and hence difficult to manipulate).
Chlorides of both Li and Mg are deliquescent (absorb moisture from their surroundings) and soluble in alcohol and pyridine. Lithium chloride, like magnesium chloride (MgCl2·6H2O), separates out as a hydrated crystal, LiCl·2H2O.
Lithium carbonate and magnesium carbonate are both unstable and produce the corresponding oxides and carbon dioxide when heated.

References
Inorganic chemistry Periodic table
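The charge-density argument above can be made concrete with a rough calculation. The sketch below uses a simple z/r^3 charge-to-volume proxy with approximate Pauling ionic radii; the radii are assumptions, and other radius sets shift the absolute numbers without changing the qualitative trend that Li+ sits much closer to Mg2+ than to the heavier alkali-metal cations.

# Rough charge-density proxy (z / r^3) for cations in the Li-Mg
# diagonal relationship. Radii are approximate Pauling ionic radii
# in pm and should be treated as illustrative assumptions.
ions = {"Li+": (1, 60), "Na+": (1, 95), "K+": (1, 133), "Mg2+": (2, 65)}

for ion, (z, r) in ions.items():
    proxy = z / r**3 * 1e6  # scaled charge-to-volume ratio, arbitrary units
    print(f"{ion:>4}: z/r^3 = {proxy:.2f}")

# Output: Li+ 4.63, Na+ 1.17, K+ 0.43, Mg2+ 7.28 -- lithium's value
# lies closer to magnesium's than to those of the heavier alkali cations.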
Diagonal relationship
[ "Chemistry" ]
678
[ "Periodic table", "nan" ]
9,421,904
https://en.wikipedia.org/wiki/Stream%20thrust%20averaging
In fluid dynamics, stream thrust averaging is a process used to convert three-dimensional flow through a duct into one-dimensional uniform flow. It makes the assumptions that the flow is mixed adiabatically and without friction. However, due to the mixing process, there is a net increase in the entropy of the system. Although there is an increase in entropy, the stream thrust averaged values are more representative of the flow than a simple average, as a simple average would violate the second law of thermodynamics.

Equations for a perfect gas

Stream thrust:
$$F = \int \left( \rho \mathbf{V} \cdot d\mathbf{A} \right) \mathbf{V} + \int p \, dA.$$

Mass flow:
$$\dot m = \int \rho \mathbf{V} \cdot d\mathbf{A}.$$

Stagnation enthalpy:
$$H = \frac{1}{\dot m} \int \left( h + \frac{\left|\mathbf{V}\right|^2}{2} \right) \rho \mathbf{V} \cdot d\mathbf{A}.$$

For a perfect gas, combining these one-dimensional averages with the ideal gas law $\overline{p} = \overline{\rho} R \overline{T}$ and $\overline{h} = C_p \overline{T}$ gives a quadratic equation for the stream thrust averaged velocity $\overline{V}$:

$$\overline{V}^2 \left( 1 - \frac{R}{2 C_p} \right) - \overline{V} \, \frac{F}{\dot m} + \frac{H R}{C_p} = 0.$$

Solutions
Solving the quadratic for $\overline{V}$ yields two solutions. They must both be analyzed to determine which is the physical solution. One will usually be a subsonic root and the other a supersonic root. If it is not clear which value of velocity is correct, the second law of thermodynamics may be applied:

$$\Delta s = C_p \ln \frac{\overline{T}}{T_1} - R \ln \frac{\overline{p}}{p_1}.$$

The reference values $T_1$ and $p_1$ are unknown and may be dropped from the formulation. The value of entropy is not necessary, only that the value is positive. One possible unreal solution for the stream thrust averaged velocity yields a negative entropy. Another method of determining the proper solution is to take a simple average of the velocity and determine which value is closer to the stream thrust averaged velocity.

References
Equations of fluid dynamics Fluid dynamics
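As a numerical illustration, the quadratic above can be solved directly and both roots inspected. In the sketch below, the gas properties and the inputs F, mdot and H are invented example values for an air-like perfect gas, and the Mach number is computed only to label each root as subsonic or supersonic.

# Minimal sketch: solve the stream-thrust quadratic for the averaged
# velocity of a perfect gas and inspect both roots. The gas properties
# and the duct integrals (F, mdot, H) are hypothetical example values.
import math

R, Cp = 287.0, 1004.0                # J/(kg K), air-like perfect gas
gamma = Cp / (Cp - R)
F, mdot, H = 900.0, 1.0, 602_400.0   # N, kg/s, J/kg (assumed inputs)

a = 1.0 - R / (2.0 * Cp)
b = -F / mdot
c = H * R / Cp

disc = b * b - 4.0 * a * c
for V in ((-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a)):
    T = (H - 0.5 * V * V) / Cp           # static temperature from enthalpy
    mach = V / math.sqrt(gamma * R * T)  # label the root sub/supersonic
    print(f"V = {V:7.1f} m/s, T = {T:6.1f} K, Mach = {mach:4.2f}")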
Stream thrust averaging
[ "Physics", "Chemistry", "Engineering" ]
272
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Piping", "Fluid dynamics" ]
9,422,165
https://en.wikipedia.org/wiki/Rubroboletus%20pulcherrimus
Rubroboletus pulcherrimus, known as Boletus pulcherrimus until 2015, and commonly known as the red-pored bolete, is a species of mushroom in the family Boletaceae. It is a large bolete from western North America with distinguishing features that include a netted surface on the stem, a red to brown cap and stem color, and red pores that stain blue upon injury. Until 2005 this was the only bolete that had been implicated in the death of someone consuming it; a couple developed gastrointestinal symptoms in 1994 after eating this fungus, with the husband succumbing. Autopsy revealed infarction of the midgut.

Taxonomy
American mycologists Harry D. Thiers and Roy E. Halling were aware of confusion on the west coast of North America over red-pored boletes; two species were traditionally recognised—Boletus satanas and Boletus eastwoodiae. However, they strongly suspected the type specimen of the latter species was in fact the former. In reviewing the material, they published a new name for the taxon, which Thiers had written about in local guidebooks as B. eastwoodiae, as they felt that name to be invalid. Hence in 1976 they formally described Boletus pulcherrimus, from the Latin pulcherrimus, meaning "very pretty". It was transferred to the genus Rubroboletus in 2015, along with several other allied reddish-colored, blue-staining bolete species such as Rubroboletus eastwoodiae and Rubroboletus satanas.

Description
Colored various shades of olive- to reddish-brown, the cap may sometimes reach in diameter and is convex in shape before flattening at maturity. The cap surface may be smooth or velvety when young, but may be scaled in older specimens; the margin of the cap is curved inwards in young specimens but rolls out and flattens as it matures. The cap may reach a thickness of when mature. The adnate (attached squarely to the stem) poroid surface is bright red to dark red or red-brown and bruises dark blue or black; there are 2 to 3 pores per mm in young specimens, and in maturity they expand to about 1 or 2 per mm. In cross section, the tubes and flesh are yellow. The tubes are between long, while the angular pores are up to 1 mm in diameter; pores can range in color from dark red in young specimens to reddish brown in age. The pores will stain a blue color when cut or bruised. The solid, firm stem is long and thick—up to in diameter at the base before tapering to at the top. It is yellow or yellow-brown in color and bears a network of red reticulations on the upper 2/3 of its length. The spore print is olive-brown. The taste of the flesh is reportedly mild, and the odor indistinct, or "slightly fragrant".

Microscopic characters
The spores are spindle-shaped or elliptical, thick-walled, smooth, and have dimensions of 13–16 by 5.5–6.5 μm. The basidia, the spore-bearing cells, are club-shaped (clavate), attached to 1 to 4 spores, and have dimensions of 35–90 by 9–12 μm. The cystidia (sterile, non-spore-bearing cells found interspersed among the basidia) in the hymenium have dimensions of 33–60 by 8–12 μm. Clamp connections are absent in the hyphae of B. pulcherrimus.

Similar species
Although the relatively large fruiting bodies of R. pulcherrimus are distinctive, they might be confused with superficially similar species, such as Rubroboletus eastwoodiae; the latter species has a much thicker stalk. Another similar species is R. haematinus, which may be distinguished by its yellower stem and cap colors that are various shades of brown.
Its darker cap and lack of reticulation on the stipe differentiate it from R. satanas. Neoboletus luridiformis grows with oaks but is smaller and has a non-reticulate stipe.

Distribution and habitat
Rubroboletus pulcherrimus is found in western North America, from New Mexico and California to Washington, and may feasibly occur in British Columbia, Canada. One source notes it grows at low altitudes in the Cascade Range and Olympic Mountains; another claims it grows at high elevations, over . Fruiting in autumn, it grows singly or in groups (although another source claims "never in groups") in humus in mixed woodlands. In the original publication describing the species, Thiers and Halling note that it is associated with forests containing tanbark oaks (Lithocarpus densiflora), Douglas fir (Pseudotsuga menziesii), and grand fir (Abies grandis). Smith and Weber mention increased fruitings after warm heavy fall rains following a humid summer.

Toxicity
In general, blue-staining red-pored boletes should be avoided for consumption. Thiers warned this species may be toxic after being alerted to severe gastrointestinal symptoms in someone who had merely tasted it. Years later, in 1994, a couple developed gastrointestinal symptoms after eating this fungus and the husband died as a result. A subsequent autopsy revealed that the man had suffered an infarction of the midgut. Rubroboletus pulcherrimus was the only bolete that had been implicated in the death of someone consuming it. It is known to contain low levels of muscarine, a peripheral nervous system toxin. A 2005 report from Australia recorded a fatality from muscarinic syndrome after consuming a mushroom from the genus Rubinoboletus (but possibly a species of Chalciporus).

See also
List of deadly fungi
List of North American boletes

References
Fungi described in 1976 Poisonous fungi pulcherrimus Deadly fungi Fungi of North America Taxa named by Harry Delbert Thiers Fungus species Taxa named by Roy Halling
Rubroboletus pulcherrimus
[ "Biology", "Environmental_science" ]
1,272
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
9,422,278
https://en.wikipedia.org/wiki/Cochliobolus%20sativus
The fungus Cochliobolus sativus is the teleomorph (sexual stage) of Bipolaris sorokiniana (anamorph), which is the causal agent of a wide variety of cereal diseases. The pathogen can infect and cause disease on root (where it is known as common root rot), leaf, stem, and head tissue. C. sativus is extremely rare in nature, and thus it is the asexual or anamorphic stage which causes infections. The two most common diseases caused by B. sorokiniana are spot blotch and common root rot, mainly on wheat and barley crops.

Identification
The mycelium of B. sorokiniana is usually deep olive-brown. New cultures produce abundant simple conidiophores, which may be single or clustered and measure 6–10 × 110–220 μm, with septations. Conidia develop laterally from pores beneath each conidiophore septum. Conidia are olive-brown and ovate to oblong, with rounded ends and a prominent basal scar. They measure 15–28 × 40–120 μm and are 3- to 10-septate. Some may be slightly curved. Their walls are smooth and noticeably thickened at the septa.

The sexual state (C. sativus), when formed in culture, takes the form of black, globose pseudothecia 300–400 μm in diameter, with erect beaks 50–200 μm long. Asci are clavate and measure 20–35 × 150–250 μm. Ascospores are hyaline, uniformly filamentous, and spirally flexed within asci. They measure 5–10 × 200–250 μm and are 4- to 10-septate.

Host species
All host records are from the USDA ARS Fungal Database:

Agropyron cristatum, Allium sp., Alopecurus pratensis, Aneurolepidium chinense, Avena sativa
Bromus inermis, B. marginatus, B. willdenowii
Calluna vulgaris, Chloris gayana, Cicer arietinum, Clinelymus dahuricus, C. sibiricus, Cynodon dactylon, C. transvaalensis
Dactylis glomerata
Echinochloa crus-galli, Elymus junceus
Festuca sp.
Guzmania sp.
Hordeum brevisubulatum, H. distichon, H. sativum var. hexastichon, H. vulgare, H. vulgare var. hexastichon
Lablab purpureus, Linum usitatissimum, Lolium multiflorum
Pennisetum typhoides
Roegneria semicostata
Saccharum sp., Secale cereale, Setaria italica, Sorghum sp.
Taraxacum kok-saghyz, Trisetum aestivum, Triticum aestivum, T. secale, T. turgidum subsp. durum, T. vulgare
Zea mays

Geographical distribution
Cochliobolus sativus has a world-wide distribution (USDA ARS Fungal Database).

Main diseases
Common root rot (barley); common root rot (wheat); spot blotch (barley); spot blotch (wheat)

Spot blotch of wheat
This is the most important disease in non-traditional wheat-growing areas. B. sorokiniana, together with Pyrenophora tritici-repentis, causes losses of millions of tons of wheat each year. The symptoms are blotches as well as induced senescence due to premature chlorophyll loss (Rosyara et al., 2007).

References

External links
Index Fungorum
USDA ARS Fungal Database
Helminthosporium leaf blights: spot blotch and tan spot
Diagnosis of Common Root Rot of Wheat and Barley

Fungal plant pathogens and diseases Cereal diseases Barley diseases Wheat diseases Cochliobolus Fungi described in 1890 Fungus species
Cochliobolus sativus
[ "Biology" ]
894
[ "Fungi", "Fungus species" ]
9,422,319
https://en.wikipedia.org/wiki/2000s%20in%20science%20and%20technology
This article is a summary of the 2000s in science and technology.

Science
Using the Wilkinson Microwave Anisotropy Probe, scientists studying the universe measured its age at 13.77 billion years, "solidly supported" the conclusion that it has been expanding and cooling since the Big Bang, and calculated that it is composed of about 4.6% atoms, 24% dark matter, and 71% dark energy.
The Mars Exploration Rover (MER) Mission successfully reached the surface of Mars in 2004 and sent detailed data and images of the landscape there back to Earth. Although NASA originally planned a mission of only three months, the mission was tremendously successful in the long term: it continued until 2018, lasting nearly 25 times the projected length.
The Human Genome Project was completed in 2003.
The National Geographic Society and IBM funded The Genographic Project.
In 2002, Grigori Perelman posted the first of a series of eprints to the arXiv in which he proved the Poincaré conjecture.
2004 – The astrophysicist and radio astronomer Naomi McClure-Griffiths identified a new spiral arm of the Milky Way galaxy.
On 29 July 2005, the discovery of Eris, a Kuiper Belt object larger than Pluto, was announced. In August 2006 Pluto was "demoted" to a "dwarf planet" after being considered a planet for 76 years. Other "dwarf planets" in our solar system now include Ceres and Eris.
Space tourism/private spaceflight began with the American Dennis Tito, who paid Russia US$20 million for a week-long stay aboard the International Space Station.
The Voyager 1 spacecraft entered the heliosheath, marking its departure from the Solar System.
Scientists discovered water ice on the Moon in 2009.
AFIS and CODIS became the main forensic tools for fingerprint and genetic code investigation in the industrialized world and some developing countries.
Infraluciferin became the go-to luciferin for in vivo imaging, threatening to replace natural luciferin entirely.

Technology

Information technology
There was a huge jump in broadband internet usage globally - for example, broadband users constituted only 6% of U.S. internet users in June 2000, and one mid-decade study predicted 62% adoption by 2010. Yet, by February 2007, over 80% of US Internet users were connected via broadband, and broadband internet became almost a required standard for quality internet browsing. There were 77.4 million broadband subscribers in the US in December 2008, with 264 million broadband subscribers in the top 30 countries at that time.
There was a boom in music downloading and the use of data compression to quickly transfer music over the Internet, with a corresponding rise of portable digital audio players, typified by Apple's iPod, along with other MP3 players. Digital music sales rose, accounting for 6% of all music sales in 2005. Digital music options were integrated into other devices such as smartphones and the popular PlayStation Portable (PSP). By the latter half of the decade, generic MP3 players were starting to mimic the features of the extremely popular iPod and Zune.
As a result of the widespread popularity and social impact of Google Search, the word "google" came to be used as a verb.
Adobe Flash technology reached the point of being able to power video players. As a result, YouTube, a website which allows uploading and viewing videos, was created. YouTube's popularity grew explosively and it was acquired by Google.
Data storage prices continued to drop, going from approximately US$7 per GB in early 2000 to US$0.07 per GB in 2009.
Due to an increase in capacity, USB flash drives rapidly replaced Zip disks (by Iomega) and 3.5-inch floppy diskettes.
The first 2 TB hard drives were developed and began to be used.
Windows XP and Microsoft Office 2003 became ubiquitous in personal computer software, although their successors Windows Vista and, by the end of the decade, Windows 7, saw increasing market penetration.
Open-source and free software continued to be a notable but minority interest, with versions of the Linux kernel gaining in popularity, as well as the Mozilla Firefox web browser and the OpenOffice.org productivity suite.
Blogs, portals, intranets and wikis became common electronic dissemination methods for professionals, amateurs, and businesses to conduct knowledge management.
Wikipedia began and grew, becoming both the largest encyclopedia and the most widely read wiki in the world.
Wireless networks became ever more commonplace in homes, educational institutes and urban public spaces.
Peer-to-peer technology saw major use, for example in internet telephony (Skype) and file sharing.
The Internet became a major source of all types of media, from music to movies, thanks initially to peer-to-peer file-sharing programs such as Kazaa and LimeWire. The debate continued over the ethics of file sharing. Legal music download services such as iTunes and streaming services such as Spotify opened up new markets.
The video game industry's profits surpassed the movie industry's in 2004.
The US tech bubble burst for the most part in the early 2000s, and after three years of negative growth the technology market began its rebound in 2003.
Social networking websites like Myspace and Facebook and microblogging platforms like Twitter gained in popularity.
Smartboards in schools gained acceptance and were adopted rapidly during the middle years of the decade.
E-book readers using electronic paper technology were developed and enjoyed modest popularity.

Software development
The Agile Manifesto was launched, and agile project management approaches such as Scrum grew in popularity. However, due to factors such as inflexibility in procurement processes and a lack of expertise among civil servants, government computing projects continued to fail with regularity, notably in the United Kingdom.
A large number of software development and software testing jobs in rich nations were offshored to less wealthy countries such as India and Russia, mirroring a globalisation trend that had already occurred in physical manufacturing. There was also a trend of offshoring software development work to cities like Dubai and Singapore - where Western developers rubbed shoulders with other foreign workers - and "offshoring" within the EU (including nearshoring).

Video
Digital cameras became very popular due to rapid decreases in size and cost while photo resolution steadily increased. As a result, sales of film cameras diminished greatly and integration of cameras into mobile phones increased greatly; sexting by teenagers also became a controversial social issue, with teenagers - and even, in one case, a school administrator who investigated a sexting case - being arrested.
Graphics processing units (GPUs) and video cards became powerful enough to render ultra-high-resolution (e.g. 2560 × 1600) scenes in real time with substantial detail and texture.
Flat panel displays began displacing cathode ray tubes.
This was a dramatic change during the decade: very few flat panels were sold through the mid-2000s, yet the majority of stores were selling only flat-panel TVs by the end of the decade.
Handheld projectors entered the market and were then integrated into cellphones.
The digital switchover started to be enforced for television.
The introduction of digital video recorders (DVRs) allowed consumers to modify the content they watch on TV and to record TV programs and watch them later. This caused problems for broadcasters, as consumers could fast-forward through commercials, rendering them useless, and save TV shows for later viewing, causing a decline in live TV viewing. However, these problems were already present with video tapes.
Internet usage surpassed TV viewing in 2004.
Satellite TV and cable TV (with the exception of digital cable) lost ratings as network television ratings gradually increased.
TV networks started streaming shows online.
There was an increase in usage of online DVD rental services such as Netflix.
DVDs, and subsequently Blu-ray Discs, replaced VCR technology as the common standard in homes and at video stores, although inexpensive VCRs and videocassettes could still be found at some thrift stores and discount stores.

Vehicles and energy
There were major advances in hybrid vehicles such as the Toyota Prius, Ford Escape, and Honda Insight.
Many more computers and other technologies were incorporated into vehicles, such as xenon HID headlights, GPS, DVD players, self-diagnosing systems, advanced pre-collision safety systems, memory systems for car settings, back-up sensors and cameras, in-car media systems, MP3 player compatibility, USB drive compatibility, self-parking systems, keyless start and entry, satellite radio, voice activation, cellphone connectivity, adaptive headlights, HUDs (head-up displays), infrared cameras, and OnStar (on GM models).
There was greater interest in future energy development due to global warming and the potential scenario of peak oil, even though these problems had been known about for decades.
Photovoltaics increased in popularity and decreased in cost as a result of increased public interest and generous public subsidies.

Communications
The popularity of mobile phones and text messaging surged during the decade in the Western world. The advent of text messaging made possible new forms of interaction that were not possible before, resulting in numerous boons such as the ability to receive information on the move. Nevertheless, it also led to negative social implications such as text "bullying" and a rise in traffic collisions caused by drivers distracted by texting while driving.
Due to the major success of broadband Internet connections, Voice over IP (VoIP) began to gain popularity as a replacement for traditional telephone lines. Major telecommunications carriers began converting their networks from TDM to VoIP.
Unusually for a development heralded by science fiction, videophones were cheap and abundant, yet even by mid-decade they had not received much attention, perhaps due to the high cost of video calls relative to ordinary calls.
Mobile phones adopted features such as Internet access, PDA functions, the ability to run software applications, video calling, cameras and video recording, and music and video playback as standard. Higher-end smartphones continued to offer extra features such as GPS and wireless connectivity.
Due to improvements in mobile phone displays and memories, most mobile phone carriers offered video viewing services and internet services, and some offered full music downloads (such as Sprint in 2005); use of Bluetooth also became more common. This led to a virtual saturation of cell phone ownership among the public in the developed world, increasing the use of mobile phones as everyday carry items, and a sharp decline in the use and numbers of payphones.

Robotics
As in previous decades, robotics continued to develop, especially telerobotics in medicine, particularly for surgery.
Home automation and home robotics advanced in North America; iRobot's "Roomba" was the most successful domestic robot, selling 1.5 million units. (Others of interest included the Robomower and, as of May 2006, the Scooba.)
In 2005 a robotic vehicle completed the DARPA Grand Challenge for the first time, becoming the first vehicle able to navigate itself with no external interference.
Humanoid robots and robot kits improved considerably, to the point of retailing as toys, as typified by RoboSapien and Lego Mindstorms respectively.

Space technology
GPS (Global Positioning System) became very popular, especially for the tracking of items or people and for use in cars. Games that utilize the system, such as geocaching, emerged and developed a niche following.
The Space Shuttle Columbia disaster occurred in February 2003.
SpaceShipOne made the first privately funded human spaceflight on June 21, 2004.

Healthcare
Corrective eye surgery became popular as costs and potential risks decreased and results further improved.
244 new drugs were approved by the U.S. Food and Drug Administration.

General retail
RFID (radio-frequency identification) became widely used by retail giants such as Wal-Mart as a way to track items and to automate stocking and inventory tracking.
Self-serve kiosks became very widely available and were used for all kinds of shopping, airplane boarding passes, hotel check-ins, fast food, banking, and car rental.
ATMs became nearly universal in much of the First World and very common even in poorer countries and their rural areas.

See also
2000 in science
2001 in science
2002 in science
2003 in science
2004 in science
2005 in science
2006 in science
2007 in science
2008 in science
2009 in science
History of science and technology
List of science and technology articles by continent
List of years in science
2010s in science and technology

References
Science and technology by decade 2000s-related lists 2000s decade overviews
2000s in science and technology
[ "Technology" ]
2,539
[ "Science and technology by decade" ]
9,422,452
https://en.wikipedia.org/wiki/Galactic%20tide
A galactic tide is a tidal force experienced by objects subject to the gravitational field of a galaxy such as the Milky Way. Particular areas of interest concerning galactic tides include galactic collisions, the disruption of dwarf or satellite galaxies, and the Milky Way's tidal effect on the Oort cloud of the Solar System.

Effects on external galaxies

Galaxy collisions
Tidal forces are dependent on the gradient of a gravitational field, rather than its strength, and so tidal effects are usually limited to the immediate surroundings of a galaxy. Two large galaxies undergoing collisions or passing near each other will be subjected to very large tidal forces, often producing the most visually striking demonstrations of galactic tides in action.

Two interacting galaxies will rarely (if ever) collide head-on, and the tidal forces will distort each galaxy along an axis pointing roughly towards and away from its perturber. As the two galaxies briefly orbit each other, these distorted regions, pulled away from the main body of each galaxy, will be sheared by the galaxy's differential rotation and flung off into intergalactic space, forming tidal tails. Such tails are typically strongly curved; if a tail appears straight, it is probably being viewed edge-on. The stars and gas that comprise the tails will have been pulled from the easily distorted galactic discs (or other extremities) of one or both bodies, rather than from the gravitationally bound galactic centers. Two very prominent examples of collisions producing tidal tails are the Mice Galaxies and the Antennae Galaxies.

Just as the Moon raises two water tides on opposite sides of the Earth, so a galactic tide produces two arms in its galactic companion. While a large tail is formed if the perturbed galaxy is equal to or less massive than its partner, if it is significantly more massive than the perturbing galaxy, then the trailing arm will be relatively minor and the leading arm, sometimes called a bridge, will be more prominent. Tidal bridges are typically harder to distinguish than tidal tails: in the first instance, the bridge may be absorbed by the passing galaxy or the resulting merged galaxy, making it visible for a shorter duration than a typical large tail. Secondly, if one of the two galaxies is in the foreground, then the second galaxy — and the bridge between them — may be partially obscured. Together, these effects can make it hard to see where one galaxy ends and the next begins. Tidal loops, where a tail joins with its parent galaxy at both ends, are rarer still.

Satellite interactions
Because tidal effects are strongest in the immediate vicinity of a galaxy, satellite galaxies are particularly likely to be affected. Such an external force upon a satellite can produce ordered motions within it, leading to large-scale observable effects: the interior structure and motions of a dwarf satellite galaxy may be severely affected by a galactic tide, inducing rotation (as with the tides of the Earth's oceans) or an anomalous mass-to-luminosity ratio. Satellite galaxies can also be subjected to the same tidal stripping that occurs in galactic collisions, where stars and gas are torn from the extremities of a galaxy, possibly to be absorbed by its companion.
The dwarf galaxy M32, a satellite galaxy of Andromeda, may have lost its spiral arms to tidal stripping, while a high star formation rate in the remaining core may be the result of tidally induced motions of the remaining molecular clouds (because tidal forces can knead and compress the interstellar gas clouds inside galaxies, they induce large amounts of star formation in small satellites).

The stripping mechanism is the same as between two comparable galaxies, although the satellite's comparatively weak gravitational field ensures that only the satellite, not the host galaxy, is affected. If the satellite is very small compared to the host, the tidal debris tails produced are likely to be symmetric and to follow a very similar orbit, effectively tracing the satellite's path. However, if the satellite is reasonably large—typically over one ten-thousandth the mass of its host—then the satellite's own gravity may affect the tails, breaking the symmetry and accelerating the tails in different directions. The resulting structure is dependent on both the mass and orbit of the satellite and the mass and structure of the conjectured galactic halo around the host, and may provide a means of probing the dark matter potential of a galaxy such as the Milky Way.

Over many orbits of its parent galaxy, or if its orbit passes too close to the parent, a dwarf satellite may eventually be completely disrupted, forming a tidal stream of stars and gas wrapping around the larger body. It has been suggested that the extended discs of gas and stars around some galaxies, such as Andromeda, may be the result of the complete tidal disruption (and subsequent merger with the parent galaxy) of a dwarf satellite galaxy.

Effects on bodies within a galaxy
Tidal effects are also present within a galaxy, where their gradients are likely to be steepest. This can have consequences for the formation of stars and planetary systems. Typically, a star's gravity will dominate within its own system, with only the passage of other stars substantially affecting its dynamics. However, at the outer reaches of the system, the star's gravity is weak and galactic tides may be significant. In the Solar System, the theoretical Oort cloud, source of most long-period comets, lies in this transitional region.

The Oort cloud is a vast shell surrounding the Solar System, possibly over a light-year in radius. Across such a vast distance, the gradient of the Milky Way's gravitational field plays a far more noticeable role. Because of this gradient, galactic tides may deform an otherwise spherical Oort cloud, stretching the cloud in the direction of the galactic centre and compressing it along the other two axes, just as the Earth distends in response to the gravity of the Moon.

The Sun's gravity is sufficiently weak at such a distance that these small galactic perturbations are enough to dislodge some planetesimals from such distant orbits, sending them towards the Sun and planets by significantly reducing their perihelia. Such bodies, composed of a mixture of rock and ice, become comets when subjected to the increased solar radiation present in the inner Solar System.

It has been suggested that the galactic tide may also contribute to the formation of an Oort cloud, by increasing the perihelia of planetesimals with large aphelia. This shows that the effects of the galactic tide are quite complex and depend heavily on the behaviour of individual objects within a planetary system.
However, cumulatively, the effect can be quite significant; up to 90% of all comets originating from an Oort cloud may be the result of the galactic tide. See also Oort cloud Roche limit Satellite galaxy Dwarf galaxy Interacting galaxy Tidal force References Extragalactic astronomy Oort cloud Tides
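To make the Oort-cloud discussion above concrete, a rough order-of-magnitude script can compare the Sun's pull on a distant comet with a crude galactic tidal term. Modelling the Galaxy as a point mass is a strong simplification, and all input numbers are approximate assumptions, so the result should be read only as an illustration of why the tide matters at such distances.

# Order-of-magnitude sketch: solar gravity vs. a crude galactic tidal
# acceleration at Oort-cloud distances. The Galaxy is approximated as
# a point mass, so treat the result as illustrative only.
G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_GAL = 1.0e11 * M_SUN   # assumed effective galactic mass, kg
R_GAL = 2.47e20          # ~8 kpc, Sun's galactocentric distance, m
AU = 1.496e11            # m

r = 1.0e5 * AU                          # comet at 100,000 AU
g_sun = G * M_SUN / r**2                # Sun's pull on the comet
g_tide = 2 * G * M_GAL * r / R_GAL**3   # point-mass tidal gradient

print(f"solar gravity:  {g_sun:.1e} m/s^2")
print(f"galactic tide:  {g_tide:.1e} m/s^2")
print(f"ratio tide/sun: {g_tide / g_sun:.2f}")  # a few per cent here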
Galactic tide
[ "Astronomy" ]
1,383
[ "Astronomical hypotheses", "Oort cloud", "Extragalactic astronomy", "Astronomical sub-disciplines" ]
9,423,860
https://en.wikipedia.org/wiki/Theory-theory
The theory-theory (or theory theory) is a scientific theory relating to the human development of understanding about the outside world. This theory asserts that individuals hold a basic or 'naïve' theory of psychology ("folk psychology") to infer the mental states of others, such as their beliefs, desires or emotions. This information is used to understand the intentions behind that person's actions or to predict future behavior. The term 'perspective taking' is sometimes used to describe how one makes inferences about another person's inner state using theoretical knowledge about the other's situation.

This approach has become popular with psychologists as it gives a basis from which to explore human social understanding. Beginning in the mid-1980s, several influential developmental psychologists began advocating the theory theory: the view that humans learn through a process of theory revision closely resembling the way scientists propose and revise theories. Children observe the world and, in doing so, gather data about the world's true structure. As more data accumulate, children can revise their naive theories accordingly. Children can also use these theories about the world's causal structure to make predictions, and possibly even test them out. This concept is described as the 'child scientist' theory, proposing that a series of personal scientific revolutions are required for the development of theories about the outside world, including the social world. In recent years, proponents of Bayesian learning have begun describing the theory theory in a precise, mathematical way. The concept of Bayesian learning is rooted in the assumption that children and adults learn through a process of theory revision; that is, they hold prior beliefs about the world but, when receiving conflicting data, may revise these beliefs depending upon their strength.

Child development
Theory-theory states that children naturally attempt to construct theories to explain their observations. As all humans do, children seek explanations that help them understand their surroundings. They learn through their own experiences as well as through their observations of others' actions and behaviors. Through their growth and development, children continue to form intuitive theories, revising and altering them as they come across new results and observations. Several developmentalists have researched the progression of these theories, mapping out when children start to form theories about certain subjects, such as the biological and physical world, social behaviors, and others' thoughts and minds ("theory of mind"), although controversy remains over when these shifts in theory formation occur.

As part of their investigative process, children often ask questions, frequently posing "Why?" to adults, not seeking a technical and scientific explanation but instead seeking to investigate the relation of the concept in question to themselves, as part of their egocentric view. In a study in which Mexican-American mothers were interviewed over a two-week period about the types of questions their preschool children ask, researchers discovered that the children asked their parents more about biology and social behaviors than about nonliving objects and artifacts. In their questions, the children were mostly ambiguous; it was unclear whether they desired an explanation of purpose or of cause.
Although parents will usually answer with a causal explanation, some children find the answers and explanations inadequate for their understanding and, as a result, begin to create their own theories; this is particularly evident in children's understanding of religion.

This theory also plays a part in Vygotsky's social learning theory, also called modeling. Vygotsky claims that humans, as social beings, learn and develop by observing others' behavior and imitating them. In this process of social learning, prior to imitation, children will first pose inquiries and investigate why adults act and behave in a particular way. Afterwards, if the adult succeeds at the task, the child will likely copy the adult, but if the adult fails, the child will choose not to follow the example.

Comparison with other theories

Theory of mind (ToM)
Theory-theory is closely related to theory of mind (ToM), which concerns the mental states of people, but differs from ToM in that the full scope of theory-theory also concerns mechanical devices and other objects, beyond just thinking about people and their viewpoints.

Simulation theory
In the scientific debate on mind reading, theory-theory is often contrasted with simulation theory, an alternative theory which suggests that simulation, or cognitive empathy, is integral to our understanding of others.

References
Cognitive psychology Child development
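The Bayesian reading of theory revision discussed above can be illustrated with a toy calculation: a learner weighs two candidate "theories" and updates belief in each as an observation arrives. The hypotheses, priors and likelihoods below are invented purely for illustration.

# Toy Bayesian theory revision: a learner weighs two candidate theories
# of why a toy lights up, updating beliefs after one observation.
# Hypotheses, priors and likelihoods are invented example values.
priors = {"button causes light": 0.5, "light is random": 0.5}

# P(observation | hypothesis) for the observation
# "the light turned on right after the button was pressed":
likelihood = {"button causes light": 0.9, "light is random": 0.2}

posterior = {h: likelihood[h] * p for h, p in priors.items()}
total = sum(posterior.values())  # normalize per Bayes' rule

for hypothesis in posterior:
    posterior[hypothesis] /= total
    print(f"P({hypothesis} | evidence) = {posterior[hypothesis]:.2f}")
# Prints 0.82 for the causal theory and 0.18 for the random theory:
# the conflicting-free evidence strengthens the causal belief.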
Theory-theory
[ "Biology" ]
869
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
16,090,662
https://en.wikipedia.org/wiki/K-casein
Κ-casein, or kappa casein, is a mammalian milk protein involved in several important physiological processes. Chymosin (found in rennet) splits κ-casein into an insoluble peptide (para-kappa-casein) and a water-soluble glycomacropeptide (GMP). GMP is responsible for an increased efficiency of digestion, prevention of neonate hypersensitivity to ingested proteins, and inhibition of gastric pathogens. The human gene for κ-casein is CSN3.

Structure
Caseins are a family of phosphoproteins (αS1, αS2, β, κ) that account for nearly 80% of bovine milk proteins and that form soluble aggregates known as "casein micelles", in which κ-casein molecules stabilize the structure. There are several models that account for the spatial conformation of casein in the micelles. One of them proposes that the micellar nucleus is formed by several submicelles, with the periphery consisting of microvillosities of κ-casein. Another model suggests that the nucleus is formed by casein-interlinked fibrils. Finally, the most recent model proposes a double link among the caseins for gelling to take place. All three models consider micelles as colloidal particles formed by casein aggregates wrapped up in soluble κ-casein molecules.

Milk clotting
Chymosin (EC 3.4.23.4) is an aspartic protease that specifically hydrolyzes the Phe105-Met106 peptide bond of κ-casein and is considered to be the most efficient protease for the cheesemaking industry. However, there are milk-clotting proteases able to cleave other peptide bonds in the κ-casein chain, such as the endothiapepsin produced by Endothia parasitica. There are also several milk-clotting proteases that, while able to cleave the Phe105-Met106 bond in the κ-casein molecule, also cleave other peptide bonds in other caseins, such as those produced by Cynara cardunculus or even bovine chymosin. This allows the manufacture of different cheeses with a variety of rheological and organoleptic properties.

The milk-clotting process consists of three main phases:

Enzymatic degradation of κ-casein
Micellar flocculation
Gel formation

Each step follows a different kinetic pattern, the limiting step in milk clotting being the degradation rate of κ-casein. The kinetic pattern of the second step of the milk-clotting process is influenced by the cooperative nature of micellar flocculation, whereas the rheological properties of the gel formed depend on the type of action of the proteases, the type of milk, and the patterns of casein proteolysis. The overall process is influenced by several different factors, such as pH or temperature.

The conventional way of quantifying a given milk-clotting enzyme employs milk as the substrate and determines the time elapsed before the appearance of milk clots. However, milk clotting may take place without the participation of enzymes because of variations in physicochemical factors, such as low pH or high temperature. Consequently, this may lead to confusing and irreproducible results, particularly when the enzymes have low activity. At the same time, the classical method is not specific enough in terms of setting the precise onset of milk gelation, such that the determination of the enzymatic units involved becomes difficult and unclear. Furthermore, although it has been reported that κ-casein hydrolysis follows typical Michaelis–Menten kinetics, the kinetic parameters are difficult to determine with the classic milk-clotting assay.
To overcome this, several alternative methods have been proposed, such as the determination of halo diameter in agar-gelified milk, colorimetric measurement, or determination of the rate of degradation of casein previously labeled with either a radioactive tracer or a fluorochrome compound. All these methods use casein as the substrate to quantify proteolytic or milk-clotting activities. FTC-Κ-casein assay Κ-casein is labeled with the fluorochrome fluorescein isothiocyanate (FITC) to yield the fluorescein thiocarbamoyl (FTC) derivative. This substrate is used to determine the milk-clotting activity of proteases. The FTC-κ-casein method affords accurate and precise determinations of κ-caseinolytic degradation, the first step in the milk-clotting process. This method is the result of a modification to the one described by S.S. Twining (1984). The main modification was replacing the substrate previously used (casein) with κ-casein labeled with the fluorochrome fluorescein isothiocyanate (FITC) to yield the fluorescein thiocarbamoyl (FTC) derivative. This variation allows quantification of the κ-casein molecules degraded in a more precise and specific way, detecting only those enzymes able to degrade such molecules. The method described by Twining (1984), however, was designed to detect the proteolytic activity of a considerably larger variety of enzymes. FTC-κ-casein allows the detection of different types of proteases at levels where no milk clotting is yet apparent, demonstrating its higher sensitivity over currently used assay procedures. Therefore, the method may find application as an indicator during the purification or characterization of new milk-clotting enzymes. Notes References External links InterPro: IPR000117 Kappa casein Fluorescein Thiocarbamoyl-Kappa-Casein Assay for the Specific Testing of Milk-Clotting Proteases Biotechnology and Microbiology Proteins Laboratory techniques Biochemistry
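The article notes above that κ-casein hydrolysis has been reported to follow Michaelis–Menten kinetics. A minimal sketch of that rate law follows; the parameter values are assumptions chosen only for illustration, not measured constants for chymosin or κ-casein:

```python
def michaelis_menten_rate(s, v_max, k_m):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return v_max * s / (k_m + s)

# Hypothetical parameters, chosen only to illustrate the curve's shape
v_max = 1.0   # maximal hydrolysis rate, arbitrary units
k_m = 0.5     # substrate concentration at half-maximal rate, mM

for s in (0.1, 0.5, 2.0, 10.0):  # kappa-casein concentrations, mM
    v = michaelis_menten_rate(s, v_max, k_m)
    print(f"[S] = {s:4.1f} mM -> v = {v:.3f}")
```

At [S] = Km the rate is exactly half of Vmax, which is why saturating substrate concentrations are needed when estimating Vmax from assay data.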
K-casein
[ "Chemistry", "Biology" ]
1,289
[ "Biomolecules by chemical classification", "nan", "Molecular biology", "Biochemistry", "Proteins" ]
16,090,759
https://en.wikipedia.org/wiki/Chemical%20similarity
Chemical similarity (or molecular similarity) refers to the similarity of chemical elements, molecules or chemical compounds with respect to either structural or functional qualities, i.e. the effect that the chemical compound has on reaction partners in inorganic or biological settings. Biological effects and thus also similarity of effects are usually quantified using the biological activity of a compound. In general terms, function can be related to the chemical activity of compounds (among others). The notion of chemical similarity (or molecular similarity) is one of the most important concepts in cheminformatics. It plays an important role in modern approaches to predicting the properties of chemical compounds, designing chemicals with a predefined set of properties and, especially, in conducting drug design studies by screening large databases containing structures of available (or potentially available) chemicals. These studies are based on the similar property principle of Johnson and Maggiora, which states: similar compounds have similar properties. Similarity measures Chemical similarity is often described as an inverse of a measure of distance in descriptor space. Examples of inverse distance measures are molecule kernels, which measure the structural similarity of chemical compounds. Similarity search and virtual screening Similarity-based virtual screening (a kind of ligand-based virtual screening) assumes that all compounds in a database that are similar to a query compound have similar biological activity. Although this hypothesis is not always valid, quite often the set of retrieved compounds is considerably enriched with actives. To achieve high efficacy of similarity-based screening of databases containing millions of compounds, molecular structures are usually represented by molecular screens (structural keys) or by fixed-size or variable-size molecular fingerprints. Molecular screens and fingerprints can contain both 2D- and 3D-information. However, the 2D-fingerprints, which are a kind of binary fragment descriptors, dominate in this area. Fragment-based structural keys, like MDL keys, are sufficiently good for handling small and medium-sized chemical databases, whereas processing of large databases is performed with fingerprints having much higher information density. Fragment-based Daylight, BCI, and UNITY 2D (Tripos) fingerprints are the best known examples. The most popular similarity measure for comparing chemical structures represented by means of fingerprints is the Tanimoto (or Jaccard) coefficient T. Two structures are usually considered similar if T > 0.85 (for Daylight fingerprints). However, it is a common misunderstanding that a similarity of T > 0.85 reflects similar bioactivities in general ("the 0.85 myth"). Chemical similarity network The concept of chemical similarity can be expanded to consider chemical similarity network theory, where descriptive network properties and graph theory can be applied to analyze large chemical space, estimate chemical diversity and predict drug targets. Recently, 3D chemical similarity networks based on 3D ligand conformation have also been developed, which can be used to identify scaffold-hopping ligands. See also Me-too compound Drug design Substructure search Ternary compound References External links Small Molecule Subgraph Detector (SMSD)— a Java-based software library for calculating Maximum Common Subgraph (MCS) between small molecules. This enables us to find similarity/distance between molecules. 
MCS is also used for screening drug-like compounds by finding molecules that share a common subgraph (substructure). Kernel-based Similarity for Clustering, regression and QSAR Modeling Brutus— a similarity analysis tool based on molecular interaction fields. Cheminformatics Drug discovery Chemistry
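To make the Tanimoto coefficient discussed above concrete, here is a minimal sketch for binary fingerprints represented as sets of "on" bit positions; the two toy fingerprints are invented for illustration and do not encode real molecules:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient: T = |A intersect B| / |A union B|."""
    a, b = set(fp_a), set(fp_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Toy fingerprints: hypothetical bit positions, not real structural keys
mol_1 = {3, 17, 42, 128, 256, 511}
mol_2 = {3, 17, 42, 128, 256, 700}

t = tanimoto(mol_1, mol_2)
print(f"T = {t:.2f}")                        # 5 shared bits / 7 total -> 0.71
print("similar by the 0.85 heuristic:", t > 0.85)
```

As the article cautions, exceeding the 0.85 threshold for Daylight-style fingerprints is a rough heuristic, not a guarantee of similar bioactivity.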
Chemical similarity
[ "Chemistry", "Biology" ]
700
[ "Life sciences industry", "Drug discovery", "Computational chemistry", "Cheminformatics", "Medicinal chemistry", "nan" ]
16,090,917
https://en.wikipedia.org/wiki/European%20Technology%20Platform%20Nanomedicine
The European Technology Platform on Nanomedicine (ETP Nanomedicine) is a European Technology Platform initiative to improve the competitive situation of the European Union in the field of nanomedicine, the application of nanotechnology to medicine. Overview An important initiative, led by industry, has been set up together with the European Commission. A group of 53 European stakeholders, composed of industrial and academic experts, has established a European Technology Platform on nanomedicine. The first task of this high-level group was to write a vision document for this highly future-oriented area of nanotechnology-based healthcare, in which experts describe an extrapolation of needs and possibilities until 2020. At the beginning of 2006 the Platform was opened to wider participation (December 2006: 150 member organisations) and delivered a so-called Strategic Research Agenda presenting a well-elaborated common European way of working together on the healthcare of the future, aiming to match the high expectations that nanomedicine has raised so far. Policy Objectives Establish a clear strategic vision in the area resulting in a Strategic Research Agenda. Decrease fragmentation in nano-medical research. Mobilise additional public and private investment. Identify priority areas. Boost innovation in nanobiotechnologies for medical use. Topics Three key priorities have been confirmed by the stakeholders: Nanotechnology-based diagnostics including imaging. Targeted drug delivery and release. Regenerative medicine. Dissemination of knowledge, regulatory and IPR issues, standardisation, ethical, safety, environmental and toxicity concerns, as well as public perception in general and the input from other stakeholders like insurance companies or patient organisations, play an important role. See also European Technology Platform Joint Technology Initiative References Vision document Strategic Research Agenda CERTH European Technology Platform Nanomedicine Hyperion European Technology Platform Nanomedicine External links European Technology Platform on Nanomedicine European Union and science and technology Information technology organizations based in Europe Science and technology in Europe
European Technology Platform Nanomedicine
[ "Materials_science" ]
388
[ "Nanomedicine", "Nanotechnology" ]
16,091,265
https://en.wikipedia.org/wiki/HomeLink%20Wireless%20Control%20System
The HomeLink Wireless Control System is a radio frequency (RF) transmitter integrated into some automobiles that can be programmed to activate devices such as garage door openers, RF-controlled lighting, gates and locks, including those with rolling codes. The system typically features three buttons, most often found on the driver-side visor or on the overhead console, which can be programmed via a training sequence to replace existing remote controls. It is compatible with most RF-controlled garage door openers, as well as home automation systems such as those based on the X10 protocol. HomeLink is compatible with radio frequency devices operating between 288 and 433 MHz; select 2007 and newer vehicles are compatible up to 433 MHz. History HomeLink won the Automotive News PACE Award in 1997 for supplying automotive technology to improve consumer interaction between the car and the home. By 2003, it had been installed on over 20,000,000 automobiles. Originally supplied by Johnson Controls, the HomeLink product line was sold to Gentex in 2013. References External links Automotive accessories Home automation Garage door openers
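Since the article mentions compatibility with rolling-code openers, the following toy sketch illustrates the general rolling-code idea (a synchronized counter with a look-ahead acceptance window). It is a generic illustration, not HomeLink's or any opener's actual protocol, and the window size is an arbitrary assumption:

```python
WINDOW = 16  # how far ahead of the receiver a transmitted counter may be

def accept(tx_counter, rx_counter):
    """Accept a code only if its counter is ahead of ours, within the window.
    Replays (counter <= ours) and wildly out-of-sync codes are rejected."""
    delta = tx_counter - rx_counter
    return 0 < delta <= WINDOW

rx = 100
for tx in (101, 105, 100, 300):  # the replayed (100) and far-future (300) codes fail
    ok = accept(tx, rx)
    print(f"tx counter {tx}: {'accepted' if ok else 'rejected'}")
    if ok:
        rx = tx  # resynchronize the receiver on success
```

Real rolling-code systems typically also encrypt the counter, so that observed transmissions cannot be extrapolated to future codes.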
HomeLink Wireless Control System
[ "Technology" ]
218
[ "Home automation", "Garage door openers" ]
16,091,266
https://en.wikipedia.org/wiki/WR%20104
WR 104 is a triple star system located several thousand light-years from Earth. The primary star is a Wolf–Rayet star (abbreviated as WR), which has a B0.5 main sequence star in close orbit and another more distant fainter companion. The WR star is surrounded by a distinctive spiral Wolf–Rayet nebula, often referred to as a pinwheel nebula. The rotational axis of the binary system, and likely of the two closest stars, is directed approximately towards Earth. Within the next few hundred thousand years, the Wolf–Rayet star is predicted to experience a core-collapse supernova with a small chance of producing a long-duration gamma-ray burst. The possibility of a supernova explosion from WR 104 having destructive consequences for life on Earth stirred interest in the mass media, and several popular science articles have been issued in the press since 2008. Some articles reject the catastrophic scenario, while others leave it as an open question. System The Wolf–Rayet star that produces the characteristic emission line spectrum of WR 104 has a resolved companion and an unresolved spectroscopic companion, forming a triple system. The spectroscopic pair consists of the Wolf–Rayet star and a B0.5 main sequence star. The WR star is visually 0.3 magnitudes fainter than the main sequence star, although the WR star is typically considered the primary, as it dominates the appearance of the spectrum and is more luminous. The two are in a nearly circular orbit separated by about 2 AU, which would be about one milli-arcsecond at the assumed distance. The two stars orbit every 241.5 days with a small inclination (i.e. nearly face-on). The visually resolved companion is 1.5 magnitudes fainter than the combined spectroscopic pair and almost one arc-second away. It is thought to be physically associated, although orbital motion has not been observed. From the colour and brightness, it is expected to be a hot main sequence star. Structure The rotational axis of the binary system is directed approximately towards Earth at an estimated inclination of 0 to 16 degrees. This provides a fortunate viewing angle for observing the binary system and its dynamics. Discovered as part of the Keck Aperture Masking Experiment, WR 104 is surrounded by a distinctive dusty Wolf–Rayet nebula over 200 astronomical units in diameter, formed by interaction between the stellar winds of the two stars as they rotate and orbit. The spiral appearance of the nebula has led to the name Pinwheel Nebula being used. The spiral structure of the nebula is composed of dust that would be prevented from forming by WR 104's intense radiation were it not for the star's companion. The region where the stellar wind from the two massive stars interacts compresses the material enough for the dust to form, and the rotation of the system causes the spiral-shaped pattern. The round appearance of the spiral leads to the conclusion that the system is seen almost pole-on, and a nearly circular orbit with a period of 220 days had been assumed from the pinwheel outflow pattern. WR 104 shows frequent eclipse events as well as other irregular variations in brightness. The undisturbed apparent magnitude is around 12.7, but the star is rarely at that level. The eclipses are believed to be caused by dust formed from expelled material, not by the companion star. Supernova progenitor Both stars in the WR 104 system are predicted to end their days as core-collapse supernovae. 
The Wolf–Rayet star is in the final phase of its life cycle and is expected to turn into a supernova much sooner than the OB star. The explosion is predicted to occur at some point within the next few hundred thousand years. Given the relatively close proximity to the Solar System, the question of whether WR 104 will pose a future danger to life on Earth has been raised. Gamma-ray burst Beyond an ordinary core-collapse supernova, astrophysicists have speculated about whether WR 104 has the potential to cause a gamma-ray burst (GRB) at the end of its life. The companion OB star certainly has the potential, but the Wolf–Rayet star is likely to go supernova much sooner. There remain too many uncertainties and unknown parameters for any reliable prediction, and only sketchy estimates of a GRB scenario for WR 104 have been published. Wolf–Rayet stars with a sufficiently high spin velocity, prior to going supernova, could produce a long-duration gamma-ray burst, beaming high-energy radiation along their rotational axes in two oppositely directed relativistic jets. Presently, mechanisms for the generation of GRB emissions are not fully understood, but it is considered that there is a small chance that the Wolf–Rayet component of WR 104 may become one when it goes supernova. Effects on Earth According to available astrophysical data for both WR 104 and its companion, both stars will eventually be destroyed as highly directional anisotropic supernovae, producing concentrated radiative emissions as narrow relativistic jets. Theoretical studies of such supernovae suggest that jet formation aligns with the rotational axis of the progenitor star and its eventual stellar remnant, and that matter is preferentially ejected along the polar axes. If these jets happen to be aimed towards our solar system, the consequences could significantly harm life on Earth and its biosphere; the true impact depends on the amount of radiation received, the number of energetic particles, and the source's distance. Knowing that the inclination of the binary system containing WR 104 is roughly 12° relative to the line of sight, and assuming both stars have their rotational axes similarly orientated, suggests some potential risk. Recent studies suggest these effects pose a "highly unlikely" danger to life on Earth, since, as stated by Australian astronomer Peter Tuthill, the Wolf–Rayet star would have to undergo an extraordinary string of successive events: The Wolf–Rayet star would have to generate a gamma-ray burst (GRB); however, these events are mostly associated with galaxies of low metallicity and have not yet been observed in our Milky Way Galaxy. Some astronomers believe it unlikely that WR 104 will generate a GRB; Tuthill tentatively estimates the probability of any kind of GRB event at around the level of one percent, but cautions that more research is needed to be confident. The rotational axis of the Wolf–Rayet star would have to be pointed in the direction of Earth. The star's axis is estimated to be close to the axis of the binary orbit of WR 104. Observations of the spiral plume are consistent with an orbital pole angle of anywhere from 0 to 16 degrees relative to the Earth, but a spectrographic observation suggests a significantly larger and therefore less dangerous angle of 30°–40° (possibly as much as 45°). Estimates of the jet's "opening angle" currently range from 2 to 20 degrees. (Note: The "opening angle" is the total angular span of the jet, not the angular span from the axis to one side. 
Earth would therefore only be in the intersecting path if the actual angle of the star's axis relative to Earth is less than half the opening angle.) The jet would have to reach far enough in order to damage life on Earth. The narrower the jet appears, the farther it will reach, but the less likely it is to hit Earth. Notes References External links University of Sydney (Keck Observatory) page Wolf–Rayet stars Pinwheel nebulae Sagittarius (constellation) Sagittarii, V5097 Astronomical objects discovered in 1998 IRAS catalogue objects Spectroscopic binaries Triple star systems B-type main-sequence stars
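The note above about the opening angle reduces to a simple geometric test: Earth lies inside the jet cone only when the axis-to-line-of-sight angle is less than half the total opening angle. A short sketch using the ranges quoted in the article (the specific sample values are arbitrary choices):

```python
def earth_in_beam(inclination_deg, opening_angle_deg):
    """True if Earth falls inside the jet cone: the angle between the
    rotational axis and our line of sight must be under half the jet's
    total opening angle."""
    return inclination_deg < opening_angle_deg / 2.0

# Inclination estimates quoted in the article: 0-16 deg from the plume,
# 30-45 deg from spectroscopy; opening-angle estimates: 2-20 deg.
for inclination in (0, 8, 16, 30, 45):
    for opening in (2, 20):
        hit = earth_in_beam(inclination, opening)
        print(f"inclination {inclination:2d} deg, opening {opening:2d} deg "
              f"-> Earth in beam: {hit}")
```

Even the widest quoted opening angle (20 degrees) only threatens Earth if the true inclination is below 10 degrees, which illustrates why the larger spectroscopic inclination estimates are reassuring.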
WR 104
[ "Astronomy" ]
1,557
[ "Sagittarius (constellation)", "Constellations" ]
16,091,349
https://en.wikipedia.org/wiki/Syngas%20fermentation
Syngas fermentation, also known as synthesis gas fermentation, is a microbial process. In this process, a mixture of hydrogen, carbon monoxide, and carbon dioxide, known as syngas, is used as carbon and energy sources, and then converted into fuel and chemicals by microorganisms. The main products of syngas fermentation include ethanol, butanol, acetic acid, butyric acid, and methane. Certain industrial processes, such as petroleum refining, steel milling, and methods for producing carbon black, coke, ammonia, and methanol, discharge enormous amounts of waste gases containing mainly CO and CO2 into the atmosphere either directly or through combustion. Biocatalysts can be exploited to convert these waste gases to chemicals and fuels as, for example, ethanol. In addition, incorporating nanoparticles has been demonstrated to improve gas-liquid mass transfer during syngas fermentation. There are several microorganisms which can produce fuels and chemicals by syngas utilization. These microorganisms are mostly known as acetogens, including Clostridium ljungdahlii, Clostridium autoethanogenum, Eubacterium limosum, Clostridium carboxidivorans P7, Peptostreptococcus productus, and Butyribacterium methylotrophicum. Most use the Wood–Ljungdahl pathway. The syngas fermentation process has advantages over a chemical process since it takes place at lower temperature and pressure, has higher reaction specificity, tolerates higher amounts of sulfur compounds, and does not require a specific ratio of CO to H2. On the other hand, syngas fermentation has limitations such as gas-liquid mass transfer limitation, low volumetric productivity, and inhibition of organisms. Reactor types The most commonly utilized reactor type for syngas fermentation is the stirred-tank reactor, in which the mass transfer is influenced by several factors such as the geometry of the reactor, the impeller configuration, the agitation speed and the gas flow rate. Additionally, less investigated reactor types like trickle-bed reactors, bubble-column reactors and gas-lift reactors have specific drawbacks and advantages regarding the abovementioned limitations. References Environmental science
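Because gas-liquid mass transfer is named above as the main limitation, a minimal sketch of the standard two-film transfer expression N = kLa * (C* - C) may help; the numeric values are illustrative assumptions, not measurements from any particular fermenter:

```python
def gas_liquid_transfer_rate(k_l_a, c_star, c_liquid):
    """Volumetric gas-liquid mass transfer rate: N = kLa * (C* - C)."""
    return k_l_a * (c_star - c_liquid)

# Illustrative (not measured) values for CO in a stirred-tank syngas fermenter
k_l_a = 50.0      # volumetric mass transfer coefficient, 1/h
c_star = 0.8      # CO saturation concentration at operating pressure, mmol/L
c_liquid = 0.1    # dissolved CO, kept low by the acetogens' uptake, mmol/L

rate = gas_liquid_transfer_rate(k_l_a, c_star, c_liquid)
print(f"CO supply rate ~ {rate:.1f} mmol/(L*h)")
```

Raising kLa (through agitation, sparging, nanoparticles, or reactor design, as discussed above) is the usual lever, since C* is fixed by the gas's low solubility.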
Syngas fermentation
[ "Environmental_science" ]
456
[ "nan" ]
16,091,542
https://en.wikipedia.org/wiki/List%20of%20human%20microbiota
Human microbiota are microorganisms (bacteria, viruses, fungi and archaea) found in a specific environment. They can be found in the stomach, intestines, skin, genitals and other parts of the body. Various body parts have diverse microorganisms. Some microbes are specific to certain body parts and others are associated with many microbiomes. This article lists some of the species recognized as belonging to the human microbiome and focuses on the oral, vaginal, ovarian follicle, uterus and the male reproductive tract microbiota. Categories of bacteria The "reference" 70 kg human body is estimated to have around 39 trillion bacteria with a mass of about 0.2 kg. These can be separated into about 10,000 microbial species; about 180 of the most studied are listed below. However, these can broadly be put into three categories: Spheres or ball-shaped (cocci bacteria) Cocci are usually round or spherical in shape. They can form clusters and are non-motile. Examples include Staphylococcus aureus, Streptococcus pyogenes, and Neisseria gonorrhoeae. Rod-shaped bacteria (bacilli) Bacilli usually have a rod or cylinder shape. Examples include Listeria, Salmonella typhimurium, Yersinia enterocolitica, and Escherichia coli. Spirals or helixes (spirochetes) Spirochetes are usually spiral or corkscrew shaped and move using an axial filament. Examples include Treponema pallidum and Leptospira borgpetersenii. Naming convention for the table Vagina The vaginal microbiota is shaped by puberty, pregnancy and menopause. Vaginal microbiota including some Lactobacillus species protect the vagina from harmful pathogens. They convert glucose to lactic acid and this acidic environment kills harmful pathogens. The vaginal microbiota in pregnancy varies markedly during the entire time of gestation. The species and diversity of the microorganisms may be related to the various levels of hormones during pregnancy. Vaginal flora can be transmitted to babies during birth. Vaginal dysbiosis can lead to vaginal infections like bacterial vaginosis, which makes one relatively susceptible to sexually transmitted diseases. Good personal hygiene and probiotics promote a healthy vaginal microbiota. Uterus The healthy uterine microbiome has been identified and over 278 genera have been sequenced. Bacterial species like Fusobacterium are typically found in the uterus. Although Lactobacillus may be beneficial in the vagina, “increased levels in the uterus through a breach in the cervical barrier” may be harmful to the uterus. Ovarian follicle The ovarian follicle microbiome has been studied using standard culturing techniques. It has been associated with the outcomes of assisted reproductive technologies and birth outcomes. Positive outcomes are related to the presence of Lactobacillus spp., while the presence of Propionibacterium and Actinomyces was related to negative outcomes. The microbiome can vary from one ovary to the other. Studies are ongoing in the further identification of those bacteria present. Male reproductive tract The microbiome present in seminal fluid has been evaluated. Using traditional culturing techniques, the microbiome differs between men who have acute prostatitis and those who have chronic prostatitis. Identification of the seminal fluid microbiome has become one of the diagnostic tools used in treating infertility in men who do not display symptoms of infection or disease. The taxa Pseudomonas, Lactobacillus, and Prevotella display a negative effect on the quality of sperm. 
The presence of Lactobacillus spp. in semen samples is associated with a very high normal sperm count. Mouth The oral microbiota consists of all the microorganisms that exist in the mouth. It is the second largest microbial community of the human body and is made up of various bacteria, viruses, fungi and protozoa. These organisms play an important role in oral and overall health. Antonie van Leeuwenhoek was the first to view these organisms using a microscope he created. The temperature and pH of saliva make it conducive for bacteria to survive in the oral cavity. Bacteria in the oral cavity include Streptococcus mutans, Porphyromonas gingivalis, and Staphylococcus. S. mutans is the main component of the oral microbiota. A healthy oral microbiome decreases oral infections and promotes a healthy gut microbiome. However, when disturbed, it can lead to gum inflammations and bad breath. Dental plaque is formed when oral microorganisms form biofilms on the surfaces of teeth. Recommended practices to maintain a healthy oral microbiome include practicing good oral hygiene (brushing twice daily and flossing, replacing the toothbrush often), eating a healthy diet (food with little or no added sugar and few ultra-processed foods), drinking lots of water and taking probiotics. See also Placental microbiome List of bacterial vaginosis microbiota List of microbiota species of the lower reproductive tract of women Lung microbiota Gut microbiota Skin flora Other lists of the human body's contents and building bricks List of skeletal muscles of the human body List of organs of the human body List of distinct cell types in the adult human body References Bacteria and humans human microbiota
List of human microbiota
[ "Biology" ]
1,162
[ "Bacteria and humans", "Bacteria" ]
16,092,232
https://en.wikipedia.org/wiki/History%20of%20industrial%20ecology
The establishment of industrial ecology as a field of scientific research is commonly attributed to an article devoted to industrial ecosystems, written by Frosch and Gallopoulos, which appeared in a 1989 special issue of Scientific American. Industrial ecology emerged from several earlier ideas and concepts, some of which date back to the 19th century. Before the 1960s The term "industrial ecology" has been used alongside "industrial symbiosis" at least since the 1940s. Economic geography was perhaps one of the first fields to use these terms. For example, in an article published in 1947, George T. Renner refers to "The General Principle of Industrial Location" as a "Law of Industrial Ecology", and in the same article defines and describes industrial symbiosis. It appears that the concept of industrial symbiosis was not new for the field of economic geography, since the same categorization is used by Walter G. Lezius in his 1937 article "Geography of Glass Manufacture at Toledo, Ohio", also published in the Journal of Economic Geography. Used in a different context, the term "Industrial Ecology" is also found in a 1958 paper concerned with the relationship between the ecological impact of increasing urbanization and the value orientations of the peoples concerned; the case study is in Lebanon. 1960s In 1963, we find the term Industrial Ecology (defined as the "complex ecology of the modern industrial world") being used to describe the social nature and complexity of (and within) industrial systems. In 1967, the President of the American Association for the Advancement of Science writes in "The experimental city" that "There are examples of industrial symbiosis where one industry feeds off, or at least neutralizes, the wastes of another..." The same author in 1970 talks about "The Next Industrial Revolution". The concept of material and energy sharing and reuse is central to his proposal for a new industrial revolution, and he cites agro-industrial symbiosis as a practical way of achieving this. In these early articles, "Industrial Ecology" is used in its literal sense - as a system of interacting industrial entities. The relation to natural ecosystems (through either metaphor or analogy) is not explicit. Industrial Symbiosis, on the other hand, is already clearly defined as a type of industrial organization, and the term symbiosis is borrowed from the ecological sciences to describe an analogous phenomenon in industrial systems. 1970s Industrial Ecology has been a research subject of the Japan Industrial Policy Research Institute since 1971. Their definition of Industrial Ecology is "research for the prospect of dynamic harmonization between human activities and nature by a systems approach based upon ecology (JIPRI, 1983)". This programme has resulted in a number of reports that are available only in Japanese. One of the earliest definitions of Industrial Ecology was proposed by Harry Zvi Evan at a seminar of the Economic Commission of Europe in Warsaw (Poland) in 1973 (an article was subsequently published by Evan in the Journal for International Labour Review in 1974, vol. 110 (3), pp. 219–233). Evan defined Industrial Ecology as a systematic analysis of industrial operations including factors like technology, environment, natural resources, bio-medical aspects, institutional and legal matters, as well as the socio-economic aspects. 
In 1974 the term Industrial Ecology is, perhaps for the first time, associated with a cyclical production mode (rather than a linear one, resulting in waste). In this article, the necessity of a transition to an "open-world Industrial Ecology" is used as an argument for the need to establish lunar industries. Many elements of modern Industrial Ecology were commonplace in the industrial sectors of the former Soviet Union. For example, “kombinirovanaia produksia” (combined production) was present from the earliest years of the Soviet Union and was instrumental in shaping the patterns of Soviet industrialization. “Bezotkhodnoyi tekhnologii” (waste-free technology) was introduced in the final decades of the USSR as a way to increase industrial production while limiting environmental impact. Fiodor Davitaya, a Soviet scientist from the Republic of Georgia, described in 1977 the analogy relating industrial systems to natural systems as a model for a desirable transition to cleaner production. 1980s By the 1980s Industrial Ecology had already been "promoted" to a research subject, which several institutes around the globe embraced. In a 1986 article published in Ecological Modelling, there is a full description of Industrial Ecology, and the analogy to natural ecosystems is clearly stated. In fact, in that article there is an attempt to model an "industrial ecological system". The model is composed of seven major sections: industry, population, labor force, living state, environment and pollution, general health, and occupational health. Notice the rough similarity with Evan's factors as stated in the above section. During the 1980s the emergence of another related term, "industrial metabolism", is observed. The term is used as a metaphor for the organization and functioning of industrial activity. In an article defending the "biological modulation of the terrestrial carbon cycle", the author includes an extraordinary parenthetical note. 1989 – Decisive articles In 1989 two articles were released that played a decisive role in the history of industrial ecology. The first one was titled "Industrial Metabolism", by Robert Ayres. Ayres essentially lays the foundations of Industrial Ecology, although the term is not to be found in this article. In the appendix of the article he includes "a theoretical exploration of the biosphere and the industrial economy as material-transformation systems and lessons that might be learned from their comparison". The term "Industrial Ecology" gains mainstream attention later the same year (1989) through a "Scientific American" article named "Strategies for Manufacturing". In this article, R. Frosch and N. Gallopoulos wonder "why would not our industrial system behave like an ecosystem, where the wastes of a species may be resource to another species? Why would not the outputs of an industry be the inputs of another, thus reducing use of raw materials, pollution, and saving on waste treatment?" This vision gave birth to the concept of the Eco-industrial Park, the industrial complex that is governed by Industrial Ecology principles. A notable example resides in a Danish industrial park in the city of Kalundborg. There, several linkages of byproducts and waste heat can be found between numerous entities such as a large power plant, an oil refinery, a pharmaceutical plant, a plasterboard factory, an enzyme manufacturer, a waste company and the city itself. 
Frosch's and Gallopoulos' thinking was in certain ways simply an extension of earlier ideas, such as the efficiency and waste-reduction thinking enunciated by Buckminster Fuller and his students (e.g., J. Baldwin), and parallel ideas about energy cogeneration, such as those of Amory Lovins and the Rocky Mountain Institute. 1990s In 1991, C. Kumar Patel organized a seminal colloquium on Industrial Ecology, held on May 20 and 21, 1991, at the National Academy of Sciences in Washington D.C. The papers were later published in the Proceedings of the National Academy of Sciences USA, and they form an excellent reference on Industrial Ecology. 21st century The Journal of Industrial Ecology (since 1997), the International Society for Industrial Ecology (since 2001), and the journal Progress in Industrial Ecology (since 2004) have covered industrial ecology in the international scientific community. Principles of industrial ecology are also emerging in various policy realms, such as the concept of the circular economy that is being promoted in China. Although the definition of the circular economy has yet to be formalized, generally the focus is on strategies such as creating a circular flow of materials and cascading energy flows. An example of this would be using waste heat from one process to run another process that requires a lower temperature. This maximizes the efficiency of exergy use. This strategy aims for a more efficient economy with fewer pollutants and other unwanted by-products. Sources Industrial ecology History of Earth science
History of industrial ecology
[ "Chemistry", "Engineering" ]
1,643
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
16,093,609
https://en.wikipedia.org/wiki/Ave%20Kludze
Ave K. P. Kludze Jr. is an American aerospace engineer and civil servant, specializing in complex systems engineering and design. He is a senior NASA Spacecraft Systems Engineer. Early life and education Kludze was born in Hohoe, in the Volta Region of Ghana, the son of Anselmus Kludze, a legal reformer who served as a Justice of the Supreme Court of Ghana, and Comfort Brempong, who worked with the Bank of Ghana. He grew up in Dansoman-Sahara, a suburb of Accra. His love of science began at an early age; his parents once remarked that they were fearful to leave him at home in case he dismantled the radio. At friends’ houses, he would take apart their televisions to see how they worked. By his own admission, Kludze's fascination with aviation began with a trip to the airport in Accra as a young boy. His father had intended him to become a lawyer but supported him regardless in his ambitions. He emigrated to the United States in the late 1980s with only a high school diploma from Adisadel College in Cape Coast, Ghana, and 'A' levels from Swedru Secondary School. Shortly after his arrival in the United States, he enrolled at Rutgers University, where he set out to pursue a bachelor's degree in electrical engineering. He went on to complete a master's degree in systems engineering at Johns Hopkins University, followed by a PhD in systems engineering at George Washington University. Career Kludze has held positions at various NASA centers, including the NASA Langley Research Center in Virginia and the NASA/Goddard Space Flight Center in Maryland, where he became, if not the first African, then the first Ghanaian ever to fly (command and control) a spacecraft in orbit for NASA from a mission control center (including the ERBS and TRMM spacecraft). He designed the Human Locator System, which he called the "HuLos", in partial fulfillment of the requirements for his master's degree at the Johns Hopkins University. The HuLos uses nanotechnology (microscopic technology) and is intended to locate human beings anywhere on this planet using satellite communication, GPS and other technologies. What made the system unique at the time of its conception, though it was considered weird even by his advisor, were its miniaturized size and the concept of global location. The device is to be implanted under the human skull, skin, bone or teeth and activated when required. The system as envisioned could be used, for example, in locating missing children, the elderly, stolen cars and hardened criminals. The thesis which contains the design is currently at the Applied Physics Laboratory of the Johns Hopkins University. In 2004, Kludze and a group of NASA engineers developed the Extravehicular Activity Infrared (EVA IR) camera for space-walking astronauts. The EVA IR camera was designed to fulfill a critical inspection need for the Shuttle Program; the on-orbit IR camera can detect cracks and surface defects in the Reinforced Carbon-Carbon (RCC) sections of the Space Shuttle's Thermal Protection System. This camera may help discover and prevent problems like those that led to the disintegration of the Space Shuttle Columbia. Kludze was selected to join the NASA Engineering and Safety Center (NESC), an organization created after the Columbia incident, as a systems engineering expert. Before joining the NESC, Kludze was the manager of NASA Langley's state-of-the-art Integrated Design Center (IDC), which he helped to develop. He was also the Traceability and Verification Manager for the CALIPSO spacecraft. 
His pioneering work in systems engineering has been published worldwide; he has a number of publications and several NASA and external awards and recognitions to his credit. Kludze is involved in several space initiatives, including the implementation of the U.S. President's Space Exploration Vision to the Moon, Mars and beyond. In 2002, Kludze was recognised and honoured at the Second Biennial Adisadel College Excellence Awards, at a ceremony at the State House in Accra, by the Adisadel College Old Boys Association and the college. He was profiled on Cable News Network (CNN) and the British Broadcasting Corporation (BBC). References External links CNN - Ghana's Rocket Man Ghanaian Wins Aeronautical Award in the US Adisadel College Adisadel College Houses Akosombo Would Not Last Forever Applying Automation to Spacecraft Mission Operations BBC News: interview with Kludze American Institute of Aeronautics and Astronautics (AIAA) Systems Engineering Technical Committee Year of birth missing (living people) Living people Aerospace engineers Alumni of Adisadel College 21st-century American engineers American scientists Ewe people Ghanaian emigrants to the United States Johns Hopkins University alumni People from Volta Region Systems engineers
Ave Kludze
[ "Engineering" ]
984
[ "Aerospace engineers", "Aerospace engineering" ]
16,094,284
https://en.wikipedia.org/wiki/Digital%20history
Digital history is the use of digital media to further historical analysis, presentation, and research. It is a branch of the digital humanities and an extension of quantitative history, cliometrics, and computing. Digital history commonly takes the form of digital public history, concerned primarily with engaging online audiences with historical content, or of digital research methods that further academic research. Digital history outputs include: digital archives, online presentations, data visualizations, interactive maps, timelines, audio files, and virtual worlds to make history more accessible to users. Recent digital history projects focus on creativity, collaboration, and technical innovation, text mining, corpus linguistics, network analysis, 3D modeling, and big data analysis. By utilizing these resources, the user can rapidly develop new analyses that can link to, extend, and bring to life existing histories. History Rooted in earlier social science history work, particularly around the history of enslavement in the United States, early digital history in the 1960s and 70s focused on using computers to conduct quantitative analyses, primarily of demographic and social history data (censuses, election returns, city directories, and other tabular or countable data), with the aim of producing defensible research findings. These early computers could be programmed to conduct statistical analyses of these records, creating tallies or seeking trends across records. This research into historical demography was rooted in the rise of social history as a field of historical interest. The historians involved in this work sought to quantify past societies, to come to new conclusions about communities and populations. Computers proved capable tools for that type of work. By the late 1970s younger historians had turned to cultural studies; most of these studies involved online databases that were checked by professionals in Great Britain about once a year. The outpouring of quantitative studies by established scholars continued. Since then, quantitative history and cliometrics have been used primarily by historically minded economists and political scientists. In the late 1980s quantifiers founded the Association for History and Computing. This movement provided some of the impetus for the rise of digital history in the 1990s. The more recent roots of digital history were in software rather than online networks. In 1982, the Library of Congress embarked on its Optical Disk Pilot Project, which placed text and images from its collection onto laserdiscs and CD-ROMs. The library started offering online exhibits in 1992 when it launched Selected Civil War Photographs. In 1993, Roy Rosenzweig, along with Steve Brier and Josh Brown, produced their award-winning CD-ROM Who Built America? From the Centennial Exposition of 1876 to the Great War of 1914, designed for Apple, Inc., that integrated images, text, film and sound clips displayed in a visual interface that supported a text narrative. Among the earliest online digital history projects were The Heritage Project of the University of Kansas, and medieval historian Dr. Lynn Nelson's World History Index and History Central Catalogue. Another was The Valley of the Shadow, conceived in 1991 by current University of Richmond professor of humanities and president emeritus, Edward L. Ayers, who was then at the University of Virginia. 
The Institute for Advanced Technology in the Humanities (IATH) at the University of Virginia adopted the Valley Project and partnered with IBM to collect and transcribe historical sources into digital files. The project collected data related to Augusta County in Virginia and Franklin County in Pennsylvania during the American Civil War. In 1996, William G. Thomas III joined Ayers on the Valley Project. Together, they produced an online article entitled "The Differences Slavery Made: A Close Analysis of Two American Communities," which also appeared in The American Historical Review in 2003. A CD-ROM also accompanied the Valley Project, published by W. W. Norton and Company in 2000. Rosenzweig, who died October 11, 2007, founded the Center for History and New Media (CHNM) at George Mason University in 1994. Today, CHNM boasts several digital tools available to historians, such as Zotero, Omeka or Tropy. In 1997, Ayers and Thomas used the term "digital history" when they proposed and founded the Virginia Center for Digital History (VCDH) at the University of Virginia, the earliest center devoted exclusively to digital history. Several other institutions promoting digital history include the Center for Humane Arts, Letters, and Social Sciences Online (MATRIX) at Michigan State University, Maryland's Institute for Technology in the Humanities, and the Center for Digital Research in the Humanities at the University of Nebraska. In 2004, Emory University launched Southern Spaces, a "peer-reviewed Internet journal and scholarly forum" examining the history of the South. Applications There are many potential benefits to the use of digital history when combined with traditional historical methods. Some of these applications include: Combining traditional historical methods and new research methods in order to come to new conclusions. Using different tools to extract and analyse larger amounts of data that would not be manageable otherwise. Creating models and maps of extracted data to visualise it. Placing extracted and analysed data alongside existing historiography to increase combined historical knowledge. By adding new research methods to existing historical methods, historians can benefit greatly from the ability to work with larger amounts of data and develop new interpretations from this. Notable Projects The collaborative nature of most digital history endeavors has meant that the discipline has developed primarily at institutions with the resources to sponsor content research and technical innovation. Two of the first centers, George Mason University's Center for History and New Media and the Virginia Center for Digital History at the University of Virginia, have been among the leaders in the development of digital history projects and the education of digital historians. Some of the noteworthy projects emerging from these pioneering centers are The Geography of Slavery, The Texas Slavery Project, and The Countryside Transformed at VCDH and Liberty, Equality, Fraternity: Exploring the French Revolution and The Lost Museum at the CHNM. In each of these projects, mediated archives holding multiple types of sources are combined with digital tools to analyze and illuminate an historical question to a varying degree; this integration of content and tools with analysis is one of the hallmarks of digital history—projects move beyond archives or collections and into scholarly analysis and the use of digital tools to develop that analysis. 
The differences between the ways projects incorporate these integrations are a measure of the development of the field and point to the ongoing debates over what digital history can and should be. While many of the projects at VCDH, CHNM, and other universities' centers have been geared towards academics and post-secondary education, the University of Victoria (British Columbia), in conjunction with the Université de Sherbrooke and the Ontario Institute for Studies in Education at the University of Toronto, has created a series of projects for all ages, "Great Unsolved Mysteries in Canadian History." Laden with instructional aids, this site asks teachers to introduce students to historical research methods to help them develop analytical skills and a sense of the complexities of their national history. Issues of race, religion, and gender are addressed in carefully constructed modules that cover incidents in Canadian history from Viking exploration through the 1920s. One of the original co-creators of the project, John Lutz, has also developed Victoria's Victoria with the University of Victoria and Malaspina University-College. In addition to Ayers, Thomas, Lutz, and Rosenzweig, numerous other individual scholars work with digital history techniques and have made and/or continue to make important contributions to the field. Robert Darnton's 2000 article, "An Early Information Society: News and the Media in Eighteenth-Century Paris" was supplemented with electronic resources and is an early model of the discussions around digital history and its future in the humanities. One of the first major digital projects to be reviewed by the American Historical Review (AHR) was Philip Ethington's "Los Angeles and the Problem of Urban Historical Knowledge"—a multimedia exploration of changes to Los Angeles' physical profile over the course of several decades. In this essay, he also expresses his belief that historians have major power in the new world of digital knowledge. Patrick Manning, Andrew W. Mellon Professor of World History at the University of Pittsburgh, developed the CD-ROM project "Migration in Modern World History, 1500-2000." In the "African Slave Demography Project," Manning created a demographic simulation of the slave trade to show precisely how populations declined in West and Central Africa between 1730 and 1850 as well as in East Africa between the years 1820 and 1890 due to slavery. He acknowledged the power digitalization had on his project, and how it reflected his belief of coevolution. Jan Reiff, of UCLA, co-edited the print and online versions of the Encyclopedia of Chicago. Andrew J. Torget founded the Texas Slavery Project while at VCDH and continues to develop the site as he completes his PhD—likely a model for new digital scholars who will incorporate digital components into larger research agendas. Another notable project that makes use of digital tools for historical practice is The Quilt Index. As scholars became increasingly interested in women's history, quilts became valuable to study. The Quilt Index is an online collaborative database where quilt owners can upload pictures and data about their quilts. This project was created due to the difficulty of collecting quilts. Firstly, they were in the possession of various institutions, archives, and even civilians. And secondly, they can be too fragile or bulky for physical transport. Also in the field of women's history is Click! The Ongoing Feminist Revolution. 
The project highlights the collective action and individual achievements of women from the 1940s to the present. In the UK, a pilot project began in 2002 to create a digital library of British History. This has developed into an extensive collection of over 1,200 volumes, bringing together primary and secondary sources from libraries, archives, museums and academics. Another significant project is the Old Bailey Online, a digital collection of all proceedings between 1674 and 1913. In addition to the digitized records, the Old Bailey Online website provides historical and legal background information, research guides, and educational resources for students. Digital History Classes Digital history is now a common course type in graduate and undergraduate curricula. For example, the students in Digital History courses at the University of Hertfordshire have learned skills in digital mapping and Python programming, which make it more accessible and easier to analyse large quantities of source data. One project that the class worked on included analyzing the trends, patterns, and relationships of data related to weather, crime, and poverty. This allowed students to use their traditional history skills to evaluate the significance of their findings. Another project was using digital mapping to compare the differences between various groups of students who studied at Oxford, derived from British History Online. Similarly, at Cal State East Bay, history majors meet in the science building's computer lab to go over new and old software that could be used for the creation or presentation of history. Technology Digital technology tools arrange ideas and promote the unique analysis of data, with many tools previously unavailable to historians opening new avenues for collaboration, text mining, and big data analysis. In addition, digital history offers tools for the presentation and access to historical knowledge online. Digital historians may use web development tools, such as the WYSIWYG HTML editor Adobe Dreamweaver. Other tools create more interactive digital history, such as databases, which provide greater capacity for information storage and retrieval in a definable way. Databases with features like Structured Query Language (SQL) and Extensible Markup Language (XML) arrange materials in a formal manner and allow precise searching for keywords, dates, and other data characteristics. The online article "The Differences Slavery Made: A Close Analysis of Two American Communities" used XML for presenting and connecting evidence with detailed historiographical discussions. The Valley of the Shadow project also employed XML to convert all of the archive's letters, diaries, and newspapers for full text searching capabilities. Coding languages such as Python may be used in order to digitally sort and filter data, whilst Google Fusion Tables can be used for the geographical mapping of data. Digital historians may use content management systems (CMS) to store their digital collections, which include audio, video, images and text, for an online web display. Examples of these systems include: Drupal, WordPress, and Omeka. The Differences Slavery Made also used geographic information systems (GIS) to analyze and understand the spatial arrangement of social structures. For the article, Ayers and Thomas created many new maps through GIS technology to produce detailed images of Augusta and Franklin counties never before possible. 
GIS and its many components remain helpful for studying history and visualizing change over time. The Semantic Interoperability of Metadata and Information in unLike Environments (SIMILE) project at MIT develops robust, open source tools that enable accessing, managing, and visualizing digital assets. Among the many tools built by SIMILE, the Timeline tool, which employs a DHTML-based AJAXy widget, allows digital historians to create dynamic, customizable timelines for visualizing time-based events. The Timeline page on the SIMILE website declares that their tool "is like Google Maps for time-based information." Additionally, SIMILE's Exhibit tool boasts a customizable structure for sorting and presenting data. Exhibit, written in JavaScript, creates interactive, data-rich web pages without the need for any programming or database creation knowledge. Textual analysis software allows historians to make new use of old sources by finding patterns in large collections of documents or even just analyzing a source for frequency of terms. Such software allows historians to "text mine", or easily find correlations and themes in the documents. There are several textual-analysis programs available online, from sophisticated ones that allow the researcher to tailor the program to handle large amounts of data, like MALLET, through straightforward programs like TokenX, which generates word-frequency lists and word clouds to illustrate language usage and significance, to basic ones like Wordle, which offers simple visualizations of word frequency and relationships. Some websites provide textual analysis on their content automatically. Online bookmarking and research tool del.icio.us uses tag clouds to visually depict the frequency and importance of user-generated tags, and the recently instituted Google Ngram Viewer allows viewers to search the commonality of textual themes by year. However, with the development of digital history and the technology used to produce it, there have been questions raised over its validity. One such issue is that raised by Jean-François Baudrillard. He says that "Western Culture introduced significant modifications to the way it produced the real, by intensifying it and heightening it into a domain of reality in hyperspace: hyper-reality". Digital History Centers Center for History and New Media at George Mason University Maryland Institute for Technology in the Humanities at the University of Maryland Virginia Center for Digital History at the University of Virginia Institute for Advanced Technology in the Humanities at the University of Virginia Institute for Computing in the Humanities, Arts, and Social Science at the University of Illinois. Center for Public History and Digital Humanities at Cleveland State University Department of Digital Humanities at King's College London HUMlab at Umeå University, Sweden The Digital History Center at United States Military Academy West Point See also Historical geographic information system Social media analytics Google Trends References Bibliography Ayers, Edward L. "The Pasts and Futures of Digital History," University of Virginia (1999). Ayers, Edward L. "History in Hypertext," University of Virginia (1999). Battershill, Claire, and Shawna Ross. Using Digital Humanities in the Classroom: A Practical Introduction for Teachers, Lecturers, and Students (Bloomsbury Publishing, 2017). Bell, Johnny, et al. "'History is a conversation': teaching student historians through making digital histories." 
History Australia 13.3 (2016): 415–430. Burton, Orville (ed.). Computing in the Social Sciences and Humanities. Urbana: University of Illinois Press, 2002. Cohen, Daniel J. "History and the Second Decade of the Web". Rethinking History 8 (June 2004): 293–301. Cohen, Daniel J. 2005. The Future of Preserving the Past. CRM: The Journal of Heritage Stewardship 2.2 (2005): 6–19. Cohen, Daniel J. and Roy Rosenzweig, Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web. (U of Pennsylvania Press, 2006). Crompton, Constance, Richard J. Lane, and Ray Siemens, eds. Doing Digital Humanities: Practice, Training, Research (Taylor & Francis, 2016). Denley, Peter and Deian Hopkin. History and Computing. Manchester: Manchester University, 1987. Dollar, Charles, and Richard Jensen. Historians Guide to Statistics (1971), with detailed guide to older studies Fridlund, M., Oiva, M., & Paju, P. (Eds). (2020). Digital histories: emergent approaches within the new digital history. HUP, Helsinki University Press. Greenstein, Daniel I. A Historian's Guide to Computing. Oxford: Oxford University Press, 1994. Guiliano, J. (2022). A primer for teaching digital history: ten design principles. Duke University Press. "Interchange: The Promise of Digital History." Special issue, Journal of American History 95, no. 2 (September 2008). https://web.archive.org/web/20090427063847/http://journalofamericanhistory.org/issues/952/interchange/index.html (accessed May 1, 2009). Knowles, Anne Kelly (ed.). Past Time, Past Place: GIS for History. Redlands, CA: ESRI, 2002. Kornbluh, Mark. 2008. From Digital Repositories to Information Habitats: H-Net, the Quilt Index, Cyber Infrastructure, and Digital Humanities. First Monday 13(8): available at https://web.archive.org/web/20120223151150/http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/2230/2019 Lutz, John Sutton. 2007. Bed Jumping and Compelling Convergences in Historical Computing. Digital History portal, Department of History, University of Nebraska-Lincoln Nelson, Robert K., Andrew J. Torget, Scott Nesbit. "A Conversation with Digital Historians", Southern Spaces, 31 January 2012. Rosenzweig, Roy. "Scarcity or Abundance? Preserving the Past in a Digital Era," American Historical Review 108 (June 2003): 735–62. Rosenzweig, Roy and Michael O'Malley. "Brave New World or Blind Alley? American History on the World Wide Web," Journal of American History 84 (June 1997): 132–55. Rosenzweig, Roy and Michael O'Malley. "The Road to Xanadu: Public and Private Pathways on the History Web," Journal of American History 88 (September 2001): 548–79. Rusert, Britt. "New World: The Impact of Digitization on the Study of Slavery." American Literary History 29.2 (2017): 267–286. Salmi, Hannu. What is Digital History? Cambridge: Polity, 2020. Thomas, William G., III. "Computing and the Historical Imagination," A Companion to Digital Humanities ed. Susan Schreibman, Ray Siemens, John Unsworth (Oxford: Blackwell, 2004). Thomas, William G., III. "Writing a Digital History Journal Article from Scratch: An Account," Digital History (December 2007). Turkel, William J, Adam Crymble, Alan MacEachern. "The Programming Historian," (London, NiCHE, 2007-9). External links History of technology History of mass media Digital humanities Digital media
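As a concrete illustration of the word-frequency analysis described in the Technology section above (the kind of output that tools like TokenX or Wordle visualize), here is a minimal sketch; the tokenization rule is a simplifying assumption, not the method of any particular tool:

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Lowercase the text, split on runs of letters, and count occurrences."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

sample = ("Digital history is the use of digital media to further "
          "historical analysis, presentation, and research.")
for word, count in word_frequencies(sample):
    print(f"{word}: {count}")
```

A historian would typically also strip stop words ("is", "the", "of") before visualizing, since they dominate raw counts.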
Digital history
[ "Technology" ]
4,091
[ "Science and technology studies", "Digital media", "Digital humanities", "Computing and society", "History of technology", "Multimedia", "History of science and technology" ]
16,094,518
https://en.wikipedia.org/wiki/Gauss%27s%20law%20for%20magnetism
In physics, Gauss's law for magnetism is one of the four Maxwell's equations that underlie classical electrodynamics. It states that the magnetic field has divergence equal to zero, in other words, that it is a solenoidal vector field. It is equivalent to the statement that magnetic monopoles do not exist. Rather than "magnetic charges", the basic entity for magnetism is the magnetic dipole. (If monopoles were ever found, the law would have to be modified, as elaborated below.) Gauss's law for magnetism can be written in two forms, a differential form and an integral form. These forms are equivalent due to the divergence theorem. The name "Gauss's law for magnetism" is not universally used. The law is also called "Absence of free magnetic poles". It is also referred to as the "transversality requirement" because for plane waves it requires that the polarization be transverse to the direction of propagation. Differential form The differential form for Gauss's law for magnetism is: $\nabla \cdot \mathbf{B} = 0$, where $\nabla \cdot$ denotes divergence, and $\mathbf{B}$ is the magnetic field. Integral form The integral form of Gauss's law for magnetism states: $\Phi_B = \oint_S \mathbf{B} \cdot \mathrm{d}\mathbf{A} = 0$, where $S$ is any closed surface, $\Phi_B$ is the magnetic flux through $S$, and $\mathrm{d}\mathbf{A}$ is a vector, whose magnitude is the area of an infinitesimal piece of the surface $S$, and whose direction is the outward-pointing surface normal (see surface integral for more details). Gauss's law for magnetism thus states that the net magnetic flux through a closed surface equals zero. The integral and differential forms of Gauss's law for magnetism are mathematically equivalent, due to the divergence theorem. That said, one or the other might be more convenient to use in a particular computation. The law in this form states that for each volume element in space, there are exactly the same number of "magnetic field lines" entering and exiting the volume. No total "magnetic charge" can build up in any point in space. For example, the south pole of the magnet is exactly as strong as the north pole, and free-floating south poles without accompanying north poles (magnetic monopoles) are not allowed. In contrast, this is not true for other fields such as electric fields or gravitational fields, where total electric charge or mass can build up in a volume of space. Vector potential Due to the Helmholtz decomposition theorem, Gauss's law for magnetism is equivalent to the following statement: there exists a vector field $\mathbf{A}$ such that $\mathbf{B} = \nabla \times \mathbf{A}$. The vector field $\mathbf{A}$ is called the magnetic vector potential. Note that there is more than one possible $\mathbf{A}$ which satisfies this equation for a given $\mathbf{B}$ field. In fact, there are infinitely many: any field of the form $\nabla \phi$ can be added onto $\mathbf{A}$ to get an alternative choice for $\mathbf{A}$, by the identity (see Vector calculus identities): $\nabla \times (\mathbf{A} + \nabla \phi) = \nabla \times \mathbf{A} + \nabla \times (\nabla \phi) = \nabla \times \mathbf{A}$, since the curl of a gradient is the zero vector field: $\nabla \times (\nabla \phi) = \mathbf{0}$. This arbitrariness in $\mathbf{A}$ is called gauge freedom. Field lines The magnetic field can be depicted via field lines (also called flux lines) – that is, a set of curves whose direction corresponds to the direction of $\mathbf{B}$, and whose areal density is proportional to the magnitude of $\mathbf{B}$. Gauss's law for magnetism is equivalent to the statement that the field lines have neither a beginning nor an end: Each one either forms a closed loop, winds around forever without ever quite joining back up to itself exactly, or extends to infinity. Incorporating magnetic monopoles If magnetic monopoles were to be discovered, then Gauss's law for magnetism would state that the divergence of $\mathbf{B}$ would be proportional to the magnetic charge density $\rho_m$, analogous to Gauss's law for the electric field. 
For zero net magnetic charge density ($\rho_m = 0$), the original form of Gauss's magnetism law is the result. The modified formula for use with the SI is not standard and depends on the choice of defining equation for the magnetic charge and current; in one variation, magnetic charge has units of webers and the law reads $\nabla \cdot \mathbf{B} = \rho_m$; in another, it has units of ampere-meters and the law reads $\nabla \cdot \mathbf{B} = \mu_0 \rho_m$, where $\mu_0$ is the vacuum permeability. Despite extensive searches, no magnetic monopole has been confirmed, although certain papers report observations matching monopole behavior. History The idea that magnetic monopoles do not exist originated in 1269 with Petrus Peregrinus de Maricourt. His work heavily influenced William Gilbert, whose 1600 work De Magnete spread the idea further. In the early 1800s Michael Faraday reintroduced this law, and it subsequently made its way into James Clerk Maxwell's electromagnetic field equations. Numerical computation In numerical computation, the numerical solution may not satisfy Gauss's law for magnetism due to the discretization errors of the numerical methods. However, in many cases, e.g., for magnetohydrodynamics, it is important to preserve Gauss's law for magnetism precisely (up to machine precision). Violation of Gauss's law for magnetism on the discrete level will introduce a strong non-physical force. In view of energy conservation, violation of this condition leads to a non-conservative energy integral, and the error is proportional to the divergence of the magnetic field. There are various ways to preserve Gauss's law for magnetism in numerical methods, including the divergence-cleaning techniques, the constrained transport method, potential-based formulations and de Rham complex based finite element methods where stable and structure-preserving algorithms are constructed on unstructured meshes with finite element differential forms. See also Magnetic moment Vector calculus Integral Flux Gaussian surface Faraday's law of induction Ampère's circuital law Lorenz gauge condition References External links Magnetism Magnetic monopoles Maxwell's equations Magnetism
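The divergence-cleaning techniques mentioned in the numerical computation section can be illustrated with a minimal projection method. The sketch below is an illustrative toy, not taken from any MHD code: on a periodic 2-D grid it solves the Poisson equation $\nabla^2 \phi = \nabla \cdot \mathbf{B}$ spectrally and subtracts $\nabla \phi$, leaving a field whose discrete divergence vanishes to machine precision. Grid size and the test field are arbitrary choices.

```python
import numpy as np

def clean_divergence(Bx, By, L=1.0):
    """Projection-based divergence cleaning on a periodic 2-D grid:
    solve laplacian(phi) = div(B) in Fourier space, then B <- B - grad(phi)."""
    n = Bx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0  # mean mode carries no divergence; avoid 0/0
    Bxh, Byh = np.fft.fft2(Bx), np.fft.fft2(By)
    div_h = 1j * (kx * Bxh + ky * Byh)             # Fourier-space divergence
    phi_h = -div_h / k2                            # laplacian -> -k^2 in Fourier space
    Bxh -= 1j * kx * phi_h                         # subtract grad(phi)
    Byh -= 1j * ky * phi_h
    return np.fft.ifft2(Bxh).real, np.fft.ifft2(Byh).real

def spectral_div(Bx, By, L=1.0):
    """Spectral divergence, used here only to verify the cleaning."""
    n = Bx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.fft.ifft2(1j * (kx * np.fft.fft2(Bx) + ky * np.fft.fft2(By))).real

rng = np.random.default_rng(0)
Bx, By = rng.standard_normal((2, 64, 64))          # random, hence divergent, field
Bx2, By2 = clean_divergence(Bx, By)
print(np.abs(spectral_div(Bx2, By2)).max())        # ~1e-14: divergence-free
```

Production codes use more sophisticated schemes (constrained transport, structure-preserving finite elements), but this projection captures the basic idea of restoring $\nabla \cdot \mathbf{B} = 0$ after each step.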
Gauss's law for magnetism
[ "Physics", "Astronomy" ]
1,169
[ "Astronomical hypotheses", "Equations of physics", "Unsolved problems in physics", "Magnetic monopoles", "Maxwell's equations" ]
16,094,706
https://en.wikipedia.org/wiki/COSMOS%20International
COSMOS International or COSMOS International Satellitenstart GmbH is a joint Russian-German launch service provider and satellite manufacturer. A partnership between OHB System, Fuchs Gruppe and PO Polyot, COSMOS commercially markets launches using the Kosmos-3M rocket, which are subcontracted to the Russian Space Forces. The organisation conducted its first launch on 28 April 1999, placing ABRIXAS and MegSat into orbit on a single rocket. It has been responsible for the launches of SAR-Lupe satellites for Germany's Bundeswehr (defence force). See also Eurockot Starsem International Launch Services Sea Launch External links COSMOS Website References Commercial launch service providers
COSMOS International
[ "Astronomy" ]
137
[ "Rocketry stubs", "Astronomy stubs" ]
16,094,935
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Biochemistry
Annual Review of Biochemistry is an annual peer-reviewed scientific journal published by Annual Reviews, a nonprofit scientific publisher. Its first volume was published in 1932, and its founding editor was J. Murray Luck. The current editor is Roger D. Kornberg. The journal focuses on molecular biology and biological chemistry review articles. As of 2024, Journal Citation Reports gives the journal an impact factor of 12.1, ranking it fourteenth out of 313 journals in the category "Biochemistry and Molecular Biology". As of 2023, it is being published as open access, under the Subscribe to Open model. History The Annual Review of Biochemistry was the creation of Stanford University chemist and professor J. Murray Luck. In 1930, Luck offered a course on current research in biochemistry to graduate students. In designing the course, he said he felt "knee-deep in trouble", as he believed he was sufficiently knowledgeable in only a few areas of biochemistry. He considered the volume of research to be overwhelming; there were 6,500 abstracts regarding biochemistry published in Chemical Abstracts that year. Luck asked about 50 biochemists in the US, United Kingdom, and Canada if an annual volume of critical reviews on biochemistry research would be useful, to which he received positive responses. This correspondence provided possible authors and topics for his first several volumes. Stanford University Press agreed to publish the journal on a three-year contract, with financial assistance from the Chemical Foundation. Stanford University gave the journal rent-free office space in 1931 for editorial and business operations. Prior to this, Luck's only experience in the publishing industry was working for a summer as a book salesman in Western Canada. Volume 1 was published in July 1932, consisting of 30 reviews from 35 authors of nine different countries; the volume was 724 pages. Luck was the founding editor of the Annual Review of Biochemistry and held the editorship for thirty-five years. At the completion of the contract with Stanford University Press, the advisory committee of the journal, which included Carl L. Alsberg, Denis Hoagland, and Carl L. A. Schmidt, decided to assume a legal identity as the journal's publisher, though keeping Stanford University Press as the printer. On December 12, 1934, they submitted articles of incorporation with the California Secretary of State to create Annual Review of Biochemistry, Ltd., which was organized as a nonprofit. In February 1938, the name was changed to Annual Reviews, Inc. Prior to World War II, about half of all review articles published each volume were from authors outside the US. The war caused international scientific communication to drop off dramatically, with international authorship at 25% in 1947. The breadth of material within each volume lessened when Annual Reviews added new titles in physiology and plant physiology. Editorial processes The Annual Review of Biochemistry is helmed by the editor. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. 
Peer review of accepted manuscripts is undertaken by the editorial committee. Editors of volumes Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. J. Murray Luck (1932–1962; 1965) Esmond E. Snell (1963–1964; 1969–1983) Paul D. Boyer (1966–1968) Charles C. Richardson (1984–2003) Roger D. Kornberg (2004–present) Current editorial committee As of 2022, the editorial committee consists of the editor and the following members: James E. Rothman Gunnar von Heijne Dirk Görlich F. Ulrich Hartl Jesper Q. Svejstrup References Biochemistry Biochemistry journals Annual journals English-language journals Academic journals established in 1932
Annual Review of Biochemistry
[ "Chemistry" ]
878
[ "Biochemistry journals", "Biochemistry literature" ]
16,094,972
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Materials%20Research
The Annual Review of Materials Research is a peer-reviewed journal that publishes review articles about materials science. It has been published by the nonprofit Annual Reviews since 1971, when it was first released under the title the Annual Review of Materials Science. Four people have served as editors, with the current editor Ram Seshadri stepping into the position in 2024. It has an impact factor of 10.6 as of 2024. As of 2023, it is being published as open access, under the Subscribe to Open model. History The Annual Review of Materials Science was first published in 1971 by the nonprofit publisher Annual Reviews, making it their sixteenth journal. Its first editor was Robert Huggins. In 2001, its name was changed to the current form, the Annual Review of Materials Research. The name change was intended "to better reflect the broad appeal that materials research has for so many diverse groups of scientists and not simply those who identify themselves with the academic discipline of materials science." As of 2020, it was published both in print and electronically. It defines its scope as covering significant developments in the field of materials science, including methodologies for studying materials and materials phenomena. As of 2024, Journal Citation Reports gives the journal a 2023 impact factor of 10.6, ranking it forty-ninth of 438 titles in the category "Materials Science, Multidisciplinary". It is abstracted and indexed in Scopus, Science Citation Index Expanded, Civil Engineering Abstracts, INSPEC, and Academic Search, among others. Editorial processes The Annual Review of Materials Research is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee. Editors of volumes Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. Robert Huggins (1971–1993) Elton N. Kaufmann (1994–2000) David R. Clarke (2001–2024) Ram Seshadri (2025-) Current editorial committee As of 2024, the editorial committee consists of the editor and the following members: Don Lipkin Vikram Jayaram Wayne D. Kaplan Christine Luscombe Yang Shen See also List of materials science journals References Materials Research Academic journals established in 1971 Materials science journals English-language journals Annual journals
Annual Review of Materials Research
[ "Materials_science", "Engineering" ]
623
[ "Materials science journals", "Materials science" ]
16,094,982
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Nuclear%20and%20Particle%20Science
The Annual Review of Nuclear and Particle Science is a peer-reviewed academic journal that publishes review articles about nuclear and particle science. As of 2024, Journal Citation Reports lists the journal's 2023 impact factor as 9.1, ranking it second of 22 journal titles in the category "Physics, Nuclear" and third of 30 journal titles in the category "Physics, Particles and Fields". Beginning in 2020, the Annual Review of Nuclear and Particle Science is published open access under the Subscribe to Open (S2O) publishing model. The journal was first created by the National Research Council's Committee on Nuclear Science, which partnered with Annual Reviews to produce the first volume in 1952. The initial title of the journal was Annual Review of Nuclear Science. Annual Reviews published all volumes independently beginning with Volume 3. In 1978, the journal's name was changed to Annual Review of Nuclear and Particle Science. In its history, it has had eight editors, four of whom had tenures of 10 or more years: Emilio Segrè, John David Jackson, Chris Quigg, and Barry R. Holstein. History In the early 1950s, the National Research Council's Committee on Nuclear Science announced its support for an annual volume of review articles that covered recent developments in the field of nuclear science. One of the key proponents of creating the journal was Alberto F. Thompson, who had previously helped establish Nuclear Science Abstracts in 1948. The Committee on Nuclear Science consulted the nonprofit publishing company Annual Reviews for advice, and Annual Reviews agreed to publish the initial and subsequent volumes. Members of the Committee acted as the editorial board for the first volume, which was published in December 1952. Published under the title Annual Review of Nuclear Science, it covered nuclear science developments in 1950. Beginning with Volume 2, James G. Beckerley was editor, with Martin D. Kamen, Donald F. Mastick, and Leonard I. Schiff as associate editors. From Volume 3 onward, Annual Reviews assumed all responsibility for the journal from the National Research Council. In 1978, the journal's name was changed to the Annual Review of Nuclear and Particle Science. This name was judged to be more reflective of the journal's content, which also included particle physics. Under Annual Reviews's Subscribe to Open publishing model, it was announced that the 2020 volume of Annual Review of Nuclear and Particle Science would be published open access, a first for the journal. As of 2020, it was published both in print and electronically. Editorial processes The Annual Review of Nuclear and Particle Science is helmed by the editor. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee. Editors of volumes Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. 
The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. James G. Beckerley 1953–1957 Emilio Segrè 1958–1977 John David Jackson 1978–1993 Chris Quigg 1994–2004 Boris Kayser Appointed 2004, credited 2005–2010 Barry R. Holstein Appointed 2009, credited 2011–2023 Wick C. Haxton, Michael E. Peskin Appointed 2023. References Nuclear and Particle Science Nuclear physics journals Annual journals Academic journals established in 1952 English-language journals Physics review journals
Annual Review of Nuclear and Particle Science
[ "Physics" ]
797
[ "Nuclear physics journals", "Nuclear physics" ]
16,094,992
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Pharmacology%20and%20Toxicology
The Annual Review of Pharmacology and Toxicology is a peer-reviewed academic journal that publishes review articles about pharmacology and toxicology. It was first published in 1961 as the Annual Review of Pharmacology, changing its name in 1976 to the present title. As of 2023, Annual Review of Pharmacology and Toxicology is being published as open access, under the Subscribe to Open model. As of 2024, Journal Citation Reports lists the journal's 2023 impact factor as 11.2, ranking it second of 106 journal titles in the category "Toxicology" and ninth of 354 titles in the category "Pharmacology & Pharmacy". History The Annual Review of Pharmacology was first published in 1961. Its founding editor was Windsor C. Cutting, who was also the founding editor of the Annual Review of Medicine in 1950. Its initial editorial committee overlapped with that of Pharmacological Reviews so that the two journals would not duplicate each other's efforts. In 1976 its name was changed to its current version, the Annual Review of Pharmacology and Toxicology. It defines its scope as covering various aspects of pharmacology and toxicology, including biochemical receptors, transporters, enzymes, drug development, the immune system, central and autonomic nervous systems, gastrointestinal system, cardiovascular system, endocrine system, and respiratory system. It is abstracted and indexed in Scopus, Science Citation Index Expanded, MEDLINE, Aquatic Sciences and Fisheries Abstracts, and Academic Search, among others. Editorial processes The Annual Review of Pharmacology and Toxicology is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee. Editors of volumes Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. Windsor C. Cutting (1961–1964) Henry W. Elliott (1965–1976) Robert George and Ronald Okun (1977–1989) Robert George (1990) Arthur K. Cho (1991–2012) Paul A. Insel (2013–present) Current editorial committee As of 2022, the editorial committee consists of the editor and the following members: Susan G. Amara Terrence F. Blaschke Urs A. Meyer Amrita Ahluwalia Max Costa Annette C. Dolphin Lorraine J. Gudas Dan M. Roden See also List of pharmaceutical sciences journals References Pharmacology and Toxicology English-language journals Annual journals Academic journals established in 1961 Pharmacology journals Toxicology journals
Annual Review of Pharmacology and Toxicology
[ "Environmental_science" ]
687
[ "Toxicology journals", "Toxicology" ]
16,095,000
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Plant%20Biology
Annual Review of Plant Biology is a peer-reviewed scientific journal published by Annual Reviews. It was first published in 1950 as the Annual Review of Plant Physiology. Sabeeha Merchant has been the editor since 2005, making her the longest-serving editor in the journal's history after Winslow Briggs (1973–1993). Journal Citation Reports lists the journal's 2023 impact factor as 21.3, ranking it first of 265 journal titles in the category "Plant Sciences". As of 2023, it is being published as open access, under the Subscribe to Open model. History Beginning in 1947, the publishing nonprofit Annual Reviews began asking plant physiologists if it would be useful to have an annual journal that published review articles summarizing the recent literature in the field. Responses indicated that this would be very favorable, and the Annual Review of Plant Physiology published its first volume in 1950. Its founding editor was Daniel I. Arnon. It was thus the seventh journal title to be published by Annual Reviews. Its scope was somewhat reduced by the publication of the Annual Review of Phytopathology, first released in 1963. In 1988, its name changed to the Annual Review of Plant Physiology and Plant Molecular Biology. In the 1990s, it began including color illustrations and was published online for the first time. Its name was changed once again in 2002 to its current version, the Annual Review of Plant Biology. As of 2020, it was published both in print and electronically. The journal covers developments in the field of plant biology, including cell biology, genetics, genomics, molecular biology, cell differentiation, tissue, acclimation (including adaptation), and methods. The journal is abstracted and indexed in the following databases. Chemical Abstracts Service MEDLINE/PubMed Science Citation Index BIOSIS Previews Editorial processes The Annual Review of Plant Biology is helmed by the editor. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee. Editors of volumes Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. Daniel I. Arnon (1950–1955) Lawrence Rogers Blinks (1956) Alden Springer Crafts (1957–1959) Leonard Machlis (1959–1972) Winslow Briggs (1973–1993) Russell L. Jones (1994–2001) Deborah Delmer (2002–2004) Sabeeha Merchant (2005–present) Current editorial committee As of 2022, the editorial committee consists of the editor and the following members: Wilhelm Gruissem Donald R. Ort Ian T. Baldwin Magdalena Bezanilla Xiaofeng Cao Mark Estelle Patricia León Keiko U. 
Torii Cyril Zipfel References External links Annual Review of Plant Biology at SCImago Journal Rank Molecular and cellular biology journals Academic journals established in 1950 Botany journals English-language journals Plant Biology Annual journals 1950 establishments in California
Annual Review of Plant Biology
[ "Chemistry" ]
737
[ "Molecular and cellular biology journals", "Molecular biology" ]
16,095,249
https://en.wikipedia.org/wiki/History%20of%20poison
The history of poison stretches from before 4500 BCE to the present day. Poisons have been used for many purposes across the span of human existence, most commonly as weapons, anti-venoms, and medicines. Poison has been heavily studied in toxicology, among other sciences, and its use has led to several technological innovations. Poison was discovered in ancient times, and was used by ancient tribes and civilizations as a hunting tool to quicken and ensure the death of their prey or enemies. This use of poison grew more advanced, and many of these ancient peoples began forging weapons designed specifically for poison enhancement. Later in history, particularly at the time of the Roman Empire, one of the more prevalent uses was assassination. As early as 331 BCE, poisonings executed at the dinner table or in drinks were reported, and the practice became a common occurrence. The use of fatal substances was seen among every social class; the nobility would often use it to dispose of unwanted political or economic opponents. In Medieval Europe, poison became a more popular form of killing, though cures surfaced for many of the more widely known poisons. This was stimulated by the increased availability of poisons; shops known as apothecaries, selling various medicinal wares, were open to the public, and from there, substances that were traditionally used for curative purposes were employed for more sinister ends. At approximately the same time, in the Middle East, Arabs developed a form of arsenic that is odorless and transparent, making the poison difficult to detect. This "poison epidemic" was also prevalent in parts of Asia at this time. Over the centuries, the variety of harmful uses of poisons continued to increase. The means for curing these poisons also advanced in parallel. In the modern world, intentional poisoning is less common than in the Middle Ages. Rather, the more common concern is the risk of accidental poisoning from everyday substances and products. Constructive uses for poisons have increased considerably in the modern world. Poisons are now used as pesticides, disinfectants, cleaning solutions, and preservatives. Nonetheless, poison continues to be used as a hunting tool in remote parts of developing countries. Origins of poison Archaeological findings prove that while ancient mankind used conventional weapons such as axes and clubs, and later swords, they sought more subtle, destructive means of causing death—something that could be achieved through poison. Grooves for storing or holding poisons such as tubocurarine have been plainly found in their hunting weapons and tools, showing that early humans had discovered poisons of varying potency and applied them to their weapons. Some speculate that the use and existence of these strange and noxious substances was kept secret within the more important and higher-ranked members of a tribe or clan, and that they were seen as emblems of a greater power. This may have also given birth to the concept of the stereotypical "medicine man" or "witch doctor". Once the use and danger of poison was realized, it became apparent that something had to be done. Mithridates VI, King of Pontus (an ancient Hellenistic state of northern Anatolia) from around 114–63 BC, lived in constant fear of being assassinated through poison. He became a hard-working pioneer in the search for a cure for poisons. In his position of power, he was able to test poisons on criminals facing execution, and then to test possible antidotes on them. 
He was paranoid to the point that he administered daily amounts of poisons in an attempt to make himself immune to as many poisons as he could. Eventually, he discovered a formula that combined small portions of dozens of the best-known herbal remedies of the time, which he named Mithridatium. This was kept secret until his kingdom was invaded by Pompey the Great, who took it back to Rome. After Mithridates' defeat by Pompey, his antidote prescriptions and notes on medicinal plants were taken by the Romans and translated into Latin. Pliny the Elder describes over 7000 different poisons. One he describes as "The blood of a duck found in a certain district of Pontus, which was supposed to live on poisonous food, and the blood of this duck was afterwards used in the preparation of the Mithridatum, because it fed on poisonous plants and suffered no harm." India Indian surgeon Sushruta defined the stages of slow poisoning and the remedies of slow poisoning. He also mentions antidotes and the use of traditional substances to counter the effects of poisoning. Poisoned weapons were used in ancient India, and war tactics in ancient India have references to poison. A verse in Sanskrit reads "Jalam visravayet sarmavamavisravyam ca dusayet," which translates to "Waters of wells were to be mixed with poison and thus polluted." Chānakya (–283 BC), also known as Kautilya, was adviser and prime minister to the first Maurya Emperor Chandragupta (–293 BC). Kautilya suggested employing means such as seduction, secret use of weapons, and poison for political gain. He also urged detailed precautions against assassination—tasters for food and elaborate ways to detect poison. In addition, the death penalty for violations of royal decrees was frequently administered through the use of poison. Egypt Unlike many civilizations, records of Egyptian knowledge and use of poisons can only be dated back to approximately 300 BC. However, it is believed that the earliest known Egyptian pharaoh, Menes, studied the properties of poisonous plants and venoms, according to early records. The Egyptians are also thought to have gained knowledge of elements such as antimony, copper, crude arsenic, lead, opium, and mandrake (among others), which are mentioned in papyri. Egyptians are now thought to be the first to master distillation, and to manipulate the poison that can be retrieved from apricot kernels. Cleopatra is said to have poisoned herself with an asp after hearing of Marc Antony's demise. Prior to her death, she was said to have sent many of her maidservants to act as guinea pigs to test different poisons, including belladonna, henbane, and the strychnine tree's seed. After this, the alchemist Agathodaemon (around AD 300) spoke of a mineral that when mixed with natron produced a 'fiery poison'. He described this poison as 'disappearing in water', giving a clear solution. Emsley speculates that the 'fiery poison' was arsenic trioxide, the unidentified mineral presumably being either realgar or orpiment, due to the relation between the unidentified mineral and his other writings. Rome In Roman times, poisoning carried out at the dinner table or common eating or drinking area was neither unheard of nor even uncommon, and was happening as early as 331 BC. These poisonings would have been used for self-advantageous reasons in every class of the social order. 
The writer Livy describes the poisoning of members of the upper class and nobles of Rome, and the Roman emperor Nero is known to have favored the use of poisons on his relatives, even hiring a personal poisoner. His preferred poison was said to be cyanide. Nero's predecessor, Claudius, was allegedly poisoned with mushrooms or poisonous herbs. However, accounts of the way Claudius died vary greatly. Halotus, his taster, Gaius Stertinius Xenophon, his doctor, and the infamous poisoner Locusta have all been accused of possibly being the administrator of the fatal substance, but Agrippina, his final wife, is considered to be the most likely to have arranged his murder and may have even administered the poison herself. Some report that he died after prolonged suffering following a single dose at his evening meal, while some say that he recovered somewhat, only to be poisoned once more by a feather dipped in poison which was pushed down his throat under the pretense of helping him to vomit, or by poisoned gruel or an enema. Agrippina is considered to be the murderer, because she was ambitious for her son, Nero, and Claudius had become suspicious of her intrigues. Later imperial Asia Despite the negative effects of poison, which were so evident in these times, cures were being found in poison, even at a time when it was hated by most of the general public. An example can be found in the works of the Iranian-born Persian physician, philosopher, and scholar Rhazes, writer of Secret of Secrets, a long catalogue of chemical compounds, minerals, and apparatus. Rhazes was the first to distil alcohol and use it as an antiseptic, and he suggested that mercury be used as a laxative. He also made discoveries relating to a mercury chloride called corrosive sublimate. An ointment derived from this sublimate was used to cure what Rhazes described as 'the itch', which is now referred to as scabies. This proved an effective treatment because of mercury's poisonous nature and ability to penetrate the skin, allowing it to eliminate the disease and the itch. Nazi suicides by poison Nazi war leader Hermann Göring used cyanide to kill himself the night before he was supposed to be hanged during the Nuremberg trials. Adolf Hitler also took a capsule of cyanide, biting down on it as he shot himself in the right temple shortly before the fall of Berlin, alongside his wife, Eva Braun. Present day In the late 20th century, an increasing number of products used in everyday life proved to be poisonous. The risk of being poisoned nowadays lies more in the accidental factor, where poison may be ingested or absorbed by accident. Poisoning is the 4th most common cause of death among young people. Accidental ingestions are most common in children less than 5 years old. However, hospital and emergency facilities are much enhanced compared to the first half of the 20th century and before, and antidotes are more available. Antidotes have been found for many of the most commonly known poisons. However, poison still exists as a murderous entity today, though it is not as popular a form of murder as it was in past times, probably because of the wider range of ways to kill people, better detection, and other factors that must be taken into consideration. One of the more recent deaths by poisoning was that of Russian dissident Alexander Litvinenko in 2006 from lethal polonium-210 radiation poisoning. 
Other uses Today, poison is used for a wider variety of purposes than it used to be. For example, poison can be used to get rid of unwanted pest infestations or to kill weeds. Such chemicals, known as pesticides, have been known to be used in some form since about 2500 BC. However, the use of pesticides has increased staggeringly since 1950, and presently approximately 2.5 million tons of industrial pesticides are used each year. Other poisons can also be used to preserve foods and building materials. In culture Today, among many peoples of developing regions in parts of Africa, South America and Asia, the use of poison as an actual weapon of hunting and attack still endures. In Africa, certain arrow poisons are made using plant ingredients, such as those taken from Acokanthera, which contains the cardiac glycoside ouabain, or from oleander and milkweeds. Poisoned arrows are also still used in the jungle areas of Assam, Burma and Malaysia. The ingredients for the creation of these poisons are mainly extracted from plants of the Antiaris, Strychnos and Strophanthus genera, and Antiaris toxicaria (a tree of the mulberry and breadfruit family), for example, is used on the island of Java in Indonesia, as well as several of its surrounding islands. The juice or liquid extracts are smeared on the arrowhead and inflict paralysis, convulsions, or cardiac arrest on the target virtually on impact, owing to the speed with which the extracts take effect. In addition to plant-based poisons, others are made from animals. For example, the larvae or pupae of a beetle genus of the northern Kalahari Desert are used to create a slow-acting poison that is useful for hunting. The beetle's contents are squeezed directly onto the arrowhead; plant sap is then mixed in and serves as an adhesive. Alternatively, a powder made from the dead, eviscerated larva can be used instead of the sap. See also Visha Kanya Chinese alchemical elixir poisoning Forensic science List of chemical elements List of Extremely Hazardous Substances List of poisonings Poison Toxicity References and notes Further reading External links History of Poisons www.poison.org Dark History of Poison Arsenic Poisoning History "The Savior from Demise: A Book on Withstanding the Harms of Deadly Poisons" from 1431 (in Arabic) Poisons Poison
History of poison
[ "Environmental_science" ]
2,692
[ "Poisons", "Toxicology" ]
16,095,402
https://en.wikipedia.org/wiki/Rhythmic%20spring
A rhythmic spring (also: ebb and flow spring, periodic spring, intermittent spring) is a cold water spring from which the flow of water either varies or starts and stops entirely, over a fairly regular time-scale of minutes or hours. Compared to continuously flowing springs, rhythmic springs are uncommon, with the number worldwide estimated in 1991 to be around one hundred. Theory Although the cause of the periodicity in flow is not known for certain, the most accepted theory (first postulated in the early 18th century) is that as groundwater flows continuously into a cavern, it fills a narrow tube that leads upwards from near the base of the cavern, then downwards to the spring. As the water level reaches the high point of the tube, it creates a siphon effect, sucking water out of the chamber. Eventually air rushes into the tube and breaks the siphon, stopping the flow if there is no other source feeding the spring, or reducing the flow if there is a continuous flow from another non-siphon source. In 2006 the University of Utah studied the Intermittent Spring in Swift Creek canyon in Star Valley, Wyoming, United States. Kip Solomon, a hydrologist at the university, concluded that "The spring water's gas content has now been tested [...]. The data strongly suggests the water was exposed to air underground; strong support for the siphon theory." Notable rhythmic springs The Intermittent Spring in Wyoming, mentioned above, is the largest rhythmic spring in the world. The Gihon Spring in the City of David in Jerusalem used to be a rhythmic spring before modern overpumping lowered the level of the underground water table. It was of great historical, archaeological, and cultural importance because it made possible the human settlement of ancient Jerusalem. Pliny the Younger – who is famous for his accurate description of the Vesuvius eruption of AD 79 – also accurately described a rhythmic spring in the last letter of Book IV. This rhythmic spring still acts on the same time scale today (Villa Pliniana spring in Como, Italy). Rhythmic springs in Serbia In Serbia there are four intermittent springs: Bjeluška Potajnica, near Arilje in western Serbia Homoljska Potajnica, near Žagubica in eastern Serbia Kučevska Potajnica (Zviška Potajnica), near Kučevo in eastern Serbia Promuklica, near Tutin in south-western Serbia References Ognjen Bonacci & Davor Bojanić, Rhythmic Karst Springs, Hydrological Sciences Journal, Feb 1991 External links Zaganjalka spring (Russia, Tchelyabinsk area - in Russian) Springs (hydrology)
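The siphon theory described above can be caricatured with a simple fill-and-drain model: a chamber fills at a constant inflow rate; when the water level reaches the crest of the siphon tube, the chamber drains at a faster rate until the level falls below the tube intake and air breaks the siphon. The following toy simulation is a hypothetical illustration of that mechanism only, not a hydrological model; all rates and levels are made-up parameters.

```python
def rhythmic_spring(inflow=1.0, drain=4.0, crest=100.0, intake=20.0,
                    dt=0.1, t_end=200.0):
    """Toy siphon model: returns (time, outflow) samples.

    The chamber fills at `inflow`; once the level reaches `crest` the
    siphon engages and the chamber empties at `drain` until the level
    drops to `intake`, where air enters the tube and breaks the siphon.
    """
    level, siphoning, t = 0.0, False, 0.0
    samples = []
    while t < t_end:
        if siphoning:
            level += (inflow - drain) * dt   # draining faster than filling
            if level <= intake:
                siphoning = False            # air breaks the siphon
        else:
            level += inflow * dt             # quiet filling phase
            if level >= crest:
                siphoning = True             # siphon engages
        samples.append((t, drain if siphoning else 0.0))
        t += dt
    return samples

# The outflow alternates between zero and `drain` on a regular period,
# mimicking the observed ebb-and-flow behaviour of a rhythmic spring.
flows = rhythmic_spring()
print(sum(1 for _, q in flows if q > 0) / len(flows))  # fraction of time flowing
```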
Rhythmic spring
[ "Environmental_science" ]
549
[ "Hydrology", "Springs (hydrology)" ]
16,096,622
https://en.wikipedia.org/wiki/Aluminium-26
Aluminium-26 (26Al, Al-26) is a radioactive isotope of the chemical element aluminium, decaying by either positron emission or electron capture to stable magnesium-26. The half-life of 26Al is 717,000 years. This is far too short for the isotope to survive as a primordial nuclide, but a small amount of it is produced by collisions of atoms with cosmic ray protons. Decay of aluminium-26 also produces gamma rays and x-rays. The x-rays and Auger electrons are emitted by the excited atomic shell of the daughter 26Mg after the electron capture which typically leaves a hole in one of the lower sub-shells. Because it is radioactive, it is typically stored behind a substantial thickness of lead. Contact with 26Al may result in radiological contamination. This necessitates special tools for transfer, use, and storage. Dating Aluminium-26 can be used to calculate the terrestrial age of meteorites and comets. It is produced in significant quantities in extraterrestrial objects via spallation of silicon alongside beryllium-10, though after falling to Earth, 26Al production ceases and its abundance relative to other cosmogenic nuclides decreases. The absence of aluminium-26 sources on Earth is a consequence of Earth's atmosphere shielding silicon at the surface and in the lower troposphere from interaction with cosmic rays. Consequently, the amount of 26Al in a sample can be used to calculate the date the meteorite fell to Earth. Occurrence in the interstellar medium The gamma ray emission at 1809 keV from the decay of aluminium-26 was the first observed gamma-ray emission line from the Galactic Center. The observation was made by the HEAO-3 satellite in 1984. 26Al is mainly produced in supernovae, which eject many radioactive nuclides into the interstellar medium. The isotope is believed to be crucial for the evolution of planetary objects, providing enough heat to melt and differentiate accreting planetesimals. This is known to have happened during the early history of the asteroids 1 Ceres and 4 Vesta. 26Al has been hypothesized to have played a role in the unusual shape of Saturn's moon Iapetus. Iapetus is noticeably flattened and oblate, indicating that it rotated significantly faster early in its history, with a rotation period possibly as short as 17 hours. Heating from 26Al could have provided enough heat in Iapetus to allow it to conform to this rapid rotation period, before the moon cooled and became too rigid to relax back into hydrostatic equilibrium. The presence of the aluminium monofluoride molecule as the 26Al isotopologue in CK Vulpeculae, which is a nova of unknown type, constitutes the first solid evidence of an extrasolar radioactive molecule. Aluminium-26 in the early Solar System In considering the known melting of small planetary bodies in the early Solar System, H. C. Urey noted that the naturally occurring long-lived radioactive nuclei (40K, 238U, 235U and 232Th) were insufficient heat sources. He proposed that the heat sources from short-lived nuclei from newly formed stars might be the source and identified 26Al as the most likely choice. This proposal was made well before the general problems of stellar nucleosynthesis of the nuclei were known or understood. This conjecture was based on the discovery of 26Al in a Mg target by Simanton, Rightmire, Long & Kohman. Their search was undertaken because hitherto there was no known radioactive isotope of Al that might be useful as a tracer. Theoretical considerations suggested that a state of 26Al should exist. 
The lifetime of 26Al was not then known; it was only estimated between 10^4 and 10^6 years. The search for 26Al took place over many years, long after the discovery of the extinct radionuclide 129I, which showed that contributions from stellar sources formed ~10^8 years before the Sun had contributed to the Solar System mix. The asteroidal materials that provide meteorite samples were long known to be from the early Solar System. The Allende meteorite, which fell in 1969, contained abundant calcium–aluminium-rich inclusions (CAIs). These are very refractory materials and were interpreted as being condensates from a hot solar nebula. It was then discovered that the oxygen in these objects was enhanced in 16O by ~5% while the 17O/18O was the same as terrestrial. This clearly showed a large effect in an abundant element that might be nuclear, possibly from a stellar source. These objects were then found to contain strontium with very low 87Sr/86Sr, indicating that they were a few million years older than previously analyzed meteoritic material and that this type of material would merit a search for 26Al. 26Al is only present today in Solar System materials as the result of cosmic-ray reactions on unshielded materials at an extremely low level. Thus, any original 26Al in the early Solar System is now extinct. Establishing the presence of 26Al in very ancient materials requires demonstrating that samples contain clear excesses of 26Mg/24Mg which correlate with the ratio of 27Al/24Mg. The stable 27Al is then a surrogate for extinct 26Al. The different 27Al/24Mg ratios are coupled to different chemical phases in a sample and are the result of normal chemical separation processes associated with the growth of the crystals in the CAIs. Clear evidence of the presence of 26Al at an abundance ratio of 5×10^−5 was shown by Lee et al. The value (26Al/27Al ~ 5×10^−5) has now been generally established as the high value in early Solar System samples and has been generally used as a refined time scale chronometer for the early Solar System. Lower values imply a more recent time of formation. If this 26Al is the result of pre-solar stellar sources, then this implies a close connection in time between the formation of the Solar System and the production in some exploding star. Many materials which had been presumed to be very early (e.g. chondrules) appear to have formed a few million years later. Other extinct radioactive nuclei, which clearly had a stellar origin, were then being discovered. That 26Al was present in the interstellar medium as a major gamma ray source was not explored until the development of the high-energy astronomical observatory program. The HEAO-3 spacecraft with cooled Ge detectors allowed the clear detection of 1.808 MeV gamma lines from the central part of the galaxy from a distributed 26Al source. This represents a quasi-steady-state inventory corresponding to about two solar masses of distributed 26Al. This discovery was greatly expanded on by observations from the Compton Gamma Ray Observatory using the COMPTEL telescope in the galaxy. Subsequently, the 60Fe lines (1.173 MeV and 1.333 MeV) were also detected, showing the relative rates of decays from 60Fe to 26Al to be 60Fe/26Al ~ 0.11. In pursuit of the carriers of 22Ne in the sludge produced by chemical destruction of some meteorites, carrier grains in micron size, acid-resistant ultra-refractory materials (e.g. C, SiC) were found by E. Anders & the Chicago group. 
The carrier grains were clearly shown to be circumstellar condensates from earlier stars and often contained very large enhancements in 26Mg/24Mg from the decay of 26Al, with 26Al/27Al sometimes approaching 0.2. These studies on micron-scale grains were possible as a result of the development of secondary ion mass spectrometry at high mass resolution with a focused beam, developed by G. Slodzian & R. Castaing with the CAMECA Co. The production of 26Al by cosmic ray interactions in unshielded materials is used as a monitor of the time of exposure to cosmic rays. The amounts are far below the initial inventory that is found in very early solar system debris. Metastable states Before 1954, the half-life of aluminium-26m was measured to be 6.3 seconds. After it was theorized that this could be the half-life of a metastable state (isomer) of aluminium-26, the ground state was produced by bombardment of magnesium-26 and magnesium-25 with deuterons in the cyclotron of the University of Pittsburgh. The first half-life was determined to be in the range of 10^6 years. The Fermi beta decay half-life of the aluminium-26 metastable state is of interest in the experimental testing of two components of the Standard Model, namely, the conserved-vector-current hypothesis and the required unitarity of the Cabibbo–Kobayashi–Maskawa matrix. The decay is superallowed. The half-life of 26mAl was precisely remeasured in 2011. See also Isotopes of aluminium Surface exposure dating References Isotopes of aluminium Positron emitters Radionuclides used in radiometric dating
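The terrestrial-age dating described above rests on simple exponential decay: once a meteorite falls to Earth, its 26Al production stops and the inventory decays with the 717,000-year half-life quoted earlier. The following minimal sketch illustrates only that arithmetic; the activity values in the example are invented placeholders, not measurements.

```python
import math

HALF_LIFE_AL26 = 7.17e5                      # years, from the article
DECAY_CONST = math.log(2) / HALF_LIFE_AL26   # lambda, in 1/years

def terrestrial_age(activity_now, activity_at_fall):
    """Years since fall, from N(t) = N0 * exp(-lambda * t),
    so t = ln(N0 / N(t)) / lambda."""
    return math.log(activity_at_fall / activity_now) / DECAY_CONST

# Placeholder activities (arbitrary units): a sample retaining 90% of
# its saturation 26Al activity fell roughly 109,000 years ago.
print(round(terrestrial_age(0.9, 1.0)))
```

In practice the saturation activity is estimated from the sample's exposure history and from companion cosmogenic nuclides such as 10Be, but the core calculation is this single logarithm.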
Aluminium-26
[ "Chemistry" ]
1,848
[ "Radionuclides used in radiometric dating", "Isotopes of aluminium", "Isotopes" ]
16,097,642
https://en.wikipedia.org/wiki/Tack%20strip
Tack strip, also known as gripper rod, carpet gripper, Smoothedge, tackless strip, gripper strip or gripper edge, is a thin piece of wood studded with hundreds of sharp nails or tacks, used in the installation of carpet. Tack strip is nailed, tack side up, to the perimeter of the area being carpeted to help keep it taut. After the underlay is installed, the carpet is cut to fit, stretched over the area and firmly anchored to the edges of the floor by the tack strip. The strip has two functions: to grip the carpet and permanently hold it in place, and to jam the carpet edge into the gap between the tack strip and the wall, giving it a finished look with little effort. This method allows a high-quality, long-lasting installation to be completed quickly and easily. Tack strip was invented by Roy Roberts in 1939. This product revolutionized the power stretch method still used today for installing tufted carpet. "Gripper Edge" and "Smoothedge" were original trademarks used by Roy Roberts and his companies. References Fasteners Rugs and carpets
Tack strip
[ "Engineering" ]
230
[ "Construction", "Fasteners" ]
16,100,023
https://en.wikipedia.org/wiki/Triethylsilane
Triethylsilane is the organosilicon compound with the formula (C2H5)3SiH. It is a trialkylsilane. The Si-H bond is reactive. It was first discovered by Albert Ladenburg in 1872 among the products of reduction of tetraethyl orthosilicate with sodium and diethylzinc. He also prepared it by a stepwise reduction via ethoxytriethylsilane and named it silicoheptyl hydride, reflecting the idea of a silicon compound analogous to a seven-carbon hydrocarbon. This colorless liquid is used in organic synthesis as a reducing agent and as a precursor to silyl ethers. As one of the simplest trialkylsilanes that is a liquid at room temperature, triethylsilane is often used in studies of hydrosilylation catalysis. Additional reading References Reducing agents Carbosilanes
Triethylsilane
[ "Chemistry" ]
191
[ "Redox", "Reducing agents" ]
16,102,600
https://en.wikipedia.org/wiki/Aniline%20point
The aniline point of an oil is defined as the minimum temperature at which equal volumes of aniline (C6H5NH2) and lubricant oil are miscible, i.e. form a single phase upon mixing. The value gives an approximation of the content of aromatic compounds in the oil, since the miscibility of aniline, which is itself an aromatic compound, suggests the presence of similar (i.e. aromatic) compounds in the oil. The lower the aniline point, the greater the content of aromatic compounds in the oil. The aniline point serves as a reasonable proxy for the aromaticity of oils consisting mostly of saturated hydrocarbons (i.e. alkanes, paraffins) or unsaturated compounds (mostly aromatics). Significant chemical functionalization of the oil (chlorination, sulfonation, etc.) can interfere with the measurement, due to changes to the solvency of the functionalized oil. The aniline point indicates whether an oil is likely to damage elastomers (rubber compounds) that come in contact with the oil. Determination of aniline point Equal volumes of aniline and oil are stirred continuously in a test tube and heated until the two merge into a homogeneous solution. Heating is stopped and the tube is allowed to cool. The temperature at which the two phases separate out is recorded as the aniline point. References Gupta, O.P. Fuels, Furnaces, Refractories. See also Lubricant Grease (lubricant) Oil analysis Viscosity index Saponification value Cloud point Pour point Flash point Fire point Softening point Glass transition temperature (Tg) Chemical properties
Aniline point
[ "Chemistry" ]
349
[ "nan", "Organic chemistry stubs" ]
16,102,721
https://en.wikipedia.org/wiki/Electrochemical%20reaction%20mechanism
In electrochemistry, an electrochemical reaction mechanism is the step-by-step sequence of elementary steps, involving at least one outer-sphere electron transfer, by which an overall electrochemical reaction occurs. Overview Elementary steps like proton-coupled electron transfer and the movement of electrons between an electrode and substrate are special to electrochemical processes. Electrochemical mechanisms are important to all redox chemistry including corrosion, redox-active photochemistry including photosynthesis, other biological systems often involving electron transport chains, and other forms of homogeneous and heterogeneous electron transfer. Such reactions are most often studied with standard three-electrode techniques such as cyclic voltammetry (CV), chronoamperometry, and bulk electrolysis, as well as more complex experiments involving rotating disk electrodes and rotating ring-disk electrodes. In the case of photoinduced electron transfer the use of time-resolved spectroscopy is common. Formalism When describing electrochemical reactions, an "E" and "C" formalism is often employed. The E represents an electron transfer; sometimes EO and ER are used to represent oxidations and reductions respectively. The C represents a chemical reaction, which can be any elementary reaction step and is often called a "following" reaction. In coordination chemistry, common C steps which "follow" electron transfer are ligand loss and association. The ligand loss or gain is associated with a geometric change in the complex's coordination sphere, as in the generic scheme MLn + e− → [MLn]− (an E step) followed by [MLn]− → [MLn−1]− + L (a C step). A reaction of this kind would be called an EC reaction. Characterization The production of [MLn−1]− in the reaction above by the "following" chemical reaction produces a species directly at the electrode that could display redox chemistry anywhere in a CV plot or none at all. The change in coordination from MLn to MLn−1 often prevents the observation of "reversible" behavior during electrochemical experiments like cyclic voltammetry. On the forward scan the expected diffusion wave is observed, in the example above the reduction of MLn to [MLn]−. However, on the return scan the corresponding wave is not observed; in the example above this would be the wave corresponding to the oxidation of [MLn]− back to MLn. In our example there is no [MLn]− to oxidize, since it has been converted to [MLn−1]− through ligand loss. The return wave can sometimes be observed by increasing the scan rate, so that the reverse electron transfer is observed before the following chemical reaction has time to take place. This often requires the use of ultramicroelectrodes (UME) capable of very high scan rates of 0.5 to 5.0 MV/s. Plots of forward and reverse peak ratios against modified forms of the scan rate often identify the rate of the chemical reaction. It has become a common practice to model such plots with electrochemical simulations. The results of such studies are of disputed practical relevance, since simulation requires excellent experimental data, better than that routinely obtained and reported. Furthermore, the parameters of such studies are rarely reported and often include an unreasonably high variable-to-data ratio. A better practice is to look for a simple, well-documented relationship between observed results and implied phenomena, or to investigate a specific physical phenomenon using an alternative technique such as chronoamperometry or those involving a rotating electrode. Electrocatalysis Electrocatalysis is a catalytic process involving oxidation or reduction through the direct transfer of electrons. 
Electrocatalysis Electrocatalysis is a catalytic process involving oxidation or reduction through the direct transfer of electrons. The electrochemical mechanisms of electrocatalytic processes are a common research subject for various fields of chemistry and associated sciences. This is important to the development of water-oxidation and fuel-cell catalysts. For example, the half-reaction complementary to water oxidation is the reduction of protons to hydrogen. This reaction requires some form of catalyst to avoid a large overpotential in the delivery of electrons. A catalyst can accomplish this reaction through different reaction pathways; two examples are listed below for a homogeneous catalyst. Pathway 1 Pathway 2 Pathway 1 is described as an ECECC, while pathway 2 would be described as an ECC. If the catalyst were being considered for solid support, pathway 1, which requires only a single metal center to function, would be a viable candidate. In contrast, a solid support system which separates the individual metal centers would render a catalyst that operates through pathway 2 useless, since it requires a step which is second order in metal center. Determining an electrochemical reaction mechanism proceeds much as for other mechanism determinations, with some techniques unique to electrochemistry. In most cases electron transfer can be assumed to be much faster than the chemical reactions. Unlike stoichiometric reactions, where the steps between the starting materials and the rate-limiting step dominate, in catalysis the observed reaction order is usually dominated by the steps between the catalytic resting state and the rate-limiting step. "Following" physical transformations During potential-variant experiments, it is common to pass through a redox couple in which the major species is transformed from one that is soluble in the solution to one that is insoluble. This results in a nucleation process in which a new species plates out on the working electrode. If a species has been deposited on the electrode during a potential sweep, then on the return sweep a stripping wave is usually observed. While the nucleation wave may be pronounced or difficult to detect, the stripping wave is usually very distinct. Often these phenomena can be avoided by reducing the concentration of the complex in solution. Neither of these physical state changes involves a chemical reaction mechanism, but they are worth mentioning here since the resulting data are at times confused with those of some chemical reaction mechanisms. References Electrochemical concepts
Electrochemical reaction mechanism
[ "Chemistry" ]
1,085
[ "Electrochemistry", "Electrochemical concepts" ]
16,103,661
https://en.wikipedia.org/wiki/List%20of%20GPS%20satellites
83 Global Positioning System navigation satellites have been built: 31 are launched and operational, 3 are in reserve or testing, 43 are retired, 2 were lost during launch, and 1 prototype was never launched. 3 Block III satellites have completed construction and have been declared "Available For Launch" (AFL). The next launch is GPS III SV08, currently targeted for 2025. The constellation requires a minimum of 24 operational satellites, and allows for up to 32; typically, 31 are operational at any one time. A GPS receiver needs four satellites to work out its position in three dimensions. SVNs are "space vehicle numbers", the serial numbers assigned to each GPS satellite. PRNs are the "pseudo-random noise" sequences, or Gold codes, that each satellite transmits to differentiate itself from other satellites in the active constellation. After being launched, GPS satellites enter a period of testing before their signals are set to "Healthy". During normal operations, certain signals may be set to "Unhealthy" to accommodate updates or testing. After decommissioning, most GPS satellites become on-orbit spares and may be recommissioned if needed. Permanently retired satellites are sent to a higher, less congested disposal orbit where their fuel is vented, their batteries are intentionally depleted, and communication is switched off. Satellites Satellites by launch date Satellites by block Orbital slots (by SVN) Refer to GPS Constellation Status for the most up-to-date information. Numbers in parentheses refer to non-operational satellites. Once launched, GPS satellites do not change their plane assignment, but slot assignments are somewhat arbitrary and are subject to change. PRN status by satellite block 31 of 32 PRNs are in use; PRN 22 is unassigned. Two additional satellites are designated as on-orbit spares. PRN to SVN history This section makes it possible to determine the PRN associated with an SVN at a particular epoch. For example, SVN 049 had been assigned PRNs 01, 24, 27, and 30 at different times of its lifespan, whereas PRN 01 had been assigned to SVNs 032, 037, 049, 035, and 063 at different epochs. This information can be found in the IGS ANTEX file, which uses the convention "GNN" and "GNNN" for PRNs and SVNs, respectively.
For example, SVN 049 is described as:

BLOCK IIR-M  G01  G049  2009-014A            TYPE / SERIAL NO
  2009  3 24  0  0  0.0000000                VALID FROM
  2011  5  6 23 59 59.9999999                VALID UNTIL
BLOCK IIR-M  G24  G049  2009-014A            TYPE / SERIAL NO
  2012  2  2  0  0  0.0000000                VALID FROM
  2012  3 14 23 59 59.9999999                VALID UNTIL
BLOCK IIR-M  G24  G049  2009-014A            TYPE / SERIAL NO
  2012  8  9  0  0  0.0000000                VALID FROM
  2012  8 22 23 59 59.9999999                VALID UNTIL
BLOCK IIR-M  G27  G049  2009-014A            TYPE / SERIAL NO
  2012 10 18  0  0  0.0000000                VALID FROM
  2013  5  9 23 59 59.9999999                VALID UNTIL
BLOCK IIR-M  G30  G049  2009-014A            TYPE / SERIAL NO
  2013  5 10  0  0  0.0000000                VALID FROM

whereas for PRN 01 the following excerpt is relevant:

BLOCK IIA    G01  G032  1992-079A            TYPE / SERIAL NO
  1992 11 22  0  0  0.0000000                VALID FROM
  2008 10 16 23 59 59.9999999                VALID UNTIL
BLOCK IIA    G01  G037  1993-032A            TYPE / SERIAL NO
  2008 10 23  0  0  0.0000000                VALID FROM
  2009  1  6 23 59 59.9999999                VALID UNTIL
BLOCK IIR-M  G01  G049  2009-014A            TYPE / SERIAL NO
  2009  3 24  0  0  0.0000000                VALID FROM
  2011  5  6 23 59 59.9999999                VALID UNTIL
BLOCK IIA    G01  G035  1993-054A            TYPE / SERIAL NO
  2011  6  2  0  0  0.0000000                VALID FROM
  2011  7 12 23 59 59.9999999                VALID UNTIL
BLOCK IIF    G01  G063  2011-036A            TYPE / SERIAL NO
  2011  7 16  0  0  0.0000000                VALID FROM

A table extracted out of the ANTEX file is made available by the Bernese GNSS Software. Planned launches Block III Block IIIF See also List of BeiDou satellites List of Galileo satellites List of GLONASS satellites List of NAVIC satellites References External links Boeing Delta launch schedule United States military launch record NAVCEN GPS Constellation NIMA GPS Constellation Status UNB GPS Constellation Status AGI-CSSI GPS Constellation Status Global Positioning System GPS
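The lookup described in the PRN-to-SVN history section is easy to automate. Below is a minimal sketch that answers "which PRN was SVN 049 broadcasting on a given date?"; it assumes the validity records have already been extracted from the ANTEX file into (prn, svn, from, until) tuples rather than parsing the fixed-width format itself, and an open-ended VALID UNTIL is represented by None.

from datetime import datetime

# (prn, svn, valid_from, valid_until) -- taken from the SVN 049 excerpt above;
# None means the assignment was still current when the file was written.
RECORDS = [
    ("G01", "G049", datetime(2009, 3, 24), datetime(2011, 5, 6, 23, 59, 59)),
    ("G24", "G049", datetime(2012, 2, 2), datetime(2012, 3, 14, 23, 59, 59)),
    ("G24", "G049", datetime(2012, 8, 9), datetime(2012, 8, 22, 23, 59, 59)),
    ("G27", "G049", datetime(2012, 10, 18), datetime(2013, 5, 9, 23, 59, 59)),
    ("G30", "G049", datetime(2013, 5, 10), None),
]

def prn_for_svn(svn, when):
    """Return the PRN a satellite (SVN) was assigned at a given epoch."""
    for prn, rec_svn, start, end in RECORDS:
        if rec_svn == svn and start <= when and (end is None or when <= end):
            return prn
    return None  # no PRN assigned at that epoch (e.g. between assignments)

print(prn_for_svn("G049", datetime(2010, 1, 1)))   # -> G01
print(prn_for_svn("G049", datetime(2014, 1, 1)))   # -> G30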
List of GPS satellites
[ "Technology", "Engineering" ]
1,020
[ "Wireless locating", "GPS satellites", "Aircraft instruments", "Aerospace engineering", "Global Positioning System" ]
16,103,684
https://en.wikipedia.org/wiki/Universal%20Terminology%20eXchange
UTX (Universal Terminology eXchange) is a simple glossary format developed by AAMT (the Asia-Pacific Association for Machine Translation). It is a tab-separated text format that contains minimal information for each term, such as the source-language entry, the target-language entry, and its part of speech. UTX is intended to facilitate rapid creation and quick exchange of human-readable and machine-readable glossaries. Initially, UTX was created to absorb the differences between various user dictionary formats for machine translation. The scope of the format was later expanded to include other purposes, such as glossaries for human translation, natural language processing, thesauri, text-to-speech, and input methods. UTX could be used to improve the efficiency of localization for open source projects. UTX Converter UTX Converter was developed as an open source project by AAMT and is available for free. It has the following functions: Functions for UTX Format checking of a UTX file (UTX 1.11) Extraction of forbidden terms Extraction of the pairs of forbidden terms and approved terms Extraction of the pairs of non-standard terms and approved terms Conversion functions Conversion between UTX and a user dictionary (*.txt file) of ATLAS (Fujitsu) Conversion between UTX and a user dictionary (*.txt file) of The Honyaku (Toshiba) Conversion between UTX and a user dictionary (*.opt file for EJ, *.dic file for JE) of PC/MED/PAT/Legal Transer (Cross Language) Conversion from UTX to a text for MultiTerm import See also TBX Translation memory Terminology Computer-assisted translation External links UTX Home UPF Home (A predecessor to UTX, in Japanese) UTX mailing list Glossary Markup Language (GlossML) An open XML format for storing glossaries. Computer-assisted translation
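To make the tab-separated structure concrete, here is a minimal sketch that writes and reads a tiny UTX-style glossary in Python. The header line and column order are illustrative only; the actual UTX specification defines its own header directives, which are not reproduced in this article.

import csv

# A toy glossary: source term, target term, part of speech.
entries = [
    ("cell", "Zelle", "noun"),
    ("run", "ausführen", "verb"),
]

# Write a UTX-style tab-separated glossary (columns are illustrative).
with open("glossary.utx", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["src", "tgt", "src:pos"])  # hypothetical header row
    writer.writerows(entries)

# Read it back.
with open("glossary.utx", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="\t")
    header = next(reader)
    for src, tgt, pos in reader:
        print(f"{src} -> {tgt} ({pos})")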
Universal Terminology eXchange
[ "Technology" ]
395
[ "Natural language and computing", "Computer-assisted translation" ]
16,105,044
https://en.wikipedia.org/wiki/Kleene%27s%20O
In set theory and computability theory, Kleene's O is a canonical subset of the natural numbers when regarded as ordinal notations. It contains ordinal notations for every computable ordinal, that is, every ordinal below the Church–Kleene ordinal ω₁^CK. Since ω₁^CK is the first ordinal not representable in a computable system of ordinal notations, the elements of O can be regarded as the canonical ordinal notations. Kleene (1938) described a system of notation for all computable ordinals (those less than the Church–Kleene ordinal). It uses a subset of the natural numbers instead of finite strings of symbols. Unfortunately, there is in general no effective way to tell whether some natural number represents an ordinal, or whether two numbers represent the same ordinal. However, one can effectively find notations which represent the ordinal sum, product, and power (see ordinal arithmetic) of any two given notations in Kleene's O; and given any notation for an ordinal, there is a computably enumerable set of notations which contains one element for each smaller ordinal and is effectively ordered. Definition The basic idea of Kleene's system of ordinal notations is to build up ordinals in an effective manner. For members p of O, the ordinal for which p is a notation is written |p|. O and <_O (a partial ordering of Kleene's O) are the smallest sets such that the following holds. 1 ∈ O and |1| = 0. If p ∈ O, then 2^p ∈ O with |2^p| = |p| + 1, and p <_O 2^p. Suppose φ_e is the e-th partial computable function. If φ_e is total, takes values in O, and φ_e(k) <_O φ_e(k+1) for every k, then 3·5^e ∈ O with |3·5^e| = lim_k |φ_e(k)|, and φ_e(k) <_O 3·5^e for each k. If p <_O q and q <_O r, then p <_O r. This definition has the advantages that one can computably enumerate the predecessors of a given ordinal (though not in the <_O ordering) and that the notations are downward closed, i.e., if there is a notation for α and β < α, then there is a notation for β. There are alternate definitions, such as the set of indices of (partial) well-orderings of the natural numbers. Explanation A member p of Kleene's O is called a notation and is meant to give a definition of the ordinal |p|. The successor notations are those p such that |p| is a successor ordinal. In Kleene's O, a successor ordinal is defined in terms of a notation for the ordinal immediately preceding it: a successor notation is of the form 2^q for some other notation q, so that |2^q| = |q| + 1. The limit notations are those p such that |p| is a limit ordinal. In Kleene's O, a notation for a limit ordinal amounts to a computable sequence of notations for smaller ordinals limiting to |p|. Any limit notation is of the form 3·5^e, where the e-th partial computable function φ_e is a total function listing an infinite sequence of notations which are strictly increasing under the order <_O. The limit of the sequence of ordinals |φ_e(k)| is |3·5^e|. Although p <_O q implies |p| < |q|, |p| < |q| does not imply p <_O q. In order for p <_O q, p must be eventually reached when the definition of q is unfolded by repeatedly stepping down through the operations 2^r ↦ r and 3·5^e ↦ φ_e(k) (for k = 0, 1, 2, ...). In other words, p <_O q holds when p is eventually referenced in the definition of the ordinal |q| given by q. A Computably Enumerable Order Extending the Kleene Order For arbitrary natural numbers p and q, say that p ≼ q when q is reachable from p by repeatedly applying the operations r ↦ 2^r and φ_e(k) ↦ 3·5^e (for any k such that φ_e(k) is defined). The relation ≼ agrees with <_O on O, but is computably enumerable: if p ≼ q, then a computer program will eventually find a proof of this fact. For any notation p in O, all q ≼ p are themselves notations in O. For arbitrary p, p is a notation in O exactly when all of the criteria below are met: For all q ≼ p, q is either 1, a power of 2, or of the form 3·5^e for some e. For any 3·5^e ≼ p, φ_e is total and strictly increasing under ≼, i.e. φ_e(k) ≼ φ_e(k+1) for any k. The set {q : q ≼ p} is well-founded under ≼, so that there are no infinite descending sequences. Basic properties of <_O If p <_O q and q <_O r, then p <_O r; but the converse may fail to hold. <_O induces a tree structure on O, so O is well-founded.
<_O only branches at limit ordinals; and at each notation of a limit ordinal, <_O is infinitely branching. Since every computable function has countably many indices, each infinite computable ordinal receives countably many notations; the finite ordinals have unique notations, the notation of n usually being denoted n_O. The first ordinal that doesn't receive a notation is called the Church–Kleene ordinal and is denoted by ω₁^CK. Since there are only countably many computable functions, the ordinal ω₁^CK is evidently countable. The ordinals with a notation in Kleene's O are exactly the computable ordinals. (The fact that every computable ordinal has a notation follows from the closure of this system of ordinal notations under successor and effective limits.) O is not computably enumerable, but there is a computably enumerable relation which agrees with <_O precisely on members of O. For any notation p, the set {q : q <_O p} of notations below p is computably enumerable. However, Kleene's O, when taken as a whole, is Π¹₁ (see analytical hierarchy) and not arithmetical, because of the following: O is Π¹₁-complete (i.e. O is Π¹₁ and every Π¹₁ set is Turing reducible to it) and every Σ¹₁ subset of O is effectively bounded in O (a result of Spector). In fact, any Π¹₁ set is many-one reducible to O. O is the universal system of ordinal notations in the sense that any specific set of ordinal notations can be mapped into it in a straightforward way. More precisely, there is a computable function f such that if e is an index for a computable well-ordering, then f(e) is a member of O and the well-ordering is order-isomorphic to an initial segment of the set {p : p <_O f(e)}. There is a computable function +_O which, for members of O, mimics ordinal addition and has the property that |p +_O q| = |p| + |q|. (Jockusch) Properties of paths in <_O A path in <_O is a subset P of O which is totally ordered by <_O and is closed under predecessors, i.e. if p is a member of a path P and q <_O p, then q is also a member of P. A path P is maximal if there is no element of O which is above (in the sense of <_O) every member of P; otherwise P is non-maximal. A path is non-maximal if and only if it is computably enumerable (c.e.). It follows by the remarks above that every element p of O determines a non-maximal path {q : q <_O p}; and every non-maximal path is so determined. There are maximal paths through O; since they are maximal, they are non-c.e. In fact, maximal paths within O of particular lengths have been constructed (Crossley, Schütte), and Aczel extended this to maximal paths of corresponding length for every non-zero ordinal of the appropriate form, further showing that a path whose length fails his divisibility condition is not maximal. For each c.e. degree d, there is a member e of O such that the path {p : p <_O e} has many-one degree d; in fact, suitable notations exist for each sufficiently large computable ordinal. (Jockusch) There exist paths through O which are Π¹₁. Given a progression of computably enumerable theories based on iterating Uniform Reflection, each such path is incomplete with respect to the set of true sentences. (Feferman & Spector) There exist paths through O each initial segment of which is not merely c.e., but computable. (Jockusch) Various other paths in O have been shown to exist, each with specific kinds of reducibility properties. (See references below) See also Computable ordinal Large countable ordinal Ordinal notation References Ordinal numbers
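As a worked example of the notation system just described, the display below lists the unique notations of the first few finite ordinals and one notation for ω; here e is assumed to be an index of a computable function enumerating those finite notations, which is not specified in the article.

\[
|1| = 0,\qquad |2| = |2^{1}| = 1,\qquad |4| = |2^{2}| = 2,\qquad |16| = |2^{4}| = 3,\;\dots
\]
\[
\varphi_e(0)=1,\ \varphi_e(1)=2,\ \varphi_e(2)=4,\ \varphi_e(3)=16,\ \dots
\quad\Longrightarrow\quad |3\cdot 5^{e}| \;=\; \lim_k |\varphi_e(k)| \;=\; \omega .
\]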
Kleene's O
[ "Mathematics" ]
1,584
[ "Ordinal numbers", "Order theory", "Mathematical objects", "Numbers" ]
16,106,907
https://en.wikipedia.org/wiki/Vaccine%20therapy
Vaccine therapy is a type of treatment that uses a substance or group of substances to stimulate the immune system to destroy a tumor or infectious microorganisms such as bacteria or viruses. Cancer vaccines Cancer is a group of potentially fatal diseases that involve abnormal cell growth which can invade or spread to other parts of the body. They are usually caused by the accumulation of mutations in genes that regulate cell growth and differentiation. The majority of cancers, about 90–95%, are due to genetic mutations from environmental and lifestyle factors, including age, chemicals, diet, exercise, viruses, and radiation. The remaining 5–10% are due to inherited genetics. Some cancers may be difficult to treat by conventional means such as surgery, radiation, and chemotherapy, but may be controlled by stimulating the body's immune response with the help of cancer vaccines. Preventive or prophylactic vaccines Treatment or therapeutic vaccines These vaccines are intended to treat existing cancer by stimulating the patient's immune system. Cancer vaccines can also be divided into specific and universal cancer vaccines based on the types of cancer they are used for: specific cancer vaccines are only used to treat a particular type of cancer, while universal vaccines can be used to treat different types of cancer. Protein/peptide-based vaccines Tumor-associated antigens Vaccines of this kind use specific tumor antigens, which are usually proteins or peptides, to stimulate the immune system against either tumor-specific antigens (TSAs) or tumor-associated antigens (TAAs). Such a vaccine helps stimulate the patient's immune system to increase production of antibodies or killer T cells. Dendritic cell vaccines Dendritic cells (DCs) are considered the most potent APCs (antigen-presenting cells) of the immune system. DCs have a unique ability to stimulate naïve T cells and can be used to induce an antigen-specific immune response. Several DC-based cancer vaccines have been developed, including DCs loaded with tumor peptides or whole proteins, or with tumor-derived mRNA or DNA; DCs transduced with viral vectors such as retroviruses, lentiviruses, adenoviruses, fowlpox and alphaviruses containing the tumor antigen or gene of interest; whole necrotic or apoptotic tumor cells; tumor cell lysates; and DCs fused with tumor cells. Whole tumor cell vaccines An advantage of using tumor cell vaccines is that this type of vaccine is polyepitopic, meaning it can present an entire spectrum of TAAs to a patient's immune system. Autologous tumor cell vaccines These vaccines are made from antigens taken from the patient's own cancer cells. Autologous vaccines have been used to treat lung cancer, colorectal cancer, melanoma, renal cell cancer, and prostate cancer. Allogeneic tumor cell vaccines These vaccines are made from antigens taken from individuals other than the patient, usually from cancer cell lines. References External links Vaccine therapy entry in the public domain NCI Dictionary of Cancer Terms Vaccination
Vaccine therapy
[ "Biology" ]
607
[ "Vaccination" ]
16,107,008
https://en.wikipedia.org/wiki/Proliferative%20index
Proliferation, as one of the hallmarks and most fundamental biological processes in tumors, is associated with tumor progression, response to therapy, and cancer patient survival. Consequently, the evaluation of a tumor proliferative index (or growth fraction) has clinical significance in characterizing many solid tumors and hematologic malignancies. This has led investigators to develop different technologies to evaluate the proliferation index in tumor samples. The most commonly used methods for evaluating a proliferative index include mitotic indexing, the thymidine-labeling index, the bromodeoxyuridine assay, the determination of the fraction of cells in the various phases of the cell cycle, and the immunohistochemical evaluation of cell cycle-associated proteins. Mitotic index (also called mitotic count) Mitotic indexing is the oldest method of assessing proliferation and is determined by counting the number of mitotic figures (cells undergoing mitosis) through a light microscope on H&E-stained sections. It is usually expressed as the number of cells per microscopic field. Cells in the mitotic phase are identified by the typical appearance of their chromosomes during the mitotic phase of the cell cycle. Usually the number of mitotic figures is expressed as the total number in a defined number of high-power fields, such as 10 mitoses in 10 high-power fields. Since the field-of-vision area can vary considerably between different microscopes, the exact area of the high-power fields should be defined in order to compare results from different studies. Accordingly, one of the main problems of counting mitoses has been reproducibility; standardized methodology and strict protocols are therefore important for achieving reproducible results. Automated image analysis using deep learning-based algorithms has been proposed as a promising tool to assist pathologists and thereby improve reproducibility and accuracy. Thymidine-labeling index The thymidine-labeling index is determined by counting the number of labeled tumor nuclei on autoradiographed sections after incubating the tumor cells with radiolabeled thymidine. Rapidly proliferating cells take up more of the radiolabeled thymidine, which produces darker spots on the autoradiograph film. Bromodeoxyuridine assay A bromodeoxyuridine assay, similar to the thymidine-labeling index, incubates cells with radiolabeled bromodeoxyuridine, which is taken up at a greater rate by proliferating cells, and then uses a film to image the distribution of radiolabeled bromodeoxyuridine in the cells. S-phase fraction evaluation Evaluating DNA histograms through flow cytometry provides an estimate of the fractions of cells within each of the phases of the cell cycle. Cell nuclei are stained with a DNA-binding stain and the amount of staining is measured; the fractions of cells within the different cell cycle phases (the G0/G1, S and G2/M compartments) can then be calculated from the resulting histogram by computerized cell cycle analysis. Immunohistochemical evaluation The immunohistochemical detection of proliferation-related proteins such as Ki-67 and proliferating cell nuclear antigen is a commonly used method of determining the proliferation index. Ki-67 is a nuclear antigen expressed in proliferating cells that is coded by the MKI67 gene on chromosome 10, and is expressed during the G1, S, G2, and M phases of the cell cycle. Cells are stained with a Ki-67 antibody, and the number of stained nuclei is expressed as a percentage of total tumor cells.
It is recommended to count at least 500 tumor cells in the most highly labeled area. The Ki-67 score correlates closely with other proliferation markers, and has been shown to have prognostic and predictive value for many different tumor types. Similarly, proliferating cell nuclear antigen (PCNA) is a protein associated with cell proliferation that is upregulated in proliferating cells, making it another useful antigen for immunostaining. It is associated with DNA polymerase alpha, which is expressed throughout the phases of the cell cycle. The expression of PCNA also correlates well with other proliferation markers such as mitotic count, S-phase fraction and Ki-67 staining. Diagnostic role of proliferation index The various methods of characterizing the proliferation index have found roles in both the diagnostic and prognostic evaluation of tumors. For instance, the number of mitotic cells is used to classify tumors, and in general a high proliferation index suggests malignancy and a high-grade tumor. Among solid tumors, the clinical significance of the proliferation index has been studied most extensively in breast cancer. Mitotic counting has been shown in multiple studies to have prognostic value in breast cancer, where a lower count of mitotic cells correlates with a more favorable outcome, and it has thus been incorporated into the histological grading system. The Ki-67 labelling index has also been found to have prognostic significance, and many clinical practice guidelines recommend the evaluation of Ki-67 in newly diagnosed invasive breast carcinomas. Additionally, the tumor proliferation index has been used to predict the response to systemic chemotherapy in patients receiving neoadjuvant systemic therapy: patients whose tumors have a high proliferative index respond better to systemic cytotoxic therapies than those whose tumors have a low proliferative index. References External links Proliferative index entry in the public domain NCI Dictionary of Cancer Terms Growth factors Oncology
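The Ki-67 labeling index described above is simply the stained fraction expressed as a percentage. Below is a minimal sketch of that arithmetic, with the 500-cell minimum used as a sanity check; the counts are made up for illustration.

def ki67_index(stained_nuclei, total_tumor_cells):
    """Ki-67 labeling index: stained nuclei as a percentage of tumor cells."""
    if total_tumor_cells < 500:
        raise ValueError("count at least 500 tumor cells in the hotspot")
    return 100.0 * stained_nuclei / total_tumor_cells

print(ki67_index(132, 600))  # -> 22.0 (percent)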
Proliferative index
[ "Chemistry" ]
1,148
[ "Growth factors", "Signal transduction" ]
16,107,068
https://en.wikipedia.org/wiki/Meridian%20arc
In geodesy and navigation, a meridian arc is the curve between two points on the Earth's surface having the same longitude. The term may refer either to a segment of the meridian, or to its length. The purpose of measuring meridian arcs is to determine a figure of the Earth. One or more measurements of meridian arcs can be used to infer the shape of the reference ellipsoid that best approximates the geoid in the region of the measurements. Measurements of meridian arcs at several latitudes along many meridians around the world can be combined in order to approximate a geocentric ellipsoid intended to fit the entire world. The earliest determinations of the size of a spherical Earth required a single arc. Accurate survey work beginning in the 19th century required several arc measurements in the region where the survey was to be conducted, leading to a proliferation of reference ellipsoids around the world. The latest determinations use astro-geodetic measurements and the methods of satellite geodesy to determine reference ellipsoids, especially the geocentric ellipsoids now used for global coordinate systems such as WGS 84 (see numerical expressions). History of measurement Early estimations of Earth's size are recorded from Greece in the 4th century BC, and from scholars at the caliph's House of Wisdom in Baghdad in the 9th century. The first realistic value was calculated by the Alexandrian scientist Eratosthenes about 240 BC. He estimated that the meridian has a length of 252,000 stadia, with an error on the real value between −2.4% and +0.8% (assuming a value for the stadion between 155 and 160 metres). Eratosthenes described his technique in a book entitled On the measure of the Earth, which has not been preserved. A similar method was used by Posidonius about 150 years later, and slightly better results were calculated in 827 by the arc measurement method attributed to the Caliph Al-Ma'mun. Ellipsoidal Earth Early literature uses the term oblate spheroid to describe a sphere "squashed at the poles". Modern literature uses the term ellipsoid of revolution in place of spheroid, although the qualifying words "of revolution" are usually dropped. An ellipsoid that is not an ellipsoid of revolution is called a triaxial ellipsoid. Spheroid and ellipsoid are used interchangeably in this article, with oblate implied if not stated. 17th and 18th centuries Although it had been known since classical antiquity that the Earth was spherical, by the 17th century evidence was accumulating that it was not a perfect sphere. In 1672, Jean Richer found the first evidence that gravity was not constant over the Earth (as it would be if the Earth were a sphere); he took a pendulum clock to Cayenne, French Guiana, and found that it lost about two and a half minutes per day compared to its rate at Paris. This indicated that the acceleration of gravity was less at Cayenne than at Paris. Pendulum gravimeters began to be taken on voyages to remote parts of the world, and it was slowly discovered that gravity increases smoothly with increasing latitude, gravitational acceleration being about 0.5% greater at the geographical poles than at the Equator. In 1687, Isaac Newton had published in the Principia a proof that the Earth was an oblate spheroid of flattening equal to 1/230. This was disputed by some, but not all, French scientists. A meridian arc of Jean Picard was extended to a longer arc by Giovanni Domenico Cassini and his son Jacques Cassini over the period 1684–1718.
The arc was measured with at least three latitude determinations, so they were able to deduce mean curvatures for the northern and southern halves of the arc, allowing a determination of the overall shape. The results indicated that the Earth was a prolate spheroid (with an equatorial radius less than the polar radius). To resolve the issue, the French Academy of Sciences (1735) undertook expeditions to Peru (Bouguer, Louis Godin, de La Condamine, Antonio de Ulloa, Jorge Juan) and to Lapland (Maupertuis, Clairaut, Camus, Le Monnier, Abbe Outhier, Anders Celsius). The resulting measurements at equatorial and polar latitudes confirmed that the Earth was best modelled by an oblate spheroid, supporting Newton. However, by 1743, Clairaut's theorem had completely supplanted Newton's approach. By the end of the century, Jean Baptiste Joseph Delambre had remeasured and extended the French arc from Dunkirk to the Mediterranean Sea (the meridian arc of Delambre and Méchain). It was divided into five parts by four intermediate determinations of latitude. By combining the measurements together with those for the arc of Peru, ellipsoid shape parameters were determined and the distance between the Equator and pole along the Paris Meridian was calculated as 5,130,740 toises, as specified by the standard toise bar in Paris. Defining this distance as exactly 10,000,000 m led to the construction of a new standard metre bar as 0.5130740 toises. 19th century In the 19th century, many astronomers and geodesists were engaged in detailed studies of the Earth's curvature along different meridian arcs. The analyses resulted in a great many model ellipsoids such as Plessis 1817, Airy 1830, Bessel 1841, Everest 1830, and Clarke 1866. A comprehensive list of ellipsoids is given under Earth ellipsoid. The nautical mile Historically a nautical mile was defined as the length of one minute of arc along a meridian of a spherical earth. An ellipsoid model leads to a variation of the nautical mile with latitude. This was resolved by defining the nautical mile to be exactly 1,852 metres. However, for all practical purposes, distances are measured from the latitude scale of charts. As the Royal Yachting Association says in its manual for day skippers: "1 (minute) of Latitude = 1 sea mile", followed by "For most practical purposes distance is measured from the latitude scale, assuming that one minute of latitude equals one nautical mile". Calculation On a sphere, the meridian arc length is simply the circular arc length. On an ellipsoid of revolution, for short meridian arcs, their length can be approximated using the Earth's meridional radius of curvature and the circular arc formulation. For longer arcs, the length follows from the subtraction of two meridian distances, the distance from the equator to a point at a latitude φ. This is an important problem in the theory of map projections, particularly the transverse Mercator projection. The main ellipsoidal parameters are the equatorial radius a, the polar radius b, and the flattening f, but in theoretical work it is useful to define extra parameters, particularly the eccentricity e and the third flattening n. Only two of these parameters are independent and there are many relations between them: \(f = \frac{a-b}{a}\), \(e^2 = f(2-f)\), \(n = \frac{a-b}{a+b} = \frac{f}{2-f}\), \(b = a(1-f) = a\sqrt{1-e^2}\). Definition The meridian radius of curvature can be shown to be equal to \(M(\varphi) = \frac{a(1-e^2)}{(1-e^2\sin^2\varphi)^{3/2}}\). The arc length of an infinitesimal element of the meridian is \(dm = M(\varphi)\,d\varphi\) (with φ in radians). Therefore, the meridian distance from the equator to latitude φ is \(m(\varphi) = \int_0^\varphi M(\varphi')\,d\varphi' = a(1-e^2)\int_0^\varphi (1-e^2\sin^2\varphi')^{-3/2}\,d\varphi'\). The distance formula is simpler when written in terms of the parametric latitude β, where \(\tan\beta = (1-f)\tan\varphi\) and \(m = b\int_0^\beta \sqrt{1+e'^2\sin^2\beta'}\;d\beta'\), with \(e'^2 = (a^2-b^2)/b^2\) the second eccentricity squared.
Even though latitude is normally confined to the range \(-\tfrac{\pi}{2} \le \varphi \le \tfrac{\pi}{2}\), all the formulae given here apply to measuring distance around the complete meridian ellipse (including the anti-meridian). Thus the ranges of φ, β, and the rectifying latitude μ are unrestricted. Relation to elliptic integrals The above integral is related to a special case of an incomplete elliptic integral of the third kind. In the notation of the online NIST handbook (Section 19.2(ii)), \(m(\varphi) = a(1-e^2)\,\Pi(\varphi, e^2, e)\). It may also be written in terms of incomplete elliptic integrals of the second kind (see the NIST handbook Section 19.6(iv)): \(m(\varphi) = a\left(E(\varphi,e) - \frac{e^2\sin\varphi\cos\varphi}{\sqrt{1-e^2\sin^2\varphi}}\right)\). The calculation (to arbitrary precision) of the elliptic integrals and approximations are also discussed in the NIST handbook. These functions are also implemented in computer algebra programs such as Mathematica and Maxima. Series expansions The above integral may be expressed as an infinite truncated series by expanding the integrand in a Taylor series, performing the resulting integrals term by term, and expressing the result as a trigonometric series. In 1755, Leonhard Euler derived an expansion in the third eccentricity squared. Expansions in the eccentricity (e) Delambre in 1799 derived a widely used expansion in e²; Richard Rapp gives a detailed derivation of this result. Expansions in the third flattening (n) Series with considerably faster convergence can be obtained by expanding in terms of the third flattening n instead of the eccentricity. They are related by \(e^2 = \frac{4n}{(1+n)^2}\). In 1837, Friedrich Bessel obtained one such series, which was put into a simpler form by Helmert: \(m(\varphi) = \frac{a}{1+n}\left[\left(1+\tfrac{n^2}{4}+\tfrac{n^4}{64}\right)\varphi - \tfrac{3}{2}\left(n-\tfrac{n^3}{8}\right)\sin 2\varphi + \tfrac{15}{16}\left(n^2-\tfrac{n^4}{4}\right)\sin 4\varphi - \tfrac{35}{48}n^3\sin 6\varphi + \tfrac{315}{512}n^4\sin 8\varphi\right]\). Because n changes sign when a and b are interchanged, and because the initial factor \(a/(1+n) = \tfrac{1}{2}(a+b)\) is constant under this interchange, half the terms in the expansions of the coefficients vanish. The series can be expressed with either a or b as the initial factor by writing, for example, \(1/(1+n) = 1 - n + n^2 - n^3 + \cdots\), and expanding the result as a series in n. Even though this results in more slowly converging series, such series are used in the specification for the transverse Mercator projection by the National Geospatial-Intelligence Agency and the Ordnance Survey of Great Britain. Series in terms of the parametric latitude In 1825, Bessel derived an expansion of the meridian distance in terms of the parametric latitude β in connection with his work on geodesics. Because this series provides an expansion for the elliptic integral of the second kind, it can also be used to write the arc length in terms of the geodetic latitude. Generalized series The above series, to eighth order in eccentricity or fourth order in third flattening, provide millimetre accuracy. With the aid of symbolic algebra systems, they can easily be extended to sixth order in the third flattening, which provides full double-precision accuracy for terrestrial applications. Delambre and Bessel both wrote their series in a form that allows them to be generalized to arbitrary order. The coefficients in Bessel's series can be expressed particularly simply in terms of double factorials, where the double factorial is extended to negative values via the recursion relation \((-1)!! = 1\) and \((-3)!! = -1\). The coefficients in Helmert's series can similarly be expressed in a general closed form; this result was conjectured by Friedrich Helmert and proved by Kazushige Kawase. The extra factor in that formula originates from an additional series expansion and results in poorer convergence of the series in terms of φ compared to the one in β. Numerical expressions The trigonometric series given above can be conveniently evaluated using Clenshaw summation.
This method avoids the calculation of most of the trigonometric functions and allows the series to be summed rapidly and accurately. The technique can also be used to evaluate the difference \(m(\varphi_1) - m(\varphi_2)\) while maintaining high relative accuracy. Substituting the values for the semi-major axis and eccentricity of the WGS84 ellipsoid gives the meridian distance as a trigonometric series in φ, where φ is expressed in degrees. On the ellipsoid the exact distance between parallels at \(\varphi_1\) and \(\varphi_2\) is \(m(\varphi_1) - m(\varphi_2)\). For WGS84 an approximate expression for the distance Δm between the two parallels at ±0.5° from the circle at latitude φ is given by \(\Delta m \approx (111{,}132.92 - 559.82\cos 2\varphi + 1.175\cos 4\varphi)\ \text{m}\). Quarter meridian The distance from the equator to the pole, the quarter meridian (analogous to the quarter-circle), also known as the Earth quadrant, is \(m_p = m(\tfrac{\pi}{2})\). It was part of the historical definition of the metre and of the nautical mile, and used in the definition of the hebdomometre. The quarter meridian can be expressed in terms of the complete elliptic integral of the second kind as \(m_p = aE(e)\), where e is the (first) eccentricity; an equivalent form exists in terms of the second eccentricity e′. The quarter meridian is also given by the following generalized series: \(m_p = \frac{\pi(a+b)}{4}\,c_0\). (For the formula of c0, see section #Generalized series above.) This result was first obtained by James Ivory. The numerical expression for the quarter meridian on the WGS84 ellipsoid is \(m_p = 10{,}001{,}965.729\ \text{m}\). The polar Earth's circumference is simply four times the quarter meridian: \(C_p = 4m_p \approx 40{,}007{,}862.917\ \text{m}\). The perimeter of a meridian ellipse can also be rewritten in the form of a rectifying circle perimeter, \(C_p = 2\pi M_r\). Therefore, the rectifying Earth radius is \(M_r = 2m_p/\pi\). It can be evaluated as \(6{,}367{,}449.146\ \text{m}\). The inverse meridian problem for the ellipsoid In some problems, we need to be able to solve the inverse problem: given m, determine φ. This may be solved by Newton's method, iterating \(\varphi_{i+1} = \varphi_i - \frac{m(\varphi_i) - m}{M(\varphi_i)}\) until convergence. A suitable starting guess is given by \(\varphi_0 = \mu\), where \(\mu = \tfrac{\pi}{2}\,\tfrac{m}{m_p}\) is the rectifying latitude. Note that there is no need to differentiate the series for m(φ), since the formula for the meridian radius of curvature M(φ) can be used instead. Alternatively, Helmert's series for the meridian distance can be reverted to give φ as a trigonometric series in μ. Similarly, Bessel's series for m in terms of β can be reverted to give β as a series in μ. Adrien-Marie Legendre showed that the distance along a geodesic on a spheroid is the same as the distance along the perimeter of an ellipse. For this reason, the expression for m in terms of β and its inverse given above play a key role in the solution of the geodesic problem, with m replaced by s, the distance along the geodesic, and β replaced by σ, the arc length on the auxiliary sphere. The requisite series extended to sixth order are given by Charles Karney, Eqs. (17) & (21), with the corresponding geodesic quantities playing the roles of n and μ. See also References External links Online computation of meridian arcs on different geodetic reference ellipsoids Geodesy Meridians (geography) History of measurement
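For readers who want to check the quarter-meridian figure above, here is a minimal sketch that evaluates the meridian-distance integral numerically for WGS84. SciPy's general-purpose quadrature is used in place of the series and Clenshaw summation described in the text, so this is a cross-check rather than the production method.

from math import radians, sin
from scipy.integrate import quad

a = 6378137.0            # WGS84 semi-major axis (m)
f = 1 / 298.257223563    # WGS84 flattening
e2 = f * (2 - f)         # eccentricity squared

def meridian_distance(lat_deg):
    """Meridian arc length from the equator, by direct numerical
    integration of m = a(1-e^2) * integral of (1 - e^2 sin^2 t)^(-3/2) dt."""
    phi = radians(lat_deg)
    integrand = lambda t: (1 - e2 * sin(t) ** 2) ** -1.5
    val, _ = quad(integrand, 0.0, phi)
    return a * (1 - e2) * val

print(meridian_distance(90.0))   # quarter meridian, ~10 001 965.729 m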
Meridian arc
[ "Mathematics" ]
2,786
[ "Applied mathematics", "Geodesy" ]
16,108,069
https://en.wikipedia.org/wiki/Phenyltriazines
Phenyltriazines are a class of molecules containing a phenyl group and a triazine group. These molecules are pharmacologically important. As an example, lamotrigine is a phenyltriazine derivative used as an anticonvulsant drug and has been shown to be useful for alleviating epilepsy and bipolar disorder. References Triazines
Phenyltriazines
[ "Chemistry" ]
85
[ "Pharmacology", "Medicinal chemistry stubs", "Organic compounds", "Pharmacology stubs", "Organic compound stubs", "Organic chemistry stubs" ]
14,565,682
https://en.wikipedia.org/wiki/Umbravirus
Umbravirus is a genus of plant viruses assigned to the family Tombusviridae. The genus has 11 species. Transmission may be by aphids or mechanical inoculation. The genome is a linear, positive-sense, single-stranded RNA, 4200–6900 nucleotides in length. Taxonomy The genus contains the following species: Carrot mottle mimic virus Carrot mottle virus Ethiopian tobacco bushy top virus Groundnut rosette virus Ixeridium yellow mottle virus 2 Lettuce speckles mottle virus Opium poppy mosaic virus Patrinia mild mottle virus Pea enation mosaic virus 2 Tobacco bushy top virus Tobacco mottle virus References External links Viralzone: Umbravirus Umbraviruses Viral plant pathogens and diseases Virus genera
Umbravirus
[ "Biology" ]
158
[ "Virus stubs", "Viruses" ]
14,566,545
https://en.wikipedia.org/wiki/Fifth%20Estate
The Fifth Estate is a socio-cultural reference to groupings of outlier viewpoints in contemporary society, and is most associated with bloggers, journalists publishing in non-mainstream media outlets, and social media or the "social license". The "Fifth" Estate extends the sequence of the three classical Estates of the Realm (nobility, clergy, subjects) and the preceding Fourth Estate, essentially the mainstream press. The use of "fifth estate" dates to the 1960s counterculture, and in particular to the influential The Fifth Estate, an underground newspaper first published in Detroit in 1965. Web-based technologies have enhanced the scope and power of the Fifth Estate far beyond the modest and boutique conditions of its beginnings. Nimmo and Combs asserted in 1992 that political pundits constitute a Fifth Estate. Media researcher Stephen D. Cooper argued in 2006 that bloggers are the Fifth Estate. In 2009, William Dutton argued that the Fifth Estate is not just the blogging community, nor an extension of the media, but "networked individuals" enabled by the Internet, e.g. social media, in ways that can hold the other estates accountable. See also Fourth branch of government Fifth power Gossip References Social media Social influence Information society Influence of mass media Journalism
Fifth Estate
[ "Technology" ]
251
[ "Computing and society", "Information society", "Social media" ]
14,566,906
https://en.wikipedia.org/wiki/Ordinal%20definable%20set
In mathematical set theory, a set S is said to be ordinal definable if, informally, it can be defined in terms of a finite number of ordinals by a first-order formula. Ordinal definable sets were introduced by Gödel. Definition A drawback to the above informal definition is that it requires quantification over all first-order formulas, which cannot be formalized in the standard language of set theory. However, there is a different, formal characterization: a set S is ordinal definable if there are ordinals α1, ..., αn and a first-order formula φ taking α2, ..., αn as parameters that uniquely defines S as an element of V_α1, i.e., such that S is the unique object validating φ(S, α2, ..., αn), with its quantifiers ranging over V_α1. Here V_α1 denotes the set in the von Neumann hierarchy indexed by the ordinal α1. The class of all ordinal definable sets is denoted OD; it is not necessarily transitive, and need not be a model of ZFC because it might not satisfy the axiom of extensionality. A set is furthermore hereditarily ordinal definable if it is ordinal definable and all elements of its transitive closure are ordinal definable. The class of hereditarily ordinal definable sets is denoted by HOD, and is a transitive model of ZFC, with a definable well-ordering. It is consistent with the axioms of set theory that all sets are ordinal definable, and so hereditarily ordinal definable. The assertion that this situation holds is referred to as V = OD or V = HOD. It follows from V = L, and is equivalent to the existence of a (definable) well-ordering of the universe. Note however that the formula expressing V = HOD need not hold true within HOD, as it is not absolute for models of set theory: within HOD, the interpretation of the formula for HOD may yield an even smaller inner model. HOD has been found to be useful in that it is an inner model that can accommodate essentially all known large cardinals. This is in contrast with the situation for core models, as core models have not yet been constructed that can accommodate supercompact cardinals, for example. References Set theory
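The formal characterization in the paragraph above can be written compactly as follows; this is only a restatement in LaTeX of the definition just given, nothing beyond it.

\[
S \in \mathrm{OD} \iff \exists \alpha_1,\dots,\alpha_n \;\exists \varphi\;
\Bigl[\, S \in V_{\alpha_1} \;\wedge\; \forall x \in V_{\alpha_1}\,
\bigl( (V_{\alpha_1},\in) \models \varphi(x,\alpha_2,\dots,\alpha_n) \leftrightarrow x = S \bigr) \Bigr]
\]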
Ordinal definable set
[ "Mathematics" ]
522
[ "Mathematical logic", "Set theory" ]
14,566,980
https://en.wikipedia.org/wiki/Water%20sky
Water sky is a phenomenon that is closely related to ice blink. It forms in regions with large areas of ice and low-lying clouds, and so is limited mostly to the extreme northern and southern parts of the Earth, in Antarctica and in the Arctic. When light hits the blue oceans or seas, some of it bounces back and enables the observer to see the water directly. However, some of the light is also reflected up onto the bottoms of low-lying clouds, causing a dark spot to appear underneath some clouds. These clouds may be visible when the seas are not, and can show alert and knowledgeable travelers the general direction of water. The dark clouds over open water have long been used by polar explorers and scientists to navigate in sea ice. For example, Arctic explorer Fridtjof Nansen and his assistant Hjalmar Johansen used the phenomenon to find lanes of water in their failed expedition to the North Pole, as did Louis Bernacchi and Douglas Mawson in Antarctica. Sources Mawson, Douglas: The Home of the Blizzard. Reed, William: The Phantom of the Poles (1906). External links Water Sky and Ice Blink Water Sky Atmospheric optical phenomena
Water sky
[ "Physics" ]
238
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
14,567,243
https://en.wikipedia.org/wiki/Rhodopsin-like%20receptors
Rhodopsin-like receptors are a family of proteins that comprise the largest group of G protein-coupled receptors. Scope G-protein-coupled receptors, GPCRs, constitute a vast protein family that encompasses a wide range of functions (including various autocrine, paracrine, and endocrine processes). They show considerable diversity at the sequence level, on the basis of which they can be separated into distinct groups. GPCRs are usually described as "superfamily" because they embrace a group of families for which there are indications of evolutionary relationship, but between which there is no statistically significant similarity in sequence. The currently known superfamily members include the rhodopsin-like GPCRs (this family), the secretin-like GPCRs, the cAMP receptors, the fungal mating pheromone receptors, and the metabotropic glutamate receptor family. There is a specialised database for GPCRs. Function The rhodopsin-like GPCRs themselves represent a widespread protein family that includes hormone, neuropeptide, neurotransmitter, and light receptors, all of which transduce extracellular signals through interaction with guanine nucleotide-binding (G) proteins. Although their activating ligands vary widely in structure and character, the amino acid sequences of the receptors are very similar and are believed to adopt a common structural framework comprising 7 transmembrane (TM) helices. Classes Rhodopsin-like GPCRs have been classified into the following 19 subgroups (A1-A19) based on a phylogenetic analysis. Subfamily A1 Chemokine receptor Chemokine (C-C motif) receptor 1 (, CKR1) Chemokine (C-C motif) receptor 2 (, CKR2) Chemokine (C-C motif) receptor 3 (, CKR3) Chemokine (C-C motif) receptor 4 (, CKR4) Chemokine (C-C motif) receptor 5 (, CKR5) Chemokine (C-C motif) receptor 8 (, CKR8) Chemokine (C-C motif) receptor-like 2 (, CKRX) chemokine (C motif) receptor 1 (, CXC1) chemokine (C-X3-C motif) receptor 1 (, C3X1) GPR137B (, TM7SF1) Subfamily A2 Chemokine receptor Chemokine (C-C motif) receptor-like 1 (, CCR11) Chemokine (C-C motif) receptor 6 (, CKR6) Chemokine (C-C motif) receptor 7 (, CKR7) Chemokine (C-C motif) receptor 9 (, CKR9) Chemokine (C-C motif) receptor 10 (, CKRA) CXC chemokine receptors Chemokine (C-X-C motif) receptor 3 () Chemokine (C-X-C motif) receptor 4 (, Fusin) Chemokine (C-X-C motif) receptor 5 () Chemokine (C-X-C motif) receptor 6 (, BONZO) Chemokine (C-X-C motif) receptor 7 (, RDC1) Interleukin-8 (IL8R) IL8R-α (, CXCR1) IL8R-β (, CXCR2) Adrenomedullin receptor () Duffy blood group, chemokine receptor (, DUFF) G Protein-coupled Receptor 30 (, CML2, GPCR estrogen receptor) Subfamily A3 Angiotensin II receptor Angiotensin II receptor, type 1 (, AG2S) Angiotensin II receptor, type 2 (, AG22) Apelin receptor (, APJ) Bradykinin receptor Bradykinin receptor B1 (, BRB1) Bradykinin receptor B2 (, BRB2) GPR15 (, GPRF) GPR25 () Subfamily A4 Opioid receptor delta Opioid receptor (, OPRD) kappa Opioid receptor (, OPRK) mu Opioid receptor (, OPRM) Nociceptin receptor (, OPRX) Somatostatin receptor Somatostatin receptor 1 (, SSR1) Somatostatin receptor 2 (, SSR2) Somatostatin receptor 3 (, SSR3) Somatostatin receptor 4 (, SSR4) Somatostatin receptor 5 (, SSR5) GPCR neuropeptide receptor Neuropeptides B/W receptor 1 (, GPR7) Neuropeptides B/W receptor 2 (, GPR8) GPR1 orphan receptor () Subfamily A5 Galanin receptor Galanin receptor 1 (, GALR) Galanin receptor 2 (, GALS) Galanin receptor 3 (, GALT) Cysteinyl leukotriene receptor Cysteinyl leukotriene receptor 1 () Cysteinyl 
leukotriene receptor 2 () Leukotriene B4 receptor Leukotriene B4 receptor (, P2Y7) Leukotriene B4 receptor 2 () Relaxin receptor Relaxin/insulin-like family peptide receptor 1 (, LGR7) Relaxin/insulin-like family peptide receptor 2 (, GPR106) Relaxin/insulin-like family peptide receptor 3 (, SALPR) Relaxin/insulin-like family peptide receptor 4 (, GPR100/GPR142) KiSS1-derived peptide receptor (GPR54) () Melanin-concentrating hormone receptor 1 (, GPRO) Urotensin-II receptor (, UR2R) Subfamily A6 Cholecystokinin receptor Cholecystokinin A receptor (, CCKR) Cholecystokinin B receptor (, GASR) Neuropeptide FF receptor Neuropeptide FF receptor 1 (, FF1R) Neuropeptide FF receptor 2 (, FF2R) Orexin receptor Hypocretin (orexin) receptor 1 (, OX1R) Hypocretin (orexin) receptor 2 (, OX2R) Vasopressin receptor Arginine vasopressin receptor 1A (, V1AR) Arginine vasopressin receptor 1B (, V1BR) Arginine vasopressin receptor 2 (, V2R) Oxytocin receptor () Gonadotropin releasing hormone receptor (, GRHR) Pyroglutamylated RFamide peptide receptor (, GPR103) GPR22 (, GPRM) GPR176 (, GPR) Subfamily A7 Bombesin receptor Bombesin-like receptor 3 () Neuromedin B receptor () Gastrin-releasing peptide receptor () Endothelin receptor Endothelin receptor type A (, ET1R) Endothelin receptor type B (, ETBR) GPR37 (, ETBR-LP2) Neuromedin U receptor Neuromedin U receptor 1 () Neuromedin U receptor 2 () Neurotensin receptor Neurotensin receptor 1 (, NTR1) Neurotensin receptor 2 (, NTR2) Thyrotropin-releasing hormone receptor (, TRFR) Growth hormone secretagogue receptor () GPR39 () Motilin receptor (, GPR38) Subfamily A8 Anaphylatoxin receptors C3a receptor (, C3AR) C5a receptor (, C5AR) Chemokine-like receptor 1 (, CML1) Formyl peptide receptor Formyl peptide receptor 1 (, FMLR) Formyl peptide receptor-like 1 (, FML2) Formyl peptide receptor-like 2 (, FML1) MAS1 oncogene MAS1 (, MAS) MAS1L (, MRG) GPR1 () GPR32 (, GPRW) GPR44 () GPR77 (, C5L2) Subfamily A9 Melatonin receptor Melatonin receptor 1A (, ML1A) Melatonin receptor 1B (, ML1B) Neurokinin receptor Tachykinin receptor 1 (, NK1R) Tachykinin receptor 2 (, NK2R) Tachykinin receptor 3 (, NK3R) Neuropeptide Y receptor Neuropeptide Y receptor Y1 (, NY1R) Neuropeptide Y receptor Y2 (, NY2R) Pancreatic polypeptide receptor 1 (, NY4R) Neuropeptide Y receptor Y5 (, NY5R) Prolactin-releasing peptide receptor (PRLHR, GPRA) Prokineticin receptor 1 (, GPR73) Prokineticin receptor 2 (, PKR2) GPR19 (, GPRJ) GPR50 (, ML1X) GPR75 () GPR83 (, GPR72) Subfamily A10 Glycoprotein hormone receptor FSH-receptor () Luteinizing hormone/choriogonadotropin receptor (, LSHR) Thyrotropin receptor () Leucine-rich repeat-containing G protein-coupled receptor 4 (, GPR48) Leucine-rich repeat-containing G protein-coupled receptor 5 (, GPR49) Leucine-rich repeat-containing G protein-coupled receptor 6 () Subfamily A11 GPR40-related receptor Free fatty acid receptor 1 (, GPR40) Free fatty acid receptor 2 (, GPR43) Free fatty acid receptor 3 (, GPR41) GPR42 (, FFAR1L) P2 purinoceptor Purinergic receptor P2Y1 () Purinergic receptor P2Y2 () Purinergic receptor P2Y4 () Purinergic receptor P2Y6 () Purinergic receptor P2Y8 () Purinergic receptor P2Y11 () Hydroxycarboxylic acid receptor 1 (, GPR81) Hydroxycarboxylic acid receptor 2, Niacin receptor 1 (, GPR109A) Hydroxycarboxylic acid receptor 3, Niacin receptor 2 (, GPR109B, HM74) GPR31 (, GPRV) GPR82 () Oxoglutarate (alpha-ketoglutarate) receptor 1 (, GPR80) Succinate receptor 1 (, GPR91) Subfamily A12 P2 purinoceptor Purinergic receptor P2Y12 () 
Purinergic receptor P2Y13 (, GPR86) Purinergic receptor P2Y14 (, UDP-glucose receptor, KI01) GPR34 () GPR87 () GPR171 (, H963) Platelet-activating factor receptor (, PAFR) Subfamily A13 Cannabinoid receptor Cannabinoid receptor 1 (brain) (, CB1R) Cannabinoid receptor 2 (macrophage) (, CB2R) Lysophosphatidic acid receptor Lysophosphatidic acid receptor 1 () Lysophosphatidic acid receptor 2 () Lysophosphatidic acid receptor 3 () Sphingosine 1-phosphate receptor Sphingosine 1-phosphate receptor 1 () Sphingosine 1-phosphate receptor 2 () Sphingosine 1-phosphate receptor 3 () Sphingosine 1-phosphate receptor 4 () Sphingosine 1-phosphate receptor 5 () Melanocortin/ACTH receptor Melanocortin 1 receptor (, MSHR) Melanocortin 3 receptor () Melanocortin 4 receptor () Melanocortin 5 receptor () ACTH receptor (), ACTR) GPR3 () GPR6 () GPR12 (, GPRC) Subfamily A14 Eicosanoid receptor Prostaglandin D2 receptor (, PD2R) Prostaglandin E1 receptor (, PE21) Prostaglandin E2 receptor (, PE22) Prostaglandin E3 receptor (, PE23) Prostaglandin E4 receptor (, PE24) Prostaglandin F receptor (, PF2R) Prostaglandin I2 (prostacyclin) receptor (, PI2R) Thromboxane A2 receptor (, TA2R) Subfamily A15 Lysophosphatidic acid receptor Lysophosphatidic acid receptor 4 () Lysophosphatidic acid receptor 5 () Lysophosphatidic acid receptor 6 () P2 purinoceptor Purinergic receptor P2Y10 (, P2Y10) Protease-activated receptor Coagulation factor II (thrombin) receptor-like 1 (, PAR2) Coagulation factor II (thrombin) receptor-like 2 (, PAR3) Coagulation factor II (thrombin) receptor-like 3 (, PAR4) Epstein-Barr virus induced gene 2 (lymphocyte-specific G protein-coupled receptor) () Proton-sensing G protein-coupled receptors GPR4 () GPR65 () GPR68 () GPR132 (, G2A) GPR17 (, GPRH) GPR18 (, GPRI) GPR20 (, GPRK) GPR35 () GPR55 () Coagulation factor II receptor (, THRR) Subfamily A16 Opsins Rhodopsin (, OPSD) Opsin 1 (cone pigments), short-wave-sensitive (color blindness, tritan) (, OPSB) (blue-sensitive opsin) Opsin 1 (cone pigments), medium-wave-sensitive (color blindness, deutan) (, OPSG) (green-sensitive opsin) Opsin 1 (cone pigments), long-wave-sensitive (color blindness, protan) (, OPSR) (red-sensitive opsin) Opsin 3, Panopsin () Opsin 4, Melanopsin () Opsin 5 (, GPR136) Retinal G protein coupled receptor () Retinal pigment epithelium-derived rhodopsin homolog (, OPSX) (visual pigment-like receptor opsin) Subfamily A17 5-Hydroxytryptamine (5-HT) receptor 5-HT2A (, 5H2A) 5-HT2B (, 5H2B) 5-HT2C (, 5H2C) 5-HT6 (, 5H6) Adrenergic receptor Alpha1A (, A1AA) Alpha1B (, A1AB) Alpha1D (, A1AD) Alpha2A (, A2AA) Alpha2B (, A2AB) Alpha2C (, A2AC) Beta1 (, B1AR) Beta2 (, B2AR) Beta3 (, B3AR) Dopamine receptor D1 (, DADR) D2 (, D2DR) D3 (, D3DR) D4 (, D4DR) D5 (, DBDR) Trace amine receptor TAAR1 (, TAR1) TAAR2 (, GPR58) TAAR3 (, GPR57) TAAR5 (, PNR) TAAR6 (, TAR4) TAAR8 (, GPR102) TAAR9 (, TAR3) Histamine H2 receptor (, HH2R) Subfamily A18 Histamine H1 receptor (, HH1R) Histamine H3 receptor () Histamine H4 receptor () Adenosine receptor A1 (, AA1R) A2a (, AA2A) A2b (, AA2B) A3 (, AA3R) Muscarinic acetylcholine receptor M1 (, ACM1) M2 (, ACM2) M3 (, ACM3) M4 (, ACM4) M5 (, ACM5) GPR21 (, GPRL) GPR27 () GPR45 (, PSP24) GPR52 () GPR61 () GPR62 () GPR63 () GPR78 () GPR84 () GPR85 () GPR88 () GPR101 () GPR161 (, RE2) GPR173 (, SREB3) Subfamily A19 5-Hydroxytryptamine (5-HT) receptor 5-HT1A (, 5H1A) 5-HT1B (, 5H1B) 5-HT1D (, 5H1D) 5-HT1E (, 5H1E) 5-HT1F (, 5H1F) 5-HT4 () 5-HT5A (, 5H5A) 5-HT7 (, 5H7) Unclassified Olfactory receptor Nematode 
chemoreceptor (multiple, including ) Taste receptor type 2 Vomeronasal receptor type 1 VN1R1 VN1R2 VN1R3 VN1R4 VN1R5 References External links This database includes multiple sequence alignments of all GPCR families and sub-families. G protein-coupled receptors Protein domains Protein families
Rhodopsin-like receptors
[ "Chemistry", "Biology" ]
3,818
[ "Protein classification", "Signal transduction", "G protein-coupled receptors", "Protein domains", "Protein families" ]
14,567,369
https://en.wikipedia.org/wiki/Born-digital
The term born-digital refers to materials that originate in a digital form. This is in contrast to digital reformatting, through which analog materials become digital, as in the case of files created by scanning physical paper records. It is most often used in relation to digital libraries and the issues that go along with such collections, such as digital preservation and intellectual property. However, as technologies have advanced and spread, the concept of being born-digital has also been discussed in relation to personal, consumer-based sectors, with the rise of e-books and evolving digital music. Other terms that might be encountered as synonymous include "natively digital", "digital-first", and "digital-exclusive". Discrepancies in definition There exists some inconsistency in defining born-digital materials. Some believe such materials must exist in digital form exclusively; in other words, if they can be transferred into a physical, analog form, they are not truly born-digital. However, others maintain that while these materials will often not have a subsequent physical counterpart, having one does not bar them from being classified as born-digital. For instance, Mahesh and Mittal identify two types of born-digital content, "exclusive digital" and "digital for print", allowing for a broader base of classification than the former definition provides. Furthermore, it has been pointed out that certain works may incorporate components that are both born-digital and digitized, further blurring the lines between what should and should not be considered born-digital. For example, a newly created digital video may utilize historical film footage that has been converted. It is important to be aware of these discrepancies when thinking about born-digital materials and the effects they have. However, some universals do exist across these definitions. All make clear the fact that born-digital media must originate digitally. Also, they agree that this media must be able to be utilized in a digital form (whether exclusively or otherwise), while it does not have to exist or be used as analog material. Etymology The term "born digital" is of uncertain origin. While it may have occurred to multiple people at various times, it was coined independently in 1993 by web developer Randel (Rafi) Metz, who acquired the domain name "borndigital.com" then and sustained it as a personal website for 18 years until 2011. The domain is now owned by a web developer in New Zealand; an archived copy of the original website exists. Examples of born-digital content Grey literature and communications Much of the grey literature that exists today is conducted almost entirely online, due in part to the accessibility and speed of internet communications. As the products of the vast amount of information created by organizations and individuals on computers, data sets and electronic records must exist in the context of other activities. Common content includes: Email Documents created in word processors and/or observed in viewers. Examples include Microsoft Word, Google Docs, WordPerfect, Apple Pages, LibreOffice Writer, and Adobe Reader. Spreadsheets used to organize and tabulate data are almost entirely digital. Common applications include Microsoft Excel, Google Sheets, LibreOffice Calc, and Lotus 1-2-3 (discontinued). Presentations used to present data and ideas are created with software such as Microsoft PowerPoint, Google Slides, LibreOffice Impress, and Prezi.
Electronic medical records
Social media websites such as Facebook, Twitter, and Reddit, which originated in the networked world and are therefore born-digital by default.

Digital photography
Digital photography has allowed larger groups of people to participate in the process, art form, and pastime of photography. With the advent of digital cameras in the late 1980s, followed by the invention and dissemination of mobile phones capable of photography, sales of digital cameras eventually surpassed those of analog cameras. The early to mid 2000s saw the rise of photo storage websites, such as Flickr and Photobucket, and social media websites dedicated primarily to sharing digital photographs, including Instagram, Pinterest, Imgur, and Tumblr. Digital image file formats include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Portable Network Graphics (PNG), Graphics Interchange Format (GIF), and raw image formats.

Digital art
Digital art is an umbrella term for art created with a computer. Types include visual media, digital animation, computer-aided design, 3D models, and interactive art. Webcomics, comics published primarily on the internet, are an example of exclusively born-digital art. Webcomics follow the tradition of user-generated content and may later be printed by the creator, but as they were originally disseminated through the internet, they are considered to be born-digital media. Many webcomics are published on existing social media websites, while others use webcomic-specific platforms or their own domains.

Electronic books
E-books are books that can be read on the digital screens of computers, smartphones, or dedicated devices. The e-book sector of the book industry has flourished in recent years, with increasing numbers of e-books and e-book readers being developed and sold. E-publishing is particularly favorable to independent authors, because the digital marketplace creates a more direct connection between authors, their works, and the audience. Some publishing houses, including major ones such as Harlequin, have formed imprints for digital-only books in response to this trend. Publishers also offer digital-exclusive publications for use on e-book readers, such as the Kindle. One example of this was the simultaneous launch of Amazon's Kindle 2 with the Stephen King novelette Ur. In recent years, however, sales of e-books from traditional publishers have decreased, due in part to increasing prices.

Video recordings
Videos that are born-digital vary in type and usage. Vlogs, an amalgamation of "video" and "blog," are streamed and consumed on video-sharing websites such as YouTube. Similarly, a web series is a television-like show that is shown exclusively and/or initially on the internet. This does not include the streaming of pre-existing traditional television shows. Examples include Dr. Horrible's Sing-Along Blog, The Lizzie Bennet Diaries, The Guild, and The Twilight Zone (2019).

Sound recordings
Digital sound recordings have played a role since the 1970s, with the acceptance of pulse-code modulation (PCM) in the recording process. Since then, numerous means of storing and delivering digital audio have been developed, including web streams, compact discs, and mp3 audio files. Increasingly, digital audio is available only via download, lacking any kind of tangible counterpart. One example of this trend is the 2008 recording of Hector Berlioz's Symphonie fantastique by the Los Angeles Philharmonic under Gustavo Dudamel.
Available through download only, it has presented problems for libraries that may want to carry this work but cannot due to licensing limitations. Another example is Radiohead's 2007 release In Rainbows, released initially as a digital download. The music industry has changed dramatically with the increase in digital music, specifically digital downloads. The digital format and consumers' growing comfort with it have led to rising sales of single tracks. This growth is clearly still underway, with all of the ten best-selling singles since 2000 having been released since 2007. This does not necessarily signal the demise of CDs, as they are still more popular than digital albums, but it does show that this changing born-digital content is having a significant influence on sales and the industry.

Other media
WebExhibits are websites that act as virtual museums for any variety of content. These often use both primary and secondary historical sources, maps, timelines, infographics, and other data visualizations to showcase the historical past. One example is Clio Visualizing History's Click! The Ongoing Feminist Revolution, a web exhibit about the American women's movement from the 1940s to the present. Clio Visualizing History was founded by Lola Van Wagenen in 1996 to meet the growing need for innovative history projects on multimedia platforms.

Journalism
As existing print publications migrated to born-digital releases, digital-native news websites such as HuffPo and BuzzFeed News have grown substantially. This trend toward web-exclusive content has seen the rise of "news applications," or news articles built with interactive features that cannot be replicated in print. "News apps" are often heavily data-driven, using interactive graphics custom-built for the story by a team of software specialists in addition to the core group of writers and editors. Examples include Baltimore Homicides from The Baltimore Sun, Do No Harm from the Las Vegas Sun, and Snow Fall from The New York Times, which took a team of more than fifteen journalists, web developers, and designers to build.

Key issues

Preservation
Digital preservation involves the conservation and maintenance of digital content. As with other digital objects, preservation must be a continuous and regular undertaking, because these materials do not show the same signs of degradation that print and other physical materials do; invisible processes can instead lead to irreparable damage. In the case of born-digital content, deterioration can occur in the form of bit rot, a process in which digital files degrade over time, and link rot, a process in which URLs point to pages on the internet that are no longer available. Incompatibility is also a concern, with regard to the eventual obsolescence of both the hardware and the software capable of making sense of the documents. Many questions arise regarding what should be archived and preserved and who should undertake the job. Vast amounts of born-digital content are created constantly, and institutions are forced to decide what and how much should be saved. Because linking plays such a large role in the digital setting, whether a responsibility exists to maintain access to links (and therefore context) is debated, especially considering the scope of such a task. Additionally, since publishing is not as clearly delineated in the digital realm and preliminary versions of work are increasingly made available, knowing when to archive presents further complications.
Relevance and accessibility
For digital libraries and repositories that are used as reference materials, such as PBS LearningMedia, which provides educational resources for teachers, staying relevant is of utmost importance. The information must be factually accurate and include context, while remaining aligned with the website's main goals. As in the case of preservation, bit rot, link rot, and incompatibility negatively affect how users access born-digital records, while basic functionality, e.g. video quality and the legibility of any text, is also a concern. Additionally, consideration should be given to how digital content can be made inclusive of people with disabilities, particularly in conjunction with assistive technologies such as screen readers, screen magnifiers, and speech-to-text software. Access is also affected by licensing laws: the lack of ownership of their digital collections leaves libraries with nothing when their license expires, despite the costs already paid.

Licensing
Laws created to protect intellectual property were written for analog works; as such, provisions such as the first-sale doctrine of US copyright law, which enables libraries to lend materials to patrons, have not been applied to the digital realm. Therefore, certain copyrighted digital content that is licensed rather than owned, as is common with many digital materials, is often of limited use, since it cannot be transmitted to patrons at various computers or lent through an interlibrary loan agreement. However, with regard to the preservation functions of libraries and archives and the subsequent need to make copies of born-digital materials, the laws of many countries have been changing, allowing for agreements to be made between these institutions and the rights holders of born-digital content.
Consumers have also had to deal with intellectual property as it concerns their ownership of, and ability to control, the born-digital material that they buy. Piracy proves to be a bigger problem with digital objects, including those that are born-digital, because such materials can be copied and spread in perfect condition with a speed and reach inconceivable for traditional print and physical materials. Again, the first-sale doctrine, which, from a consumer standpoint, allows purchasers of materials to sell or give away items (such as books and CDs), is not yet applied effectively to digital objects. Three reasons for this have been identified by Victor Calaba: "...first, license agreements imposed by software manufacturers typically prohibit exercise of the first sale doctrine; second, traditional copyright law may not support application of the first sale doctrine to digital works; finally, the [DMCA] functionally prevents users from making copies of digitized works and prohibits the necessary bypassing of access control mechanisms to facilitate a transfer."
Institutions are increasingly interested in subscribing to digital versions of journals, as seen in some scholarly journals unbundling their print and electronic editions to allow separate subscriptions; these trends have raised questions about the economic sustainability of print publication. Major publishers such as the American Chemical Society have made significant changes to their print editions in order to cut costs, and many others predict an exclusively digital future.
The increasing subscription prices and predatory practices of scholarly journals, however, provided impetus for the Open Access Movement, which advocates for free, unrestricted access to scholarly papers.

See also
e-Flux
Digital artifactual value
Digital curation
Legal deposit
National edeposit, Australia's system for depositing, storing and managing all born-digital documents published in Australia
Virtual artifact

References
Library science terminology
Academic publishing
Publishing terminology
Digital media
Records management
Online publishing
Born-digital
[ "Technology" ]
2,759
[ "Multimedia", "Digital media" ]
14,568,020
https://en.wikipedia.org/wiki/Huntingtin-associated%20protein%201
Huntingtin-associated protein 1 (HAP1) is a protein which in humans is encoded by the HAP1 gene. This protein was found to bind to the mutant huntingtin protein (mHtt) in proportion to the number of glutamines present in the glutamine repeat region. Huntington's disease (HD), a neurodegenerative disorder characterized by loss of striatal neurons, is caused by an expansion of a polyglutamine tract in the HD protein huntingtin. This gene encodes a protein that interacts with huntingtin, with two cytoskeletal proteins (dynactin and pericentriolar autoantigen protein 1), and with a hepatocyte growth factor-regulated tyrosine kinase substrate (HGS). The interactions with cytoskeletal proteins and a kinase substrate suggest a role for this protein in vesicular trafficking or organelle transport.

Variants
Huntingtin-associated protein 1 has two subtypes: HAP1A and HAP1B.

Function
HAP1 preferentially interacts with mHtt in a polyQ-dependent manner. Its localization and possible interacting partners (other than Htt) have since been characterised, thus elucidating a possible role for this protein in HD pathogenesis. Martin et al. showed that HAP1 is localized in the mitotic spindle of dividing striatal cells, and in associated endosomes, microtubules and vesicles in the basal forebrain and striatal neurons, where HAP1B is preferentially expressed. Furthermore, Page and colleagues identified HAP1 mRNA in the following forebrain limbic nuclei: the amygdala, nucleus accumbens, dentate gyrus, septal nuclei, bed nucleus of the stria terminalis, and hypothalamus. They also identified HAP1 in numerous areas of the cortex, including the anterior cingulate cortex and the limbic cortex. The subcellular localization of HAP1 closely resembles that of Htt. Gutekunst and colleagues used immunogold labeling to identify the subcellular localization of both HAP1 and mHtt, and found a close similarity in the distribution of the two proteins. They did not find HAP1 labeling in protein aggregates in the cytoplasm and postulated that this points to a role for HAP1 in pre-aggregate stages of HD pathogenesis. The role of HAP1 in HD pathogenesis may involve aberration of cell cycle processes, as high immunostaining of HAP1 during the cell cycle has been observed. It may have a part in spindle orientation, microtubule stabilization or chromosome movement. More importantly, HAP1 may also disrupt endocytosis, as it has been detected on vesicles involved in the early stages of this process. It is possible that the non-pathogenic activity of HAP1 is intracellular trafficking and that this is perturbed following its association with mHtt. HAP1 also interacts with proteins other than Htt, and it is likely that their function is altered in HD pathogenesis. These include dynactin p150Glued, a cytoplasmic dynein accessory protein involved in retrograde transport of organelles, and a kinesin-like protein, which is another transport-mediating protein. HAP1 also shows a similar CNS distribution pattern to that of neuronal nitric oxide synthase (nNOS), especially in both of the pedunculopontine nuclei, the supraoptic nucleus, and the olfactory bulb. The possible significance of this interaction is that increased HAP1 interaction with mHtt may also increase nitric oxide (NO), thus facilitating neuronal damage. HAP1 also interacts with other factors involved in vesicular trafficking, including the GABAA receptor, Rho-GEF, and HGS.

References
Proteins
Huntingtin-associated protein 1
[ "Chemistry" ]
808
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,568,229
https://en.wikipedia.org/wiki/HAP2
HAP2 (hapless 2), also known as GCS1 (generative cell-specific protein 1), is a family of membrane fusion proteins found in the sperm cells of diverse eukaryotes, including Toxoplasma, thale cress, and fruit flies. This protein is essential for gamete fusion, and therefore fertilization, in these organisms. In terms of eukaryotic taxonomy, it is found in:
Plants in the broad sense (Archaeplastida)
 Land plants (pollen - "male" gamete) - thale cress and numerous other species, including rice
 Green algae ("minus" mating type) - Chlamydomonas
 Red algae - Cyanidioschyzon
Alveolates
 Apicomplexans (male gametocytes) - Plasmodium, Toxoplasma
 Ciliates (Ciliophora) - Tetrahymena
Euglenozoans - Trypanosoma
Amorphea
 Opisthokonta
  Animals
   Cnidarians - Hydra
   Arthropods (insects) - fruit flies and bees
   Acorn worms - Saccoglossus
 Amoebozoa - the slime mold Dictyostelium discoideum (in a gene-duplicated setup with 3 mating types)
For a full list of organisms in which it occurs, consult InterPro #IPR018928 - taxonomy.

Origin
HAP2/GCS1 belongs to the broader group of fusexins, which includes such proteins as the C. elegans developmental fusogens EFF-1 and AFF-1, haloarchaeal Fsx1, and viral "class II" fusion proteins. In an unrooted phylogenetic tree from 2021, HAP2/GCS1 and EFF-1/AFF-1 occupy two ends of the tree, the middle being occupied by viral sequences; this suggests that they may have been acquired separately. The latest structure-based unrooted phylogenetic tree, from Brukman et al. (2022), takes into account the newly discovered archaeal sequences and shows that Fsx1 groups with HAP2/GCS1 and that the two are separated from EFF-1 by a number of viral sequences. Based on where the root is placed, a number of different hypotheses regarding the history of these families, their horizontal transfer and vertical inheritance, can be generated. Older comparisons excluding archaeal sequences would strongly favor an interpretation in which HAP2/GCS1 was acquired from a virus, but the grouping of Fsx1 with HAP2/GCS1 has opened up the possibility of a much more ancient source.

References
Protein families
HAP2
[ "Biology" ]
553
[ "Protein families", "Protein classification" ]
14,568,414
https://en.wikipedia.org/wiki/Rule%20of%20Sarrus
In matrix theory, the rule of Sarrus is a mnemonic device for computing the determinant of a 3×3 matrix. It is named after the French mathematician Pierre Frédéric Sarrus. Consider a 3×3 matrix

M = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix};

then its determinant can be computed by the following scheme. Write out the first two columns of the matrix to the right of the third column, giving five columns in a row. Then add the products of the diagonals going from top to bottom and subtract the products of the diagonals going from bottom to top. This yields

\det M = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12}.

A similar scheme based on diagonals works for 2×2 matrices:

\det \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11}a_{22} - a_{12}a_{21}.

Both are special cases of the Leibniz formula, which, however, does not yield similar memorization schemes for larger matrices. Sarrus' rule can also be derived using the Laplace expansion of a 3×3 matrix. Another way of thinking of Sarrus' rule is to imagine that the matrix is wrapped around a cylinder, such that the right and left edges are joined.

References

External links
Sarrus' rule at Planetmath
Linear Algebra: Rule of Sarrus of Determinants at khanacademy.org
Linear algebra
Determinants
Mnemonics
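As an illustrative aside (not part of the original article), the scheme above translates directly into code; the following is a minimal Python sketch, checked against an upper-triangular example whose determinant is easy to verify by hand. The function name sarrus_det is our own choice.

    def sarrus_det(m):
        """Determinant of a 3x3 matrix m (a list of three rows) via Sarrus' rule."""
        # The three "top-to-bottom" diagonal products are added ...
        plus = (m[0][0] * m[1][1] * m[2][2]
                + m[0][1] * m[1][2] * m[2][0]
                + m[0][2] * m[1][0] * m[2][1])
        # ... and the three "bottom-to-top" diagonal products are subtracted.
        minus = (m[2][0] * m[1][1] * m[0][2]
                 + m[2][1] * m[1][2] * m[0][0]
                 + m[2][2] * m[1][0] * m[0][1])
        return plus - minus

    # An upper-triangular matrix: its determinant is the product of the
    # diagonal entries, 1 * 5 * 9 = 45, which Sarrus' rule reproduces.
    assert sarrus_det([[1, 2, 3], [0, 5, 6], [0, 0, 9]]) == 45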
Rule of Sarrus
[ "Mathematics" ]
233
[ "Linear algebra", "Algebra" ]
14,568,475
https://en.wikipedia.org/wiki/Allenrolfea%20occidentalis
Allenrolfea occidentalis, the iodine bush, is a low-lying shrub of the Southwestern United States, California, Idaho, and northern Mexico. It grows in sandy, often salty, distinctly alkaline soils, such as desert washes and saline dry lakebeds. It is a common halophyte member of the alkali flat ecosystem.

Description
The knobby green stems are fleshy and appear jointed at the internodes between segments. Often the segments are so short they are nearly round. The leaves appear as flaky scales scattered across the surface of the stems. The genus was named for the English botanist Robert Allen Rolfe. It grows up to tall. The seeds of the iodine bush were used as food in prehistoric North America.

References

Further reading
Gul, B., D. J. Weber, and M. A. Khan. (2001). Growth, ionic and osmotic relations of an Allenrolfea occidentalis population in an inland salt playa of the Great Basin Desert. Journal of Arid Environments 48(4) 445–60.

External links
Jepson Manual Treatment
USDA Plants Profile
Photo gallery
Amaranthaceae
Halophytes
Flora of the Southwestern United States
Flora of Northwestern Mexico
Flora of Idaho
Flora of the Northwestern United States
Flora of California
Flora of the California desert regions
Flora of the Sonoran Deserts
Flora of the Great Basin
Plants described in 1871
Taxa named by Sereno Watson
Flora without expected TNC conservation status
Allenrolfea occidentalis
[ "Chemistry" ]
308
[ "Halophytes", "Salts" ]
14,568,730
https://en.wikipedia.org/wiki/L-838%2C417
L-838,417 is an anxiolytic drug used in scientific research. It has similar effects to benzodiazepine drugs, but is structurally distinct and so is classed as a nonbenzodiazepine anxiolytic. The compound was developed by Merck Sharp & Dohme. L-838,417 is a subtype-selective GABAA positive allosteric modulator, acting as a partial agonist at the α2, α3 and α5 subtypes. However, it acts as a negative allosteric modulator at the α1 subtype and has little affinity for the α4 or α6 subtypes. This gives it selective anxiolytic effects, which are mediated mainly by the α2 and α3 subtypes, but with little of the sedative or amnestic effects that are mediated by α1. Some sedation might still be expected due to its activity at the α5 subtype, which can also cause sedation; however, no sedative effects were seen in animal studies even at high doses, suggesting that L-838,417 acts primarily at the α2 and α3 subtypes, with the α5 subtype of lesser importance. As might be predicted from its binding profile, L-838,417 substitutes for the anxiolytic benzodiazepine chlordiazepoxide in animals, but not for the hypnotic imidazopyridine drug zolpidem. The synthesis of L-838,417 and similar compounds was described in 2005 in the Journal of Medicinal Chemistry. In neuropathic pain animal models, it has been shown that stabilizing the potassium chloride cotransporter 2 (KCC2) at neuronal membranes can not only potentiate L-838,417-induced analgesia in rats, but also rescue its analgesic potential at high doses, revealing a novel strategy for analgesia in pathological pain through combined targeting of the appropriate GABAA receptor subtypes (i.e. α2, α3) and restoration of Cl− homeostasis.

See also
α5IA
SL-651,498

References
Anxiolytics
Fluoroarenes
Triazolopyridazines
Ethers
Triazoles
GABAA receptor positive allosteric modulators
Tert-butyl compounds
L-838,417
[ "Chemistry" ]
496
[ "Organic compounds", "Functional groups", "Ethers" ]
14,568,821
https://en.wikipedia.org/wiki/Naum%20Z.%20Shor
Naum Zuselevich Shor (1 January 1937 – 26 February 2006) was a Soviet and Ukrainian mathematician specializing in optimization. He made significant contributions to nonlinear and stochastic programming, numerical techniques for non-smooth optimization, discrete optimization problems, matrix optimization, and dual quadratic bounds in multi-extremal programming problems. Shor became a full member of the National Academy of Sciences of Ukraine in 1998.

Subgradient methods
N. Z. Shor is well known for his method of generalized gradient descent with space dilation in the direction of the difference of two successive subgradients (the so-called r-algorithm), created in collaboration with Nikolay G. Zhurbenko. The ellipsoid method was re-invigorated by A. S. Nemirovsky and D. B. Yudin, who developed a careful complexity analysis of its approximation properties for problems of convex minimization with real data. However, it was Leonid Khachiyan who provided the rational-arithmetic complexity analysis, using an ellipsoid algorithm, that established that linear programming problems can be solved in polynomial time. It has long been known that the ellipsoidal methods are special cases of these subgradient-type methods.

R-algorithm
Shor's r-algorithm is a method for unconstrained minimization of (possibly) non-smooth functions that has been somewhat popular despite an unknown convergence rate. It can be viewed as a quasi-Newton method, although it does not satisfy the secant equation. Although the method involves subgradients, it is distinct from his subgradient method described above.

References
Notes

Bibliography

External links
ORB Newsletter Issue 5 contains an article with a short biography
Numerical analysts
Theoretical computer scientists
Mathematical analysts
20th-century Ukrainian mathematicians
Soviet computer scientists
Taras Shevchenko National University of Kyiv alumni
Academic staff of the Moscow Institute of Physics and Technology
Members of the National Academy of Sciences of Ukraine
Recipients of the USSR State Prize
1937 births
2006 deaths
Ukrainian Jews
Laureates of the State Prize of Ukraine in Science and Technology
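As an illustrative aside (not part of the original article), the following Python sketch shows a plain subgradient method with diminishing step sizes, the basic scheme that Shor's r-algorithm refines with space dilation; it is not an implementation of the r-algorithm itself, and the example function and all names are our own.

    import numpy as np

    def subgradient_method(f, subgrad, x0, a=1.0, iters=500):
        """Minimize a convex, possibly non-smooth function.  `subgrad(x)` must
        return one subgradient of f at x.  Uses the classical diminishing
        step sizes a/k and tracks the best iterate seen, since subgradient
        steps do not decrease f monotonically."""
        x = np.asarray(x0, dtype=float)
        best, best_val = x.copy(), f(x)
        for k in range(1, iters + 1):
            x = x - (a / k) * subgrad(x)
            if f(x) < best_val:
                best, best_val = x.copy(), f(x)
        return best

    # Example: minimize the non-smooth f(x) = ||x||_1, minimized at the origin.
    f = lambda x: np.abs(x).sum()
    subgrad = lambda x: np.sign(x)    # a valid subgradient of the 1-norm
    print(subgradient_method(f, subgrad, x0=[3.0, -2.0]))   # close to [0, 0]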
Naum Z. Shor
[ "Mathematics" ]
429
[ "Mathematical analysis", "Mathematical analysts" ]
14,569,019
https://en.wikipedia.org/wiki/TaKeTiNa%20Rhythm%20Process
The TaKeTiNa Rhythm Process is a musical meditative group process for people who want to develop their awareness of rhythm. It was developed in the 1970s by the Austrian musician and composer Reinhard Flatischler. In a TaKeTiNa process, there are three different rhythmic layers, represented by the voice, claps, and steps, that continue simultaneously. Vocalization and clap rhythms, accompanied by the berimbau, constantly change, while the steps, supported by a surdo drum, remain the same. The surdo stabilizes the basic rhythm of the steps, while call-and-response singing serves to destabilize and re-stabilize the rhythmic movements. In this process, the simultaneity of stabilization and destabilization creates a disturbance that allows participants to repeatedly fall out of, and then fall back into, rhythm. Participants are guided into the experience of "rhythm archetypes", rhythmic "images anchored deep in human consciousness". According to Flatischler, the support of the group allows the individual participant to go into his or her own process, building deep musical and personal trust. TaKeTiNa is used in academic and clinical settings and in corporate trainings worldwide.

References

Further reading
Flatischler, Reinhard. (1992). The Forgotten Power of Rhythm: Taketina. Mendicino, CA: Life Rhythm.
Flatischler, Reinhard. (2007). Rhythm for Evolution: Das TaKeTiNa-Rhythmusbuch. Mainz, Germany: Schott.
Peyser, R. (1998). "Primal rhythm - ancient healer: TaKeTiNa with Reinhard Flatischler." Retrieved December 10, 2007: http://www.randypeyser.com/flatischler.htm
Peyser, R. (Fall 2009). TaKeTiNa: Rewiring with rhythm. Retrieved July 29, 2010: http://issuu.com/consciousdancer/docs/issue_8
Rothman, T. (2001). "Ta Ke Ti Na - Listening to the Pulse of Life." Deutschwaldstrasse, Austria: Ta Ke Ti Na Institute. Retrieved December 10, 2007: http://academic.evergreen.edu/curricular/transcendentpractices/allprogram/Winter%20Ta%20Ke%20Ti%20Na.doc
Toms, J. W. (December 2007). Cultivating enlightenment. In "New Dimensions Newsletter". Retrieved December 11, 2007: http://www.ndbroadcasting.org/data/newsletter/200712.html

External links
Homepage
Meditation
Music therapy
Rhythm and meter
1970s introductions
TaKeTiNa Rhythm Process
[ "Physics" ]
590
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
14,569,196
https://en.wikipedia.org/wiki/RWJ-51204
RWJ-51204 is an anxiolytic drug used in scientific research. It has similar effects to benzodiazepine drugs, but is structurally distinct and so is classed as a nonbenzodiazepine anxiolytic. RWJ-51204 is a nonselective partial agonist at GABAA receptors. It produces primarily anxiolytic effects at low doses, with sedative, ataxic and muscle-relaxant effects appearing only at some 20 times the effective anxiolytic dose. It was discovered by researchers at the pharmaceutical company Johnson & Johnson, but its development has been discontinued.

References
Anxiolytics
Fluoroarenes
Ethers
Anilides
Ketones
Benzimidazoles
GABAA receptor positive allosteric modulators
Abandoned drugs
2-Fluorophenyl compounds
RWJ-51204
[ "Chemistry" ]
177
[ "Ketones", "Drug safety", "Functional groups", "Organic compounds", "Ethers", "Abandoned drugs" ]
14,569,866
https://en.wikipedia.org/wiki/Abu%20Sayda%20chlorine%20bombing
The Abu Sayda bombing was a chlorine car bombing attack that occurred on 15 May 2007 in an open-air market in Abu Sayda, a Shia village in the Iraqi Diyala Governorate. The attack killed up to 45 people and wounded 60 more, the highest death toll of all chlorine bombings in Iraq. Iraqi and American military sources initially denied the use of chlorine.

References
2007 murders in Iraq
Chemical weapons attacks
Bombings in the Iraqi insurgency (2003–2011)
Car and truck bombings in Iraq
Chemical terrorism
Marketplace attacks in Iraq
Mass murder in 2007
May 2007 events in Iraq
Terrorist incidents in Iraq in 2007
Violence against Shia Muslims in Iraq
War crimes in the Iraq War
Diyala Governorate in the Iraq War
Car and truck bombings in 2007
Abu Sayda chlorine bombing
[ "Chemistry" ]
153
[ "Chemical terrorism", "Chemical weapons attacks", "Chemical weapons" ]
14,570,219
https://en.wikipedia.org/wiki/Adhesive%20bonding%20of%20semiconductor%20wafers
Adhesive bonding (also referred to as gluing or glue bonding) describes a wafer bonding technique in which an intermediate layer is applied to connect substrates of different types of materials. The connections produced can be soluble or insoluble. The commercially available adhesives can be organic or inorganic and are deposited on one or both substrate surfaces. Adhesives, especially SU-8 and benzocyclobutene (BCB), are specialized for the production of MEMS and electronic components. The procedure enables bonding temperatures from 1000 °C down to room temperature. Adhesive bonding has the advantage of a relatively low bonding temperature as well as the absence of electric voltage and current. Because the wafers are not in direct contact, this procedure enables the use of different substrates, such as silicon, glass, metals or other semiconductor materials. A drawback is that small structures become wider during patterning, which hampers the production of an accurate intermediate layer with tight dimensional control. Furthermore, the possibility of corrosion due to out-gassed products, thermal instability and penetration of moisture limits the reliability of the bonding process. Another disadvantage is that hermetically sealed encapsulation is not possible, owing to the higher permeability of organic adhesives to gas and water molecules.

Overview
Adhesive bonding with organic materials such as BCB or SU-8 has simple process properties and the ability to form high-aspect-ratio microstructures. The bonding procedure is based on a polymerization reaction of organic molecules that forms long polymer chains during annealing. This cross-linking reaction turns BCB and SU-8 into a solid polymer layer. The intermediate layer is applied by spin-on, spray-on, screen printing, embossing, dispensing or block printing on one or both substrate surfaces. The adhesive layer thickness depends on the viscosity, the rotational speed and the applied tool pressure. The procedural steps of adhesive bonding are divided into the following:
Cleaning and pre-treatment of the substrate surfaces
Application of adhesive, solvent or other intermediate layers
Contacting the substrates
Hardening the intermediate layer
The most established adhesives are polymers that enable connections between different materials at temperatures ≤ 200 °C. Thanks to this low process temperature, metal electrodes, electronics and various microstructures can be integrated on the wafer. The structuring of polymers as well as the realization of cavities over movable elements are possible using photolithography or dry etching. The hardening conditions depend on the materials used. Hardening of the adhesive is possible:
at room temperature
through heating cycles
using UV light
by applying pressure

Process parameters
The most important process parameters for achieving a high bonding strength are:
adhesive material
coating thickness
bonding temperature
processing time
chamber pressure
tool pressure

Surface preparation of plastics
There are three major requirements for creating a desirable surface for adhesive bonding of plastics: the weak boundary layer of the given material must be removed or chemically modified to create a strong boundary layer; the surface energy of the adherend should be higher than the surface energy of the adhesive for good wetting; and the surface profile can be improved to provide mechanical interlocking. Meeting one of these major requirements will improve bonding; however, the most desirable surface will incorporate all three requirements.
Numerous techniques are available to help produce a desirable surface for adhesive bonding.

Degreasing
When preparing a surface for adhesive bonding, all oil and grease contamination must be removed in order to form a strong bond. Although the surface may appear to be clean, it is important to still use the degreasing process. Prior to performing the degreasing process, the compatibility of the solvent and the adherend must be considered to prevent irreversible damage to the surface or part.

Vapor degreasing
One method of degreasing is vapor degreasing, where the adherend is dipped in a solvent. When removed from the solvent, the vapors condense on the surface of the adherend and dissolve any contaminants that had existed. These contaminants then drip off the adherend with the condensed vapors.

Solvent wiping
The other method of degreasing requires a cloth or rag soaked in solvent, which can be used to wipe down the surface of the adherend to remove contaminants. It is important that all residue left behind by the solvents be removed, so that there are no detrimental effects on the adhesive bonding.

Following the degreasing process
After degreasing, a good test to determine the cleanliness of the surface is to use a drop of water. If the drop spreads on the surface, a low contact angle and good wettability have been achieved, which indicates the surface is clean and ready for application of the adhesive. If the drop beads up or retains its shape, the degreasing process should be repeated.

Abrasion
In general, abrasion is superior to other methods of surface preparation because it is simple to perform and does not produce a significant amount of waste. To prepare the adherend for bonding, the surface can be sanded or grit blasted with an abrasive material to roughen the surface and remove any loose material. Rough surfaces produce stronger bonds because they have an increased surface area for the adhesive to bond to, compared to a relatively smooth surface. In addition, roughening the surface will also increase mechanical interlocking. Following abrasion, the adherend should always be wiped with solvent or an aqueous detergent solution to clean the surface of any oils and loose material, and then dried. After this process is complete, the adhesive can be applied.

Peel ply
For a peel ply, a thin, woven piece of material is applied to the adherend during fabrication. Because the material is woven, it leaves a tortuous surface when removed, which improves bonding by mechanical interlocking. Prior to adhesive bonding, the woven material acts to protect the surface of the adherend from contaminants. When an adhesive is ready to be applied, the material can be peeled off, leaving a rough and clean surface for bonding.

Corona discharge treatment
Corona discharge treatment (CDT) is typically used to improve the adhesion of ink or coatings on plastic films. In CDT, an electrode is connected to a high-voltage source. The film travels on a roller that is covered with a dielectric layer and is grounded. When a voltage is applied, the electrical discharge ionizes the air and forms a plasma. In doing so, the surface of the film is oxidized, thus improving wetting and adhesion. Additionally, the discharge reacts with molecules of the adherend to form free radicals, which react with oxygen and eventually form polar groups that increase the surface energy of the adherend.
Another way CDT improves bonding is that it roughens the adherend by removing the amorphous regions of the surface, which increases the surface area and improves adhesive bonding. Depending on the type of adherend being treated with CDT, the treatment times may differ; some adherends may require longer treatment times to achieve the same surface energy.

Flame treatment
In flame treatment, a mixture of gas and air is used to produce a flame that is run over the surface of the adherend. The flame must be oxidizing in order to produce an effective treatment, which means that the flame is blue in color. Flame treatment can be performed with a setup similar to that of CDT, in which plastic film travels across a roller while the flame contacts it. In addition to more sophisticated methods, flame treatment can also be done by hand with the use of a torch; however, even and steady treatment of the surface is then more difficult to obtain. Once the flame treatment is completed, the part can be gently cleaned with water and air dried, which ensures that an excess of oxides is not formed. Control during the flame treatment is critical: too much treatment will degrade the plastic, which leads to poor adhesion, while too little treatment will not modify the surface enough and will also lead to poor adhesion. An additional aspect of flame treatment that must be considered is possible deformation of the adherend. Precise control of the flame will prevent this from occurring.

Plasma treatment
A plasma is a gas excited by electrical energy that contains approximately equal densities of positively and negatively charged ions. The interaction of the electrons and ions in the plasma with the surface oxidizes the surface and forms free radicals. The oxidation of the surface removes unwanted contaminants and improves adhesion. In addition to removing contaminants, the plasma treatment also introduces polar groups that increase the surface energy of the adherend. Plasma treatment can produce adhesive bonds up to four times stronger than those of chemically or mechanically treated adherends. In general, plasma treatment is not used often in industry because it must be performed below atmospheric pressure, which makes it an expensive and less cost-effective process.

Chemical treatment
Chemical treatments are used to change the composition and structure of the surface of the adherend and are often used in addition to degreasing and abrasion to maximize the strength of the adhesive bond. They also increase the chance that other bonding forces, such as hydrogen, dipole, and van der Waals bonding, will occur between the adherend and the adhesive. Chemical solutions can be applied to the surface of an adherend either to clean or to alter the surface, depending on the chemical used. Solvents are used simply to clean the surfaces of any contaminants or debris; they do not increase the surface energy of the adherend. To modify the surface of the adherend, acid solutions can be used to etch and oxidize the surface. These solutions must be carefully prepared in order to ensure that good bonding strength develops. The treatments can be made more effective by increasing the time and temperature of the application. However, too long a treatment can lead to excess reaction products, which can hinder the bonding performance between the adhesive and the adherend.
As with other surface preparation methods, a good test of the success of a chemical treatment is to put a drop of water on the surface of the adherend. If the drop flattens or spreads out, the surface of the adherend has good wettability and should allow for good bonding. A final consideration when using chemical treatments is safety. The chemicals used in the treatments can be hazardous to human health, and before using any of them, the material safety data sheet for the particular chemical should be consulted.

Ultraviolet radiation treatment
Ultraviolet (UV) radiation plays a role in numerous surface treatments, including some of the aforementioned treatments, although it may not be the dominating factor. An example of a UV treatment where UV radiation is the primary factor affecting the surface preparation is the use of excimer lasers. Excimer lasers have extremely high energy and are used to create pulses of radiation. When the laser makes contact with the surface of the adherend, it removes a layer of material, thereby cleaning the surface. In addition, if the UV laser treatment is performed in the presence of air, the surface of the adherend can be oxidized, thus improving the surface energy. Finally, the radiation pulses can be used to create specific surface patterns that increase the surface area and improve bonding.

SU-8 photoresist

Overview
SU-8 is a three-component UV-sensitive negative photoresist based on an epoxy resin, gamma-butyrolactone, and a triarylsulfonium salt. SU-8 polymerizes at approximately 100 °C and is temperature-stable up to 150 °C. It is CMOS- and bio-compatible and has excellent electrical, mechanical and fluidic properties. It also has a high cross-linking density, high chemical resistance, and high thermal stability. The viscosity depends on the mixture with the solvent for different layer thicknesses (1.5 to 500 μm); using multilayer coating, a layer thickness of up to 1 mm is reachable. The lithographic structuring is based on a triarylsulfonium photoinitiator that releases a Lewis acid during UV exposure. This acid works as a catalyst for the polymerization. The cross-linking of the molecules is activated over different annealing steps, the so-called post-exposure bake (PEB). Using SU-8, a high bonding yield can be achieved. In addition, the substrate flatness, clean-room conditions and the wettability of the surface are important factors for achieving good bonding results.

Procedural steps
The standard process consists of applying SU-8 to the top wafer by spin-on or spray-on of thin layers (3 to 100 μm). Subsequently, the photoresist is structured using direct UV light exposure, though this can also be achieved through deep reactive-ion etching (DRIE). During coating and structuring of the SU-8, the tempering steps before and after exposure have to be considered: because of thermal layer stress, there is a risk of crack formation. While coating the photoresist, the formation of voids due to layer thickness inhomogeneity has to be avoided. The adhesive layer thickness should be larger than the flatness imperfection of the wafer to establish good contact.
The procedural steps, based on a typical example, are:
Cleaning
Top wafer: thermal oxidation, dehydration
Spin coating the SU-8
Softbake: 120 s at 65 °C, then 300 s at 95 °C
Cooling down
Exposure with 165 to 200
Post-exposure bake: 2 to 120 min at 50 to 120 °C
Relaxation time to room temperature
Development
Rinsing and dry spinning
Hard bake at 50 to 150 °C for 5 to 120 min
For non-planar wafer surfaces or free-standing structures, spin coating is not a very successful SU-8 deposition method. As a result, spray-on is mainly used on structured wafers. The bonding takes place at the polymerization temperature of SU-8, approximately 100 °C. The soft-bake leaves a high residual solvent content, which minimizes intrinsic stress and improves cross-linking. The SU-8 layer is patterned using soft-contact exposure followed by a post-exposure bake. The non-exposed SU-8 is removed by immersion in, e.g., propylene glycol methyl ether acetate (PGMEA). To ensure void-free bonding, a homogeneous SU-8 layer thickness over the wafer surface is important. To ensure good contact of the wafer pair, a constant pressure between 2.5 and 4.5 bar is applied during bonding. The frames should be kept above the non-flatness value of the wafer, since defects are usually caused by the curvature of the wafer. A shear strength of the bonded wafer pair of about 18 to 25 MPa is achievable.

Examples
Adhesive bonding using SU-8 is applicable to zero-level packaging technology for low-cost MEMS packaging. Metallic feed-throughs can be used for electrical connections to packaged elements through the adhesive layer. Biomedical and microfluidic devices are also fabricated based on SU-8 adhesive layers, as are microfluidic channels, movable micromechanical components, optical waveguides and UV-LIGA components.

Benzocyclobutene (BCB)

Overview
Benzocyclobutene (BCB) is a hydrocarbon that is widely used in electronics manufacturing. BCB exists in a dry-etch and a photosensitive version, each requiring different procedural steps for structuring. It releases only small amounts of by-products during curing, which enables a void-free bond. This polymer ensures very strong bonds and excellent chemical resistance to numerous acids, alkalis and solvents. BCB is over 90% transparent to visible light, which enables its use for optical MEMS applications. Compared to other polymers, BCB has a low dielectric constant and low dielectric loss. The polymerization of BCB takes place at a temperature around 250 to 300 °C, and the material is stable up to 350 °C. Using BCB does not ensure sufficient hermeticity of sealed cavities for MEMS.

Procedural steps
The procedural steps are:
Cleaning
Supplying the adhesion promoter
Drying of the primer
BCB deposition
For photosensitive BCB: exposure and development
For dry-etch BCB: pre-bake/soft-cure, then patterning of the BCB layer by lithography and dry etching
Bonding at a specific temperature and ambient pressure for a specific amount of time
Post-bake/hard-cure to form a solid BCB polymer layer
The wafers can be cleaned using H2O2 + H2SO4 or oxygen plasma. The cleaned wafers are rinsed with DI water and dried at elevated temperature, e.g. 100 to 200 °C for 120 min. The adhesion promoter is deposited with a specific thickness, e.g. spin-coated or contact-printed onto the wafer, to improve the bonding strength. Spray coating is preferable when the adhesive is deposited on free-standing structures.
Subsequently, the BCB layer, usually 1 to 50 μm thick, is spin- or spray-coated onto the same wafer. Because cross-linking of the polymer would otherwise leave the patterned layer with a lower bond strength than the unpatterned layer, a soft-curing step is applied before bonding. The pre-curing of the BCB takes place for several minutes on a hot plate at a specific temperature ≤ 300 °C. The soft cure prevents bubble formation and unbonded areas as well as distortion of the adhesive layer during compression, improving the alignment accuracy. The degree of polymerization should not be over 50%, so that the layer is robust enough to be patterned and still sufficiently adhesive to be bonded. If the BCB is hard-baked (far over 50%), it loses its adhesive properties, which results in an increased amount of void formation. Likewise, if the soft-curing temperature is above 210 °C, the adhesive cures too much, so that the material is not soft and sticky enough to achieve a high bonding strength. The substrates with the intermediate layer are pressed together, and subsequent curing results in a bond. The post-bake process is applied at 180 to 320 °C for 30 to 240 min, usually in a specific atmosphere or in vacuum in the bond chamber; this is necessary to hard-cure the BCB. The vacuum prevents air from being trapped at the bond interface and pumps out the gases out-gassed by residual solvents during annealing. The temperature and the curing time are interrelated: with a higher temperature, the curing time can be reduced, owing to quicker cross-linking. The final bonding layer thickness depends on the thickness of the cured BCB, the spinning speed and the shrink rate.

Examples
Adhesive bonding using a BCB intermediate layer is a possible method for packaging and sealing of MEMS devices, including structured Si wafers. Its use is indicated for applications that do not require hermetic sealing, e.g. MOEMS mirror arrays, RF MEMS switches and tunable capacitors. BCB bonding is used in the fabrication of channels for fluidic devices, for transferring protruding surface structures, and for CMOS controller wafers and integrated SMA microactuators.

References
Electronics manufacturing
Packaging (microfabrication)
Semiconductor technology
Wafer bonding
Adhesive bonding of semiconductor wafers
[ "Materials_science", "Engineering" ]
4,021
[ "Electronics manufacturing", "Microtechnology", "Packaging (microfabrication)", "Electronic engineering", "Semiconductor technology" ]
14,570,400
https://en.wikipedia.org/wiki/Rare-cutter%20enzyme
A rare-cutter enzyme is a restriction enzyme with a recognition sequence that occurs only rarely in a genome. An example is NotI, which cuts after the first GC of a 5'-GCGGCCGC-3' sequence. Restriction enzymes with seven- and eight-base-pair recognition sequences are often also called rare-cutter enzymes (six-bp recognition sequences are much more common). In a random sequence with equal base frequencies, rare-cutter enzymes with 7-nucleotide recognition sites cut on average once every 4^7 bp (16,384 bp), and those with 8-nucleotide recognition sites once every 4^8 bp (65,536 bp). They are used in top-down mapping to cut a chromosome into chunks of these sizes on average.

External links
Bio-Medicine.com's definition
Molecular biology
Biotechnology
Restriction enzymes
EC 3.1
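As an illustrative aside (not part of the original article), the 1/4^n expectation is easy to check empirically: the Python sketch below counts NotI sites in a uniformly random sequence and compares the count with the prediction. The function name count_sites is our own choice.

    import random

    def count_sites(seq, site):
        """Count (possibly overlapping) occurrences of `site` in `seq`."""
        return sum(1 for i in range(len(seq) - len(site) + 1)
                   if seq[i:i + len(site)] == site)

    # NotI's 8-bp site should appear about once every 4**8 = 65,536 bp
    # in a random sequence with equal base frequencies.
    random.seed(0)
    genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
    expected = len(genome) / 4 ** 8            # about 15.3 sites per Mbp
    observed = count_sites(genome, "GCGGCCGC")
    print(observed, "observed vs", round(expected, 1), "expected")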
Rare-cutter enzyme
[ "Chemistry", "Biology" ]
170
[ "Genetics techniques", "Biotechnology stubs", "Biotechnology", "Biochemistry stubs", "nan", "Molecular biology", "Biochemistry", "Restriction enzymes" ]
14,570,655
https://en.wikipedia.org/wiki/Macroscopic%20quantum%20state
A macroscopic quantum state is a state of matter in which macroscopic properties, such as mechanical motion, thermal conductivity, electrical conductivity and viscosity, can be described only by quantum mechanics rather than merely classical mechanics. This occurs primarily at low temperatures where little thermal motion is present to mask the quantum nature of a substance. Macroscopic quantum phenomena can emerge from coherent states of superfluids and superconductors. Quantum states of motion have been directly observed in a macroscopic mechanical resonator (see quantum machine).

References
Quantum mechanics
Macroscopic quantum state
[ "Physics" ]
114
[ "Theoretical physics", "Quantum mechanics", "Quantum physics stubs" ]
14,570,956
https://en.wikipedia.org/wiki/C7H14O
The molecular formula C7H14O (molar mass: 114.18 g/mol) may refer to:
Cyclohexylmethanol
1-Methylcyclohexanol
Heptanal, or heptanaldehyde
Heptanones
2-Heptanone
3-Heptanone
4-Heptanone
Molecular formulas
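As an illustrative aside (not part of the original article), the stated molar mass can be reproduced from standard atomic weights; the minimal Python check below uses rounded atomic weights, so its result agrees with the quoted figure only to within rounding.

    # Standard atomic weights (IUPAC, rounded).
    weights = {"C": 12.011, "H": 1.008, "O": 15.999}

    # C7H14O: 7 carbons, 14 hydrogens, 1 oxygen.
    molar_mass = 7 * weights["C"] + 14 * weights["H"] + 1 * weights["O"]
    print(round(molar_mass, 2))   # 114.19, matching the quoted 114.18 g/mol
                                  # to within rounding of the atomic weights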
C7H14O
[ "Physics", "Chemistry" ]
91
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
14,571,189
https://en.wikipedia.org/wiki/Hexanal
Hexanal, also called hexanaldehyde or caproaldehyde, is an alkyl aldehyde used in the flavor industry to produce fruity flavors. Its scent resembles freshly cut grass, like that of cis-3-hexenal. It is potentially useful as a natural extract that prevents fruit spoilage. It occurs naturally, and contributes to a hay-like "off-note" flavor in green peas. The first synthesis of hexanal was published in 1907 by P. Bagard.

References
Alkanals
Substances discovered in the 1900s
Hexanal
[ "Chemistry" ]
116
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
14,572,511
https://en.wikipedia.org/wiki/Melem
In chemistry, melem is a compound with the formula C6H6N10; specifically, it is 2,5,8-triamino-heptazine or 2,5,8-triamino-tri-s-triazine, whose molecule can be described as that of heptazine with the three hydrogen atoms replaced by amino groups. It is a white crystalline solid.

Preparation
Melem can be prepared by thermal decomposition of various C−N−H compounds, such as melamine C3N3(NH2)3, dicyandiamide H4C2N4, ammonium dicyanamide NH4[N(CN)2], or cyanamide H2CN2, at 400 to 450 °C.

Structure and properties

Crystal structure
Melem crystallizes in the space group P21/c (No. 14), with parameters a = 739.92(1) pm, b = 865.28(3) pm, c = 1338.16(4) pm, β = 99.912(2)°, and Z = 4. The almost-planar molecules are arranged in parallel layers spaced 327 pm apart. The molecule is in the triamino form, rather than one of the possible tautomers.

Thermal decomposition
When heated above 560 °C, melem transforms into a graphite-like C−N material.

Melemium cations
Melem accepts up to three protons, yielding cations known as melemium ions. Salts described in the literature include melemium sulfate, melemium perchlorate, melemium hydrogensulfate, and two melemium methylsulfonates. The protons can be inserted at any of the six outer nitrogen atoms of the heptazine core, yielding many tautomers of apparently similar energies.

See also
Triazine, with a single C−N ring
Melamine, triamino triazine
Melaminium, a cation derived from melamine
Melam, a condensation dimer of melamine
Melamium, a cation derived from melam
Melon, a condensation oligomer of melem

References
Heterocyclic compounds with 3 rings
Nitrogen heterocycles
Tricyclic compounds
Amines
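As an illustrative aside (not part of the original article), the cell parameters above imply a unit-cell volume via the standard monoclinic formula V = a·b·c·sin β; the short Python check below is a generic calculation, not a value taken from the cited structure report.

    from math import sin, radians

    # Monoclinic unit cell: V = a * b * c * sin(beta)
    a, b, c = 739.92, 865.28, 1338.16   # cell edges in pm
    beta = radians(99.912)              # monoclinic angle, degrees -> radians
    volume = a * b * c * sin(beta)      # in pm^3
    # 1 nm = 1000 pm, so 1 nm^3 = 1e9 pm^3.
    print(round(volume / 1e9, 3), "nm^3")   # about 0.844 nm^3, holding Z = 4 molecules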
Melem
[ "Chemistry" ]
470
[ "Amines", "Bases (chemistry)", "Functional groups" ]
14,572,649
https://en.wikipedia.org/wiki/Double%20Helix%20%28novel%29
Double Helix (2004) is a novel by American writer Nancy Werlin about 18-year-old Eli Samuels, who works for a famous molecular biologist named Dr. Quincy Wyatt. There is a mysterious connection between Dr. Wyatt and Eli's parents, and all Eli knows about it is that it has something to do with his mother, who has Huntington's disease. Because of that connection, Eli's father is strongly against Eli working there. The job is perfect, and the wages are great, but Eli can't help noticing that Dr. Wyatt seems a little too interested in him. Later on, as Eli continues to work in the lab, he discovers with the help of Kayla Matheson, Dr. Wyatt's supposed "niece," that he and Kayla are the products of a highly unethical eugenics experiment.

Characters
Eli Samuels: Eli is 18 years old and 6′9″ (206 centimeters) tall. He is an A student and the salutatorian of his high school class. His mother has Huntington's disease, which he could have too.
Jonathan Samuels: Jonathan Samuels is Eli's father, who loved his wife. He has a problem with Eli working at Wyatt Transgenics because of things that happened in the past. He is initially completely unwilling to tell Eli anything about his mother's relationship with Dr. Wyatt.
Dr. Quincy Wyatt: Dr. Quincy Wyatt is a famed geneticist, considered to be on par with Mendel, Watson, and Crick. He offers Eli a job and displays an unexplained interest in him.
Vivian Fadiman: Vivian Fadiman is Eli's girlfriend and the valedictorian at his high school. All she wants is to be part of his life, and she supports him in everything he does. It is hard for her to understand why Eli hides major parts of his life from her. Eli is devoted to her, though they do go through some rough times.
Kayla Matheson: A year older than Eli, she gets to know him through Dr. Wyatt. He is attracted to her because of her beauty and athleticism.
Ava Samuels: Ava Samuels was Eli's mother and lived in a nursing home because of her Huntington's disease. She had a mysterious connection with Dr. Wyatt.

Themes
The story addresses a number of recent scientific breakthroughs and uses them as plot devices. For instance, the Human Genome Project, which documented all the genes and DNA in the human genetic makeup, is discussed when Dr. Wyatt explains his genetic testing to Eli for the first time. Scientific American published an article detailing the possible uses for the information gathered from the Human Genome Project. Modern advances in gene studies can currently detect and, in some cases, even predict the presence of a genetic abnormality. In Double Helix, this ability to detect flaws before birth was used to genetically engineer a "more perfect son, devoid of flaws and with a proper chance to live free of Huntington's." For some people, the biggest issue with genetic engineering is whether or not to seek out and act on knowledge about genetic flaws. Double Helix attempts to explore the life-saving and life-destroying aspects of genetic engineering. The book proposes that, while Eli's life was saved by avoiding the Huntington's disease gene, his concept of life and self was destroyed when he found out he was genetically engineered to be a certain way.

Reception
Double Helix received reviews from Booklist, Kirkus Reviews, and Publishers Weekly.

References
2004 American novels
American young adult novels
Novels about genetic engineering
Molecular biology
Double Helix (novel)
[ "Chemistry", "Biology" ]
754
[ "Biochemistry", "Molecular biology" ]
14,573,037
https://en.wikipedia.org/wiki/Monge%27s%20theorem
In geometry, Monge's theorem, named after Gaspard Monge, states that for any three circles in a plane, none of which is completely inside one of the others, the intersection points of the three pairs of external tangent lines are collinear. For any two circles in a plane, an external tangent is a line that is tangent to both circles but does not pass between them. There are two such external tangent lines for any two circles. Each such pair has a unique intersection point in the extended Euclidean plane. Monge's theorem states that the three such points given by the three pairs of circles always lie on a straight line. In the case of two of the circles being of equal size, the two external tangent lines are parallel. In this case Monge's theorem asserts that the other two intersection points must lie on a line parallel to those two external tangents. In other words, if the two external tangents are considered to intersect at the point at infinity, then the other two intersection points must be on a line passing through the same point at infinity, so the line between them makes the same angle as the external tangents.

Proofs
The simplest proof employs a three-dimensional analogy. Let the three circles correspond to three spheres of different radii; the circles correspond to the equators that result from a plane passing through the centers of the spheres. The three spheres can be sandwiched uniquely between two planes. Each pair of spheres defines a cone that is externally tangent to both spheres, and the apex of this cone corresponds to the intersection point of the two external tangents, i.e., the external homothetic center (center of similarity). Since one line of the cone lies in each plane, the apex of each cone must lie in both planes, and hence somewhere on the line of intersection of the two planes. Therefore, the three external homothetic centers are collinear.
This proof is somewhat flawed, however, as it cannot account for cases where the smallest circle is located between the other two, nor for cases where one circle is fully contained by another. It can be made fully general by using cones of equal apex angle rather than spheres, creating three similar cones. Any pair of similar three-dimensional objects has a center of similarity, about which either object can be scaled to coincide with the other; these centers of similarity replace the intersection points of the external tangents in the previous proof. Further, the line connecting any two apex points must also pass through their center of similarity. The three apex points always define a plane in three dimensions, and all three centers of similarity must lie in the plane containing the circular bases. Hence, the three centers must lie on the intersection of the two planes, which must be a line in three dimensions.
Monge's theorem can also be proved by using Desargues' theorem. Another easy proof uses Menelaus' theorem, since the ratios can be calculated with the diameters of each circle, and these are eliminated by the cyclic products in Menelaus' theorem. Desargues' theorem also asserts that three points lie on a line, and has a similar proof using the same idea of considering it in three rather than two dimensions and writing the line as an intersection of two planes.

See also
Homothetic centers of circles
Problem of Apollonius

References
Bibliography

External links
Monge's Circle Theorem at MathWorld
Monge's theorem at cut-the-knot
Three Circles and Common Tangents at cut-the-knot
Euclidean plane geometry
Articles containing proofs
Theorems about circles
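As an illustrative aside (not part of the original article), the external homothetic center of two circles with centers c1, c2 and distinct radii r1, r2 lies at (r2·c1 − r1·c2)/(r2 − r1), so the theorem can be checked numerically; the Python sketch below does this for an arbitrary example, and the function name external_center is our own choice.

    def external_center(c1, r1, c2, r2):
        """External homothetic center of two circles with distinct radii."""
        x = (r2 * c1[0] - r1 * c2[0]) / (r2 - r1)
        y = (r2 * c1[1] - r1 * c2[1]) / (r2 - r1)
        return (x, y)

    # Three circles, none containing another: (center, radius) pairs.
    circles = [((0.0, 0.0), 1.0), ((6.0, 1.0), 2.0), ((2.0, 5.0), 3.0)]
    a, b, c = (external_center(*circles[i], *circles[j])
               for i, j in [(0, 1), (0, 2), (1, 2)])

    # Collinearity: the cross product of (b - a) and (c - a) must vanish.
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    assert abs(cross) < 1e-9, cross   # the three centers lie on one line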
Monge's theorem
[ "Mathematics" ]
719
[ "Articles containing proofs", "Planes (geometry)", "Euclidean plane geometry" ]
14,573,061
https://en.wikipedia.org/wiki/Megaprime
A megaprime is a prime number with at least one million decimal digits. Other terms for large primes include "titanic prime", coined by Samuel Yates in the 1980s for a prime with at least 1000 digits (of which the smallest is 10^999 + 7), and "gigantic prime" for a prime with at least 10,000 digits (of which the smallest is 10^9999 + 33603). There are currently 2,944 known megaprimes, each with more than 1,000,000 digits. The first to be found was the Mersenne prime 2^6972593 − 1 with 2,098,960 digits, discovered in 1999 by Nayan Hajratwala, a participant in the distributed computing project GIMPS. Hajratwala was awarded a Cooperative Computing Award from the Electronic Frontier Foundation for this achievement. Almost all primes are megaprimes, as the number of primes with fewer than one million digits is finite. However, the vast majority of known primes are not megaprimes. All numbers from 10^999999 through 10^999999 + 593498 are known to be composite, and there is a very high probability that 10^999999 + 593499, a strong probable prime for each of 8 different bases, is the smallest megaprime. Currently, the smallest number known to be a megaprime is 10^999999 + 308267×10^292000 + 1. The largest prime that is not a megaprime is currently unknown. At present, the largest prime number known not to be a megaprime is 10^999999 − 1022306×10^287000 − 1. There is a very high probability that 10^999999 − 172473 is the largest prime that is not a megaprime. See also List of largest known primes and probable primes, a list that includes the largest known megaprimes and probable megaprimes Largest known prime number References Prime numbers Large integers
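The digit counts quoted above can be reproduced with ordinary logarithms; a small sketch (Python, with values taken from the text):

```python
from math import log10, floor

def mersenne_digits(p):
    """Decimal digits of 2**p - 1; since 2**p is never a power of 10,
    subtracting 1 cannot change the digit count."""
    return floor(p * log10(2)) + 1

print(mersenne_digits(6972593))   # 2098960 digits, as stated for 2^6972593 - 1
print(len(str(10**999 + 7)))      # 1000 digits: the smallest titanic prime
```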
Megaprime
[ "Mathematics" ]
423
[ "Prime numbers", "Mathematical objects", "Numbers", "Number theory" ]
14,573,357
https://en.wikipedia.org/wiki/Efference%20copy
In physiology, an efference copy or efferent copy is an internal copy of an outflowing (efferent), movement-producing signal generated by an organism's motor system. It can be collated with the (reafferent) sensory input that results from the agent's movement, enabling a comparison of actual movement with desired movement, and a shielding of perception from particular self-induced effects on the sensory input to achieve perceptual stability. Together with internal models, efference copies can serve to enable the brain to predict the effects of an action. A term with an equivalent meaning but a different history is corollary discharge. Efference copies are important in enabling motor adaptation, such as enhancing gaze stability. They have a role in the perception of self and nonself electric fields in electric fish. They also underlie the phenomenon of tickling. Motor control Motor signals A motor signal from the central nervous system (CNS) to the periphery is called an efference, and a copy of this signal is called an efference copy. Sensory information coming from sensory receptors in the peripheral nervous system to the central nervous system is called afference. Correspondingly, nerves leading into the nervous system are afferent nerves and those leading out are efferent nerves. When an efferent signal is produced and sent to the motor system, it has been suggested that a copy of the signal, known as an efference copy, is created so that exafference (sensory signals generated from external stimuli in the environment) can be distinguished from reafference (sensory signals resulting from an animal's own actions). This efference copy, by providing the input to a forward internal model, is then used to generate the predicted sensory feedback that estimates the sensory consequences of a motor command. The actual sensory consequences of the motor command are then compared with the corollary discharge to inform the CNS about how well the actual action matched the expected one. Corollary discharge Corollary discharge is characterized as an efference copy of an action command used to inhibit any response to the self-generated sensory signal which would interfere with the execution of the motor task. The inhibitory commands originate at the same time as the motor command and target the sensory pathway that would report any reafference to higher levels of the CNS. This is distinct from the efference copy, since the corollary discharge is actually fed into the sensory pathway to cancel out the reafferent signals generated by the movement. Alternatively, corollary discharges briefly alter self-generated sensory responses to reduce self-induced desensitization or to help distinguish between self-generated and externally generated sensory information. History Steinbuch In 1811 Johann Georg Steinbuch (1770–1818) referred repeatedly to the problem of efference copy and reafference in his book "Beytrag zur Physiologie der Sinne" ("Contribution to the Physiology of the Senses"). After studying medicine, Steinbuch worked for a number of years as a lecturer at the University of Erlangen and thereafter as a physician in Heidenheim, Ulm, and Herrenberg (Württemberg, South Germany). As a young university teacher, he was particularly interested in the brain mechanisms which enable the perception of space and objects, but in later years his attention shifted to the more practical problems of clinical medicine. Together with Justinus Kerner he gave a very precise description in 1817 of the clinical symptoms of botulism.
In his book "Beytrag zur Physiologie der Sinne”, Steinbuch presented a very careful analysis of the tactile recognition of objects by the grasping hand. Hereby, he developed the hypothesis that the cerebral mechanisms controlling the movement of the hands interact within the brain with the afferent signal flow evoked in the mechanoreceptors while the grasping hand is moving across the surface of the object. The cerebral signals controlling the movement were called "Bewegidee" (motion idea). According to Steinbuch’s model, only by the interaction of the "Bewegidee" with the afferent signal flow did object recognition become possible. He illustrated his statements by a simple experiment: if an object passively activates the mechanoreceptors of the palm and fingers of a resting hand for sufficient sequences and time, object recognition is not achieved. When the hand, however, grasps actively, object recognition occurs within a few seconds. von Helmholtz The first person to propose the existence of efferent copies was the German physician and physicist Hermann von Helmholtz in the middle of the 19th century. He argued that the brain needed to create an efference copy for the motor commands that controlled eye muscles so as to aid the brain's determining the location of an object relative to the head. His argument used the experiment in which one gently presses on one's own eye. If this is done, one notices that the visual world seems to have "moved" as a result of this passive movement of the eyeball. In contrast, if the eyeball is actively moved by the eye muscles the world is perceived as still. The reasoning made is that with a passive movement of the eyeball, no efferent copies are made as with active movements that allow sensory changes to be anticipated and controlled for with the result in their absence the world appears to move. Sherrington In 1900, Charles Sherrington, the founder of modern ideas about motor control, rejected von Helmholtz ideas and argued that efference copies were not needed as muscles had their own sense of the movements they made. "The view [of von Helmholtz and his followers] which dispenses with peripheral organs and afferent nerves for the muscular sense has had powerful adherents . . . It supposes that during ... a willed movement the outgoing current of impulses from brain to muscle is accompanied by a 'sensation for innervation'. ... it "remains unproven". This resulted in the idea of efference copies being dropped for the next 75 years. Von Holst In 1950, Erich von Holst and Horst Mittelstaedt investigated how species are able to distinguish between exafference and reafference given a seemingly identical percept of the two. To explore this question, they rotated the head of a fly 180 degrees, effectively reversing the right and left edges of the retina and reversing the subject's subsequent reafferent signals. In this state, self-initiated movements of the fly would result in a perception that the world was also moving, rather than standing still as they would in a normal fly. After rotation of the eyes, the animal showed a reinforcement of the optokinetic response in the same direction as the moving visual input. Von Holst and Mittelstaedt interpreted their findings as evidence that corollary discharge (i.e. neural inhibition with active movement) could not have accounted for this observed change as this would have been expected to inhibit the optokinetic reaction. 
They concluded that an "Efferenzkopie" of the motor command was responsible for this reaction, given the persistence of the reafferent signal and the consequent discrepancy between expected and actual sensory signals, which reinforced the response rather than preventing it. Sperry The Nobel Prize winner Roger Wolcott Sperry argued for the basis of corollary discharges following his research on the optokinetic reflex. He is also regarded as the originator of the term "corollary discharge". Motor adaptation The Coriolis effect Efference copies allow the learning and correction of errors arising from self-generated Coriolis forces. During trunk rotational movements the CNS learns to anticipate Coriolis effects, mediated by the generation of an appropriate efference copy that can be compared to reafferent information. Gaze stability It has been proposed that efference copy has an important role in maintaining gaze stability with active head movement by augmenting the vestibulo-ocular reflex (aVOR) during dynamic visual acuity testing. Grip force Efference copy within an internal model allows us to adjust grip in parallel with a given load. In other words, a subject can grip a known load properly because the internal model predicts the load accurately and without delay. Flanagan and Wing tested whether an internal model is used to predict movement-dependent loads by observing grip force changes with known loads during arm movements. They found that even when subjects were given different known loads, grip force anticipated the load force. Even when the load force was suddenly changed, the grip force never lagged in its phase relationship with the load force, confirming that an internal model in the CNS was enabling the prediction. It has been suggested by Kawato that for gripping, the CNS uses a combination of the inverse and forward models. With the use of the efference copy, the internal model can predict a future hand trajectory, allowing the grip to be matched in parallel to the load of the known object. Tickling Experiments have been conducted wherein subjects' feet are tickled both by themselves and with a robotic arm controlled by their own arm movements. These experiments have shown that people find a self-produced tickling motion of the foot to be much less "tickly" than a tickling motion produced by an outside source. The researchers have postulated that this is because when a person sends a motor command to produce the tickling motion, the efference copy anticipates and cancels out the sensory outcome. This idea is further supported by evidence that a delay between the self-produced tickling motor command and the actual execution of this movement (mediated by a robotic arm) causes an increase in the perceived tickliness of the sensation. This shows that when the efference copy is incompatible with the afference, the sensory information is perceived as if it were exafference. Therefore, it is theorized that we cannot tickle ourselves because when the predicted sensory feedback (from the efference copy) matches the actual sensory feedback, the actual feedback is attenuated.
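A toy comparator makes this attenuation account concrete. In the sketch below (illustrative Python only; the unit gain and the scalar "signals" are invented, not drawn from any cited study), the forward model converts the efference copy into predicted feedback, and only the residual after subtraction reaches perception:

```python
# Toy comparator model: perceived intensity = actual feedback minus the
# prediction derived from the efference copy (all quantities hypothetical).

def forward_model(efference_copy, gain=1.0):
    """Predicted sensory feedback for a given efference copy."""
    return gain * efference_copy

def perceived(actual_feedback, efference_copy):
    prediction = forward_model(efference_copy)
    return actual_feedback - prediction   # residual that reaches perception

print(perceived(1.0, 1.0))   # self-generated touch: 0.0, fully attenuated
print(perceived(1.0, 0.0))   # externally generated touch: full intensity 1.0
print(perceived(1.0, 0.6))   # delayed/mismatched copy: residual feels "tickly"
```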
If the predicted sensory feedback does not match the actual sensory feedback, whether because of a delay (as in the mediation by the robotic arm) or because of external influences from the environment, the brain cannot predict the tickling motion on the body, and a more intense tickling sensation is perceived. Speech It has been argued that motor efference copies play an important role in speech production. Tian and Poeppel propose that a motor efference copy is used to produce a forward model of somatosensory estimation, which entails an estimation of the articulatory movement and position of the articulators as a result of planned motor action. A second (subsequent) auditory efference copy entails the estimation of auditory information as produced by the articulatory system in a second forward model. Both of these forward models can produce respective predictions and corollary discharge, which can in turn be used in comparisons with somatosensory and auditory feedback. Moreover, this system is thought by some to be the basis for inner speech, especially in relation to auditory verbal hallucinations. In the case of inner speech, the efference signal is not sent or is inhibited before action takes place, leaving only the efference copy and leading to the perception of inner speech or inner hearing. In the case of auditory verbal hallucinations, a breakdown along the efference copy and forward model route is thought to create a mismatch between what is expected and what is observed, leading to the experience that the speech is not produced by oneself. Recent studies suggest that an efference copy is generated even when an acoustic signal is produced at the press of a button. The differences in the ERP signature of the efference copy are pronounced enough that machine learning algorithms can distinguish between schizophrenia patients and healthy control subjects, for example. Efference copies also occur not only with spoken words but with inner language, the silent production of words. Mormyrid electric fish The mormyrid electric fish provides an example of corollary discharge in lower vertebrates. Specifically, the knollenorgan sensor (KS) is involved with electrocommunication, detecting the electric organ discharges (EOD) of other fish. Unless the reafference were somehow modulated, the KS would also detect self-generated EODs, which would interfere with the interpretation of the external EODs needed for communication between fish. However, these fish display corollary discharges that inhibit the ascending sensory pathway at the first CNS relay point. These corollary discharges are timed to arrive at the same time as the reafference from the KS so as to minimize the interference of self-produced EODs with the perception of external EODs and to optimize the duration of inhibition. See also Internal model (motor control) Corollary discharge theory References Further reading External links Peer-reviewed article in Scholarpedia on Corollary Discharge In Primate Vision by Robert H. Wurtz Central nervous system Motor control Tickling Motor cognition
Efference copy
[ "Biology" ]
2,750
[ "Behavior", "Motor control" ]
14,573,421
https://en.wikipedia.org/wiki/Traditional%20knowledge%20GIS
Traditional knowledge Geographic Information Systems (GIS) is a toolset of systems that uses data, techniques, and technologies designed to document and utilize local knowledge in communities around the world. Traditional knowledge is information that encompasses the experiences of a particular culture or society. Traditional knowledge GIS differ from ordinary cognitive maps in that they express environmental and spiritual relationships among real and conceptual entities. This toolset focuses on cultural preservation, land rights disputes, natural resource management, and economic development. Technical aspects Traditional knowledge GIS employs cartographic and database management techniques such as participatory GIS, map biographies, and historical mapping. Participatory GIS aspires to a mutually beneficial relationship between the governing and the governed by fostering public involvement in all aspects of a GIS. It is widely accepted that this technique is necessary for sound environmental and economic planning in developing areas. This method generates a sense of place in scientific analysis that incorporates sacred sites and traditional land use practices. Participatory GIS can be effective for local resource management and planning, but researchers doubt its efficacy as a tool in attaining land tenure or fighting legal battles because of lack of expertise among local individuals and lack of access to technology. Map biographies track the practices of local communities either for the sake of preservation or to argue for resource protection or land grants. GIS technologies are powerful in their ability to accommodate multimedia and multidimensional data sets, which allows for the recording and playing of oral histories and representations of abstract ecological knowledge. Historical mapping documents and analyzes events that are meaningful to a particular tradition or locale. Cultural and humanitarian benefits can be derived from including maps in the historical record of an area. Cultural preservation Cultural preservation is perhaps the principal application of a traditional knowledge GIS. As adherents to traditional lifestyles decline in population, a degree of urgency has developed around the collection of data and wisdom from aging local elders. A central feature of cultural preservation is language revitalization. Bilingual visual and auditory maps depict oral traditions and historical information in places of cultural significance at various scales and levels of detail. Researchers encounter significant obstacles to data acquisition due to the sensitive nature of much of the data sought for a traditional knowledge GIS, and locals may distrust the motives of outside consultants. Land rights and natural resource management Traditional knowledge GIS can influence debates over land rights and resource management in ecologically sensitive areas. Interests of local residents in these regions often conflict with those of migrant workers, state conservation units, and domestic and foreign mining or logging enterprises. GIS hardware and software are used to identify spatial trends in interpreting these conflicts. Economic development Economic development through traditional knowledge GIS is subject to local ownership over the systems and full access to relevant data and training. This situation is rare outside of industrialized nations, so little progress has been made in this field of research. 
Current issues and effectiveness Implementations of traditional knowledge GIS differ widely across geographies. Though developing nations utilize some forms of participatory GIS, communities there are less likely to gain access to expensive databases and cartographic methods than those in developed nations. The overall effectiveness of traditional knowledge GIS has not been determined conclusively. Advocates for traditional mapping point to successes in acquiring land titles, managing local databases, and creating new skill sets for local communities worldwide. Detractors cite cost, the need for specialized training, and cultural differences as reasons GIS may be inappropriate for these applications. Traditional knowledge GIS analyze the nature of political and social struggles that lead to competing resource claims. They are powerful tools for mediation and negotiation among coexisting social groups. No-cost or open-source traditional knowledge software The Nunaliit Atlas Framework was developed and is maintained by the Geomatics and Cartographic Research Centre at Carleton University. The focus of this software is to create community atlas projects. Commercial software The CEDAR tool has a number of modules focused on contact relationship management, consultation for development projects, heritage projects and GIS. This software is provided either as a hosted service or as a computer located in client offices. The LOUIS toolkit is a suite of tools for recording, managing and using traditional land use and traditional knowledge information. This software is provided as a hosted service with complementary desktop and mobile applications, including a mobile data collection application. See also Participatory 3D modelling (P3DM) Participatory GIS References Applications of geographic information systems Geographical technology Geographic information systems GIS
Traditional knowledge GIS
[ "Technology" ]
902
[ "Information systems", "Geographic information systems" ]
14,573,499
https://en.wikipedia.org/wiki/Gury%20Marchuk
Gury Ivanovich Marchuk (8 June 1925 – 24 March 2013) was a Soviet and Russian scientist in the fields of computational mathematics and the physics of the atmosphere. He was an academician (from 1968) and the president of the USSR Academy of Sciences from 1986 to 1991. Among his notable prizes are the USSR State Prize (1979), the Demidov Prize (2004), and the Lomonosov Gold Medal (2004). Marchuk was born in Orenburg Oblast, Russia. A member of the Communist Party of the Soviet Union from 1947, Academician Marchuk was elected to the Central Committee of the Party as a candidate member in 1976 and as a full member in 1981. He was elected a deputy of the Supreme Soviet of the Union of Soviet Socialist Republics in 1979. He was appointed to succeed Vladimir Kirillin as chairman of the State Committee for Science and Technology (GKNT) in 1980. Marchuk was a proponent of the Integrated Long-Term Programme (ILTP) of Cooperation in Science & Technology, which was established in 1987 as a scientific cooperative venture between India and the Soviet Union. The programme allowed the scientists of the two countries to collaboratively undertake research in areas as diverse as healthcare and lasers. Marchuk co-chaired the programme's Joint Council with Prof. C.N.R. Rao for 25 years and was made an honorary member of India's National Academy of Sciences. In 2002, the Government of India conferred the Padma Bhushan on him. Honours and awards Hero of Socialist Labour (1975) Honorary Citizen of Obninsk (1985) Four Orders of Lenin (1967, 1971, 1975, 1985) Keldysh Gold Medal — for his work "The development and creation of new methods of mathematical modeling" (1981) Karpinski International Prize (1988) Chebyshev Gold Medal — for outstanding performance in mathematics (1996) Lomonosov Gold Medal (Moscow State University, 2004) - for his outstanding contribution to the creation of new models and methods for solving problems in the physics of nuclear reactors, the physics of the atmosphere and ocean, and immunology Cavalier silver sign "Property of Siberia" Lenin Prize in Science (1961) Friedman Prize (1975) USSR State Prize (1979) State Prize of the Russian Federation (2000) Demidov Prize (2004) Honorary Doctorates of the University of Toulouse (1973), Charles University (Prague, 1978), Dresden University of Technology (1978), Technical University of Budapest (1978) Foreign Member of the Bulgarian Academy of Sciences (1977), German Academy of Sciences at Berlin (1977), Czechoslovak Academy of Sciences (1977), Polish Academy of Sciences (1988) Order of Merit for the Fatherland, 2nd and 4th classes Jubilee Medal "300 Years of the Russian Navy" Medal "For the Victory over Germany in the Great Patriotic War 1941–1945" Medal "For Valiant Labour in the Great Patriotic War 1941–1945" Commander of the Legion of Honour Order of Georgi Dimitrov Padma Bhushan (2002) Vilhelm Bjerknes Medal (2008) References External links Gury Marchuk — scientific works on the website Math-Net.Ru Scientific biography (in Russian)
1925 births 2013 deaths People from Orenburg Oblast Candidates of the Central Committee of the 25th Congress of the Communist Party of the Soviet Union Deputy heads of government of the Soviet Union Members of the Central Committee of the 26th Congress of the Communist Party of the Soviet Union Members of the Central Committee of the 27th Congress of the Communist Party of the Soviet Union Members of the Central Committee of the 28th Congress of the Communist Party of the Soviet Union Presidents of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Honorary members of the Russian Academy of Education Foreign members of the Bulgarian Academy of Sciences Foreign fellows of the Indian National Science Academy Members of the French Academy of Sciences Academic staff of the Moscow Institute of Physics and Technology Russian mathematicians Soviet mathematicians Soviet politicians Recipients of the Lomonosov Gold Medal Recipients of the Order of Lenin Recipients of the Order "For Merit to the Fatherland", 2nd class State Prize of the Russian Federation laureates Recipients of the USSR State Prize Commanders of the Legion of Honour Demidov Prize laureates Heroes of Socialist Labour Recipients of the Lenin Prize Recipients of the Padma Bhushan in science & engineering Members of the German Academy of Sciences at Berlin
Gury Marchuk
[ "Technology" ]
882
[ "Science and technology awards", "Recipients of the Lomonosov Gold Medal" ]
14,573,864
https://en.wikipedia.org/wiki/Adparticle
An adparticle is an atom, molecule, or cluster of atoms or molecules that lies on a crystal surface. The term is used in surface chemistry. The word is a contraction of "adsorbed particle". An adparticle that is a single atom may be referred to as an "adatom". References Surface science
Adparticle
[ "Physics", "Chemistry", "Materials_science" ]
71
[ "Physical chemistry stubs", "Condensed matter physics", "Surface science" ]
14,574,252
https://en.wikipedia.org/wiki/Petroleum-Gas%20University%20of%20Ploie%C8%99ti
Petroleum-Gas University of Ploiești (Universitatea Petrol-Gaze, UPG) is a public university in Ploiești, Romania. Founded in 1948 under the name of the Institute of Petroleum and Gas, in response to increasing industrialization in Romania and the lack of high-level education in the petroleum and gas fields, it quickly gained university status, changing its name to the current one in 1993 and expanding with new faculties and departments in the fields of economic sciences and the humanities. The UPG's academic structure includes 5 faculties: the Faculty of Petroleum and Gas Engineering, the Faculty of Mechanical and Electrical Engineering, the Faculty of Petroleum Technology and Petrochemistry, the Faculty of Economic Sciences, and the Faculty of Letters and Sciences. References External links Petroleum-Gas University of Ploiești Engineering universities and colleges in Romania Universities in Romania Ploiești Petroleum engineering schools Universities and colleges established in 1948 1948 establishments in Romania
Petroleum-Gas University of Ploiești
[ "Engineering" ]
190
[ "Petroleum engineering", "Petroleum engineering schools", "Engineering universities and colleges" ]
14,574,654
https://en.wikipedia.org/wiki/OPN1MW
Green-sensitive opsin is a protein that in humans is encoded by the OPN1MW gene. OPN1MW2 is a similar opsin. The OPN1MW gene provides instructions for making an opsin pigment that is more sensitive to light in the middle of the visible spectrum (yellow/green light). See also Opsin OPN1LW References Further reading External links GeneReviews/NIH/NCBI/UW entry on Red-Green Color Vision Defects G protein-coupled receptors Color vision
OPN1MW
[ "Chemistry" ]
113
[ "G protein-coupled receptors", "Signal transduction" ]
14,575,416
https://en.wikipedia.org/wiki/Nonanal
Nonanal is an organic compound with the chemical formula C9H18O. It is one of several isomers, all of which are colorless oils. The nonanals are classified as aldehydes. The linear nonanal, CH3(CH2)7CHO, is produced commercially by the hydroformylation of 1-octene. It is used as a fragrance. Mosquitoes Nonanal has been identified as a compound that attracts Culex mosquitoes. Nonanal acts synergistically with carbon dioxide in that regard. References Fatty aldehydes Alkanals
Nonanal
[ "Chemistry" ]
106
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
14,575,422
https://en.wikipedia.org/wiki/Penguinone
Penguinone is an organic compound with the molecular formula C10H14O. Its name comes from the fact that its two-dimensional molecular structure resembles a penguin. The suffix "-one" indicates that it is a ketone. The systematic name of the molecule is 3,4,4,5-tetramethylcyclohexa-2,5-dienone. Although it is a dienone and thus has the necessary structure for a dienone–phenol rearrangement, the methyl groups in positions 3 and 5 of the ring block the movement of the group at position 4, so even the action of trifluoroacetic acid will not cause transformation to a phenol. See also List of chemicals with unusual names NanoPutian Penguin diagram References Ketones Cyclohexadienes
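As a quick check on the formula, the molar mass follows directly from standard atomic weights; a small sketch in Python:

```python
# Molar mass of penguinone from its molecular formula C10H14O.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # standard atomic weights

def molar_mass(composition):
    """Sum of atomic masses weighted by atom counts."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

print(round(molar_mass({"C": 10, "H": 14, "O": 1}), 2))  # ~150.22 g/mol
```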
Penguinone
[ "Chemistry" ]
168
[ "Ketones", "Functional groups" ]
14,575,927
https://en.wikipedia.org/wiki/Lupus%20Research%20Alliance
The Lupus Research Alliance (LRA) is an American voluntary health organization based in New York City whose mission is to find better treatments for, and ultimately prevent and cure, systemic lupus erythematosus (SLE or lupus), a debilitating autoimmune disease, through supporting medical research. The organization was born from the merger of three organizations: the Lupus Research Institute (LRI) and the S.L.E. Lupus Foundation with the Alliance for Lupus Research (ALR), which was founded by Robert Wood "Woody" Johnson IV, a member of the Johnson & Johnson family and owner of the New York Jets. As of 2020 the LRA's cumulative research commitment was $220,000,000. History The Lupus Research Alliance (LRA) was founded as the Alliance for Lupus Research in 1999 by former chairperson Woody Johnson, a member of the founding family of Johnson & Johnson and owner of the New York Jets. The organization's fundraising efforts include committing money to lupus research and charity walk-a-thons under the "Walk with Us to Cure Lupus" program, which raises funds and public awareness of the disease. Research funding The hallmark of the LRA's operations is an emphasis on multidisciplinary science. Research funded by the LRA has included the 2008 International SLE Genetics Consortium (SLEGEN), which identified several genes associated with lupus. The Lupus Research Alliance has given more money to lupus research than any other non-governmental agency in the world. As of 2015 the LRA's cumulative research commitment was over $100,000,000, making it the largest lupus research organization and the only organization that funds lupus biomedical research internationally. One hundred percent of all donations from the public, including donations in support of the "Walk with Us to Cure Lupus" program, go directly to research programs because the LRA's board of directors funds all administrative and fundraising costs. The LRA uses a peer-review system to make all funding decisions. References External links Lupus Research Alliance official website Biomedical research foundations Lupus organizations Organizations established in 1999 Health charities in the United States Medical and health organizations based in New York City 1999 establishments in New York City
Lupus Research Alliance
[ "Engineering", "Biology" ]
457
[ "Biotechnology organizations", "Biomedical research foundations" ]
14,576,408
https://en.wikipedia.org/wiki/Radiative%20transfer%20equation%20and%20diffusion%20theory%20for%20photon%20transport%20in%20biological%20tissue
Photon transport in biological tissue can be equivalently modeled numerically with Monte Carlo simulations or analytically by the radiative transfer equation (RTE). However, the RTE is difficult to solve without introducing approximations. A common approximation summarized here is the diffusion approximation. Overall, solutions to the diffusion equation for photon transport are more computationally efficient, but less accurate, than Monte Carlo simulations. Definitions The RTE can mathematically model the transfer of energy as photons move inside a tissue. The flow of radiation energy through a small area element in the radiation field can be characterized by radiance $L(\vec{r},\hat{s},t)$ with units $\mathrm{W\,m^{-2}\,sr^{-1}}$. Radiance is defined as energy flow per unit normal area per unit solid angle per unit time. Here, $\vec{r}$ denotes position, $\hat{s}$ denotes the unit direction vector and $t$ denotes time (Figure 1). Several other important physical quantities are based on the definition of radiance: the fluence rate or intensity $\Phi(\vec{r},t)=\int_{4\pi}L(\vec{r},\hat{s},t)\,d\Omega$; the fluence $F(\vec{r})=\int_{-\infty}^{+\infty}\Phi(\vec{r},t)\,dt$; and the current density (energy flux) $\vec{J}(\vec{r},t)=\int_{4\pi}\hat{s}\,L(\vec{r},\hat{s},t)\,d\Omega$. The last of these is the vector counterpart of fluence rate, pointing in the prevalent direction of energy flow. Radiative transfer equation The RTE is a differential equation describing radiance $L(\vec{r},\hat{s},t)$. It can be derived via conservation of energy. Briefly, the RTE states that a beam of light loses energy through divergence and extinction (including both absorption and scattering away from the beam) and gains energy from light sources in the medium and scattering directed towards the beam. Coherence, polarization and non-linearity are neglected. Optical properties such as refractive index $n$, absorption coefficient $\mu_a$, scattering coefficient $\mu_s$, and scattering anisotropy $g$ are taken as time-invariant but may vary spatially. Scattering is assumed to be elastic. The RTE (Boltzmann equation) is thus written as: $$\frac{1}{c}\frac{\partial L(\vec{r},\hat{s},t)}{\partial t}+\hat{s}\cdot\nabla L(\vec{r},\hat{s},t)+\mu_t L(\vec{r},\hat{s},t)=\mu_s\int_{4\pi}L(\vec{r},\hat{s}',t)\,P(\hat{s}'\cdot\hat{s})\,d\Omega'+S(\vec{r},\hat{s},t)$$ where $c$ is the speed of light in the tissue, as determined by the relative refractive index; $\mu_t=\mu_a+\mu_s$ is the extinction coefficient; and $P(\hat{s}'\cdot\hat{s})$ is the phase function, representing the probability of light with propagation direction $\hat{s}'$ being scattered into solid angle $d\Omega$ around $\hat{s}$. In most cases, the phase function depends only on the angle between the scattered and incident directions, i.e. $P(\hat{s}',\hat{s})=P(\hat{s}'\cdot\hat{s})$. The scattering anisotropy can be expressed as $g=\int_{4\pi}(\hat{s}'\cdot\hat{s})\,P(\hat{s}'\cdot\hat{s})\,d\Omega$, and $S(\vec{r},\hat{s},t)$ describes the light source. Diffusion theory Assumptions In the RTE, six different independent variables define the radiance at any spatial and temporal point ($x$, $y$, and $z$ from $\vec{r}$, polar angle $\theta$ and azimuthal angle $\varphi$ from $\hat{s}$, and $t$). By making appropriate assumptions about the behavior of photons in a scattering medium, the number of independent variables can be reduced. These assumptions lead to the diffusion theory (and diffusion equation) for photon transport. Two assumptions permit the application of diffusion theory to the RTE: Relative to scattering events, there are very few absorption events. Likewise, after numerous scattering events, few absorption events will have occurred, and the radiance will become nearly isotropic. This assumption is sometimes called directional broadening. In a primarily scattering medium, the time for substantial current density change is much longer than the time to traverse one transport mean free path. Thus, over one transport mean free path, the fractional change in current density is much less than unity. This property is sometimes called temporal broadening. Both of these assumptions require a high-albedo (predominantly scattering) medium. The RTE in the diffusion approximation Radiance can be expanded on a basis set of spherical harmonics $Y_{n,m}$.
In diffusion theory, radiance is taken to be largely isotropic, so only the isotropic and first-order anisotropic terms are used: $L(\vec{r},\hat{s},t)\approx\sum_{n=0}^{1}\sum_{m=-n}^{n}L_{n,m}(\vec{r},t)\,Y_{n,m}(\hat{s})$, where $L_{n,m}$ are the expansion coefficients. Radiance is expressed with 4 terms: one for n = 0 (the isotropic term) and 3 terms for n = 1 (the anisotropic terms). Using properties of spherical harmonics and the definitions of fluence rate $\Phi(\vec{r},t)$ and current density $\vec{J}(\vec{r},t)$, the isotropic and anisotropic terms can respectively be expressed as follows: $L_{0,0}(\vec{r},t)\,Y_{0,0}(\hat{s})=\frac{\Phi(\vec{r},t)}{4\pi}$ and $\sum_{m=-1}^{1}L_{1,m}(\vec{r},t)\,Y_{1,m}(\hat{s})=\frac{3}{4\pi}\,\vec{J}(\vec{r},t)\cdot\hat{s}$. Hence, we can approximate radiance as $L(\vec{r},\hat{s},t)\approx\frac{1}{4\pi}\Phi(\vec{r},t)+\frac{3}{4\pi}\,\vec{J}(\vec{r},t)\cdot\hat{s}$. Substituting the above expression for radiance, the RTE can be respectively rewritten in scalar and vector forms as follows (the scattering term of the RTE is integrated over the complete solid angle; for the vector form, the RTE is multiplied by direction $\hat{s}$ before evaluation): $$\frac{1}{c}\frac{\partial\Phi(\vec{r},t)}{\partial t}+\mu_a\Phi(\vec{r},t)+\nabla\cdot\vec{J}(\vec{r},t)=S(\vec{r},t)$$ $$\frac{1}{c}\frac{\partial\vec{J}(\vec{r},t)}{\partial t}+(\mu_a+\mu_s')\,\vec{J}(\vec{r},t)+\frac{1}{3}\nabla\Phi(\vec{r},t)=0$$ The diffusion approximation is limited to systems where reduced scattering coefficients are much larger than their absorption coefficients and which have a minimum layer thickness of the order of a few transport mean free paths. The diffusion equation Using the second assumption of diffusion theory, we note that the fractional change in current density $\vec{J}$ over one transport mean free path is negligible. The vector representation of the diffusion theory RTE then reduces to Fick's law $\vec{J}(\vec{r},t)=-\frac{\nabla\Phi(\vec{r},t)}{3(\mu_a+\mu_s')}=-D\,\nabla\Phi(\vec{r},t)$, which defines current density in terms of the gradient of fluence rate. Substituting Fick's law into the scalar representation of the RTE gives the diffusion equation: $$\frac{1}{c}\frac{\partial\Phi(\vec{r},t)}{\partial t}+\mu_a\Phi(\vec{r},t)-\nabla\cdot\left(D\,\nabla\Phi(\vec{r},t)\right)=S(\vec{r},t)$$ Here $D=\frac{1}{3(\mu_a+\mu_s')}$ is the diffusion coefficient and $\mu_s'=(1-g)\,\mu_s$ is the reduced scattering coefficient. Notably, there is no explicit dependence on the scattering coefficient in the diffusion equation. Instead, only the reduced scattering coefficient appears in the expression for $D$. This leads to an important relationship; diffusion is unaffected if the anisotropy of the scattering medium is changed while the reduced scattering coefficient stays constant. Solutions to the diffusion equation For various configurations of boundaries (e.g. layers of tissue) and light sources, the diffusion equation may be solved by applying appropriate boundary conditions and defining the source term $S(\vec{r},t)$ as the situation demands. Point sources in infinite homogeneous media A solution to the diffusion equation for the simple case of a short-pulsed point source in an infinite homogeneous medium is presented in this section. The source term in the diffusion equation becomes $S(\vec{r},t)=\delta(\vec{r}-\vec{r}')\,\delta(t-t')$, where $\vec{r}$ is the position at which fluence rate is measured and $\vec{r}'$ is the position of the source. The pulse peaks at time $t'$. The diffusion equation is solved for fluence rate to yield the Green function for the diffusion equation: $$\Phi(\vec{r},t;\vec{r}',t')=\frac{c}{\left[4\pi Dc(t-t')\right]^{3/2}}\exp\!\left(-\frac{|\vec{r}-\vec{r}'|^2}{4Dc(t-t')}\right)\exp\!\left(-\mu_a c(t-t')\right)$$ The term $\exp(-\mu_a c(t-t'))$ represents the exponential decay in fluence rate due to absorption in accordance with Beer's law. The other terms represent broadening due to scattering. Given the above solution, an arbitrary source can be characterized as a superposition of short-pulsed point sources. Taking time variation out of the diffusion equation gives the following for a time-independent point source $S(\vec{r})=\delta(\vec{r})$: $$\Phi(\vec{r})=\frac{1}{4\pi Dr}\exp(-\mu_{\mathrm{eff}}\,r)$$ where $\mu_{\mathrm{eff}}=\sqrt{3\mu_a(\mu_a+\mu_s')}$ is the effective attenuation coefficient and indicates the rate of spatial decay in fluence.
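For a feel for the numbers, the time-independent point-source solution above is straightforward to evaluate; the sketch below uses tissue-like coefficients that are illustrative assumptions, not values taken from this article:

```python
# Steady-state diffusion solution for a point source in an infinite medium.
from math import exp, pi, sqrt

mu_a = 0.1          # absorption coefficient, 1/cm (assumed, tissue-like)
mu_s_prime = 10.0   # reduced scattering coefficient, 1/cm (assumed)
D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient, cm
mu_eff = sqrt(mu_a / D)                 # = sqrt(3*mu_a*(mu_a + mu_s')), 1/cm

def fluence_rate(r, power=1.0):
    """Fluence rate at distance r (cm) from a point source of given power."""
    return power * exp(-mu_eff * r) / (4.0 * pi * D * r)

for r in (0.5, 1.0, 2.0):
    print(f"r = {r:.1f} cm: phi = {fluence_rate(r):.4e}")
```

Note how the high-albedo assumption is reflected in the parameters: the reduced scattering coefficient dominates the absorption coefficient by two orders of magnitude.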
Boundary conditions Fluence rate at a boundary Consideration of boundary conditions permits use of the diffusion equation to characterize light propagation in media of limited size (where interfaces between the medium and the ambient environment must be considered). To begin to address a boundary, one can consider what happens when photons in the medium reach a boundary (i.e. a surface). The direction-integrated radiance at the boundary and directed into the medium is equal to the direction-integrated radiance at the boundary and directed out of the medium multiplied by the reflectance $R_F$: $$\int_{\hat{s}\cdot\hat{n}<0}L(\vec{r},\hat{s},t)\,\hat{s}\cdot\hat{n}\,d\Omega=R_F\int_{\hat{s}\cdot\hat{n}>0}L(\vec{r},\hat{s},t)\,\hat{s}\cdot\hat{n}\,d\Omega$$ where $\hat{n}$ is normal to and pointing away from the boundary. The diffusion approximation gives an expression for radiance $L$ in terms of fluence rate $\Phi$ and current density $\vec{J}$. Evaluating the above integrals after substitution gives a relation of the form $\Phi(\vec{r},t)=2A_F\,\hat{n}\cdot\vec{J}(\vec{r},t)$, with $A_F=\frac{1+R_{\mathrm{eff}}}{1-R_{\mathrm{eff}}}$ and $R_{\mathrm{eff}}$ the effective reflection coefficient. Substituting Fick's law ($\vec{J}=-D\nabla\Phi$) gives, with $z$ the distance from the boundary measured into the medium, $\Phi=2A_F D\,\frac{\partial\Phi}{\partial z}$ at $z=0$. The extrapolated boundary It is desirable to identify a zero-fluence boundary. However, the fluence rate at a physical boundary is, in general, not zero. An extrapolated boundary, at $z=-z_b$ with $z_b=2A_FD$, for which fluence rate is zero, can be determined to establish image sources. Using a first-order Taylor series approximation, $\Phi(-z_b)\approx\Phi(0)-z_b\frac{\partial\Phi}{\partial z}\big|_{z=0}$, which evaluates to zero since $\Phi(0)=2A_FD\,\frac{\partial\Phi}{\partial z}\big|_{z=0}$. Thus, by definition, $z_b$ must be $2A_FD$ as defined above. Notably, when the index of refraction is the same on both sides of the boundary, $R_{\mathrm{eff}}$ is zero and the extrapolated boundary is at $z_b=2D$. Pencil beam normally incident on a semi-infinite medium Using boundary conditions, one may approximately characterize diffuse reflectance for a pencil beam normally incident on a semi-infinite medium. The beam will be represented as two point sources in an infinite medium as follows (Figure 2): Set scattering anisotropy $g_2=0$ for the scattering medium and set the new scattering coefficient $\mu_{s2}$ to the original $\mu_{s1}$ multiplied by $(1-g_1)$, where $g_1$ is the original scattering anisotropy. Convert the pencil beam into an isotropic point source at a depth of one transport mean free path $l'=1/(\mu_a+\mu_s')$ below the surface, with power equal to the transport albedo $a'=\mu_s'/(\mu_a+\mu_s')$. Implement the extrapolated boundary condition by adding an image source of opposite sign above the surface at $z=-(l'+2z_b)$. The two point sources can be characterized as point sources in an infinite medium via $\Phi(\vec{r})=\frac{a'}{4\pi D\rho}\exp(-\mu_{\mathrm{eff}}\,\rho)$, where $\rho$ is the distance from the observation point to the source location in cylindrical coordinates. The linear combination of the fluence rate contributions from the source and its image is $$\Phi(r,z)=\frac{a'}{4\pi D}\left[\frac{\exp(-\mu_{\mathrm{eff}}\,\rho_1)}{\rho_1}-\frac{\exp(-\mu_{\mathrm{eff}}\,\rho_2)}{\rho_2}\right]$$ This can be used to get the diffuse reflectance $R_d$ via Fick's law, where $\rho_1$ is the distance from the observation point to the source at depth $l'$ and $\rho_2$ is the distance from the observation point to the image source at $-(l'+2z_b)$. Properties of diffusion equation Scaling Let $G(\vec{r},t)$ be the Green function solution to the diffusion equation for a homogeneous medium of optical properties $\mu_a$, $\mu_s'$; then the Green function solution for a homogeneous medium which differs from the former only by optical properties $\mu_{a2}$, $\mu_{s2}'$, such that $\mu_{a2}=a\,\mu_a$ and $\mu_{s2}'=a\,\mu_s'$, can be obtained with the following rescaling: $$G_2(\vec{r},t)=a^3\,G(a\vec{r},\,at)$$ where $a$ is the positive scale factor applied to both position and time. Such a property can also be extended to the radiance in the more general framework of the RTE, by substituting the transport coefficients $\mu_a$, $\mu_s'$ with the extinction coefficients $\mu_a$, $\mu_s$. The usefulness of the property resides in taking the results obtained for a given geometry and set of optical properties, typical of a lab scale setting, rescaling them and extending them to contexts in which it would be complicated to perform measurements due to the sheer extension or inaccessibility. Dependence on absorption Let $G_0(\vec{r},t)$ be the Green function solution to the diffusion equation for a non-absorbing homogeneous medium. Then, the Green function solution for the medium when its absorption coefficient is $\mu_a$ can be obtained as: $$G_{\mu_a}(\vec{r},t)=G_0(\vec{r},t)\,e^{-\mu_a c t}$$ Again, the same property also holds for radiance within the RTE. Diffusion theory solutions vs. Monte Carlo simulations Monte Carlo simulations of photon transport, though time consuming, will accurately predict photon behavior in a scattering medium.
The assumptions involved in characterizing photon behavior with the diffusion equation generate inaccuracies. Generally, the diffusion approximation is less accurate as the absorption coefficient $\mu_a$ increases and the scattering coefficient $\mu_s$ decreases. For a photon beam incident on a medium of limited depth, error due to the diffusion approximation is most prominent within one transport mean free path of the location of photon incidence (where radiance is not yet isotropic) (Figure 3). Among the steps in describing a pencil beam incident on a semi-infinite medium with the diffusion equation, converting the medium from anisotropic to isotropic (step 1) (Figure 4) and converting the beam to a source (step 2) (Figure 5) generate more error than converting from a single source to a pair of image sources (step 3) (Figure 6). Step 2 generates the most significant error. See also Monte Carlo method for photon transport Radiative transfer References Further reading Scattering, absorption and radiative transfer (optics)
Radiative transfer equation and diffusion theory for photon transport in biological tissue
[ "Chemistry" ]
2,229
[ "Scattering", " absorption and radiative transfer (optics)" ]
14,576,735
https://en.wikipedia.org/wiki/Virtual%20design%20and%20construction
Virtual design and construction (VDC) is the management of integrated multi-disciplinary performance models of design–construction projects, including the product (facilities), work processes, and organization of the design – construction – operation team to support explicit and public business objectives. This is usually achieved by creating a digital twin of the project, in which to manage the information. The theoretical basis of VDC includes: Engineering modeling methods: product, organization, process Analysis methods – model-based design: including quantities, schedule, cost, 4D interactions, and process risks; these are termed building information modeling (BIM) tools Visualization methods Business metrics – within business analytics – and a focus on strategic management Economic impact analysis, i.e., models of both the cost and value of capital investments BIM managed project "Virtual design and construction BIMs are virtual because they show computer-based descriptions of the project. The BIM project model emphasizes those aspects of the project that can be designed and managed, i.e., the product (typically a building or plant [and infrastructure]), the organization that will define, design, construct, and operate it, and the process the organization teams will follow, that is, the product–organization–process or POP. These models are logically integrated in the sense that they all can access shared data, and if a user highlights or changes an aspect of one, the integrated models can highlight or change the dependent aspects of related models. The models are multi-disciplinary in the sense that they represent the architect, engineering, construction (AEC), and owner of the project, as well as relevant sub-disciplines. The models are performance models in the sense that they predict some aspects of project performance, track many that are relevant, and can show predicted and measured performance in relationship to stated project performance objectives. Some companies now practice the first steps of BIM modeling, and they consistently find that they improve business performance by doing so." Companies are also now considering developing BIMs at various levels of detail, since depending on the application of BIM, more or less detail is needed, and there is varying modeling effort associated with generating building information models at different levels of detail.
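A minimal data-model sketch suggests how the POP decomposition described above might be held in software. Everything here (class names, fields, and the example records) is hypothetical and not taken from any particular VDC tool:

```python
# Hypothetical POP (product-organization-process) record with cross-links,
# illustrating the "logically integrated" models described above.
from dataclasses import dataclass, field

@dataclass
class Task:                       # process element (scope plus time, i.e. 4D)
    name: str
    duration_days: int
    product_ids: list = field(default_factory=list)  # links to product model
    responsible: str = ""                            # link to organization

@dataclass
class POPModel:
    product: dict      # element id -> description
    organization: dict # team -> discipline
    process: list      # ordered Tasks

model = POPModel(
    product={"W-01": "exterior wall, level 1"},
    organization={"ACME Concrete": "structural subcontractor"},
    process=[Task("pour wall W-01", 3, ["W-01"], "ACME Concrete")],
)
# A change to one product element can be traced to dependent tasks and teams:
print([t.name for t in model.process if "W-01" in t.product_ids])
```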
Methodologies underpinning BIM Advances in construction engineering began with the ten volumes on architecture completed by Vitruvius, a 1st-century BC Roman. Vitruvius laid the key and lasting foundation for the study of construction. A principle of construction is the use of an applied ontology based on an upper ontology. In practice, these ontologies take the form of breakdown structures such as the work breakdown structure. Usually breakdown structures form metadata to represent a construction activity; there are notable cases at exceptionally large construction companies where activities are simply numbered. In practice, an ontology approach requires a semantic integration approach to construction data so as to capture the present status of construction activities (i.e., the project). The research that forms virtual design and construction (VDC) is based on scientific evidence and validation measured against a best theory, as opposed to a best practice. This approach, pioneered by Dr. Kunz, was a departure from earlier construction engineering methodologies that focused on studies of best practices. The scientific evidence method requires formulating a hypothesis and then testing that hypothesis to failure so as to validate it. A range of scientific methodologies have proven useful in construction engineering research, in both qualitative research and quantitative research. Because construction is difficult to replicate in a controlled setting, the case-based reasoning, case study and action research methodologies prevail. The power of a method is important to report with results; the case study is often broad and the action research is often focused. A core concept in VDC is spacetime dimensions. There are four dimensions: three space dimensions and a fourth, time. There are additional dimensions of cost and quality, but the core is formed by these four. The four dimensions were first understood by Vitruvius through the importance of perspective (i.e., 3D) and time (i.e., 4D). Prior to computing, the focus was on the fourth dimension, time. In practice, time is the focus of the critical path method. With advances in computing, the representation of the three dimensions of space has increased. The merging of space and the above-discussed ontology formed the information model, known in the construction engineering field as building information modeling. The combination of space and time in practice is shown by the linear scheduling method and, in close relation, the 4D model. Computing brought with it the need to align with software developers. Previously, pencil and paper were forgiving of the mixing of methods from different schools of thought. Software is not as forgiving, and mixing software requires interoperability as an explicit goal. This forms the field of interoperability research. The practical application is demonstrated by the Industry Foundation Classes. Today, the most compelling advances in VDC are in computer vision, artificial intelligence, and the architecture of transmission (AoT), an object-oriented project lifecycle management process, which acts as a counterpoint to commissioned IoT technologies. An important application of VDC is in the workzone. This is where the construction activities reside, and the workforce is a core component. To create an educated workforce with the technical know-how to use the technology tools now available, VDC includes the development of advanced vocational education topics. See also Construction management Construction engineering List of project management topics Research centers Center for Integrated Facility Engineering (CIFE), Stanford University mosaic, Carnegie Mellon University BIM at UTexas Field Systems and Construction Automation Laboratory (FSCAL), UTexas Construction Information Technology Laboratory (CITL), Georgia Institute of Technology RAPIDS Laboratory, Georgia Institute of Technology References Civil engineering Building engineering Computer-aided design Building information modeling Data modeling Construction management
Virtual design and construction
[ "Engineering" ]
1,198
[ "Computer-aided design", "Design engineering", "Building engineering", "Data modeling", "Construction", "Data engineering", "Civil engineering", "Building information modeling", "Construction management", "Architecture" ]
14,577,528
https://en.wikipedia.org/wiki/Eledoisin
Eledoisin is an undecapeptide of mollusk origin, belonging to the tachykinin family of neuropeptides. It was first isolated from the posterior salivary glands of two mollusk species, Eledone moschata and Eledone aldrovandi, which belong to the octopod order of Cephalopoda. Other tachykinins from nonmammalian sources include kassinin and physalaemin. The mammalian tachykinins substance P, NKA, and NKB have similar effects to the tachykinins of nonmammals and have been more widely studied and characterized. These peptides exhibit a wide and complex spectrum of pharmacological and physiological activities, such as vasodilation, hypotension, and stimulation of extravascular smooth muscle. Eledoisin has the amino acid sequence pGlu-Pro-Ser-Lys-Asp-Ala-Phe-Ile-Gly-Leu-Met-NH2 (qPSKDAFIGLM-NH2), where pGlu and q stand for pyroglutamic acid. Like all tachykinin peptides, eledoisin shares the same consensus C-terminal sequence, that is, Phe-Xxx-Gly-Leu-Met-NH2. The invariant "Phe7" residue is probably required for receptor binding. "Xxx" is either an aromatic (phenylalanine, tyrosine) or a branched aliphatic (valine, isoleucine) side chain and is thought to be important in receptor selectivity. This common region, often referred to as the "message domain," is believed to be responsible for activating the receptor. The divergent N-terminal region, or the "address domain," varies in amino-acid sequence and length and is believed to play a role in determining the receptor subtype specificity. References Marine neurotoxins Neuropeptides Neurotransmitters Octopus toxins Hendecapeptides
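The consensus C-terminus lends itself to a simple pattern check. The sketch below (Python; one-letter residue codes, with the C-terminal amide written as a literal "-NH2" tag, both conventions assumed for illustration) tests a sequence against Phe-Xxx-Gly-Leu-Met-NH2 with Xxx restricted to the aromatic or branched aliphatic residues named above:

```python
import re

# F = Phe, then Xxx in {F, Y, V, I}, then Gly-Leu-Met and the amide tag.
CONSENSUS = re.compile(r"F[FYVI]GLM-NH2$")

eledoisin = "qPSKDAFIGLM-NH2"   # q = pyroglutamate, as written in the text
print(bool(CONSENSUS.search(eledoisin)))   # True: ends in F-I-G-L-M-NH2
```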
Eledoisin
[ "Chemistry" ]
427
[ "Neurochemistry", "Neurotransmitters" ]
14,577,679
https://en.wikipedia.org/wiki/WVLC%20%28paper%29
WVLC stands for "White Vat Lined Chip" (a.k.a. white vat) and is a type of chipboard with white sides. It is used when a white interior is desired. A good example would be a white shoe box. See also White Lined Chipboard References Paper products
WVLC (paper)
[ "Physics" ]
64
[ "Materials stubs", "Materials", "Matter" ]
14,578,074
https://en.wikipedia.org/wiki/Myocardial%20depressant%20factor
Myocardial depressant factor (MDF) or Myocardial Toxic factor (MTF) is a low-molecular-weight peptide released from the pancreas into the blood in mammals during various shock states. MDF is a significant mediator of shock pathophysiology, reducing myocardial contractility, constricting splanchnic arteries and impairing phagocytosis by the reticuloendothelial system. Survival can be improved by preventing its release or blocking its activity, for example using glucocorticoids, prostaglandins, aprotinin, captopril, imidazole or lidocaine. References Peptides Cardiology
Myocardial depressant factor
[ "Chemistry" ]
147
[ "Biomolecules by chemical classification", "Molecular biology stubs", "Peptides", "Molecular biology" ]
14,578,596
https://en.wikipedia.org/wiki/French%20flag%20model
The French flag model is a conceptual definition of a morphogen, described by Lewis Wolpert in the 1960s. A morphogen is defined as a signaling molecule that acts directly on cells (not through serial induction) to produce specific cellular responses dependent on morphogen concentration. During early development, morphogen gradients generate different cell types in distinct spatial order. French flag patterning is often found in combination with others: vertebrate limb development is one of the many phenotypes exhibiting French flag patterning overlapped with a complementary pattern (in this case a Turing pattern). Overview In the French flag model, the French flag is used to represent the effect of a morphogen on cell differentiation: a morphogen affects cell states based on concentration, and these states are represented by the different colors of the French flag: high concentrations activate a "blue" gene, lower concentrations activate a "white" gene, with "red" serving as the default state in cells below the necessary concentration threshold. The French flag model was championed by the leading Drosophila biologist Peter Lawrence. Christiane Nüsslein-Volhard identified the first morphogen, Bicoid, one of the transcription factors present in a gradient in the Drosophila syncytial embryo. Two labs, that of Gary Struhl and that of Stephen Cohen, then demonstrated that a secreted signaling protein, Decapentaplegic (the Drosophila homologue of transforming growth factor beta), acted as a morphogen during later stages of Drosophila development. The substance governs the pattern of tissue development and, in particular, the positions of the various specialized cell types within a tissue. It spreads from a localized source and forms a concentration gradient across a developing tissue. Well-known morphogens include: decapentaplegic/transforming growth factor beta, Hedgehog/Sonic hedgehog, Wingless/Wnt, epidermal growth factor, and fibroblast growth factor. Some of the earliest and best-studied morphogens are transcription factors that diffuse within early Drosophila melanogaster (fruit fly) embryos. However, most morphogens are secreted proteins that signal between cells. Morphogens are defined conceptually, not chemically, so simple chemicals such as retinoic acid may also act as morphogens.
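The threshold logic of the model is compact enough to state in code. In the sketch below (Python; the exponential gradient shape and the two threshold values are illustrative assumptions, not empirical parameters), each cell reads the local concentration and adopts the blue, white, or red state:

```python
import math

def morphogen(x, length=1.0, decay=5.0):
    """Illustrative exponential gradient from a source at x = 0."""
    return math.exp(-decay * x / length)

def fate(concentration, t_blue=0.5, t_white=0.2):
    """Concentration thresholds select the 'gene' each cell activates."""
    if concentration >= t_blue:
        return "blue"
    if concentration >= t_white:
        return "white"
    return "red"        # default state below both thresholds

cells = [fate(morphogen(x / 10)) for x in range(10)]
print(cells)   # blue nearest the source, then white, then red
```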
Diffusion is temperature dependent yet development can proceed normally over a wide variety of temperatures in animals whose eggs develop external to the mother. Diffusion gradients do not scale well yet embryos come in variety of sizes. Diffusion gradients follow the superposition principle. This means that a gradient of one substance in one direction, and a gradient of the same substance in a perpendicular direction, result in a single one-dimensional gradient in the diagonal direction, not a two dimensional gradient. Developmental biologists frequently invoke a two dimensional gradient even though a two dimensional gradient system requires two morphogen gradients with two different sources and sinks placed approximately perpendicular to one another. Fluctuations in gradients always occur, especially at the low concentrations commonly found during embryogenesis, making a specific response by an individual cell to particular concentration thresholds problematic. Each cell has to be able to “read” the morphogen concentration accurately, otherwise boundaries between tissues become ragged. Yet such ragged boundaries are rare in development. References External links Interactive Fly Flybase Artificial Life model of multicellular morphogenesis with autonomously generated gradients for positional information OMIM PubMed Morphogens
French flag model
[ "Biology" ]
929
[ "Morphogens", "Induced stem cells" ]
14,578,984
https://en.wikipedia.org/wiki/Frigorific%20mixture
A frigorific mixture is a mixture of two or more phases in a chemical system that, so long as none of the phases are completely consumed during equilibration, reaches an equilibrium temperature that is independent of the starting temperature of the phases before they are mixed. The equilibrium temperature is also independent of the quantities of the phases used, as long as sufficient amounts of each are present to reach equilibrium without completely consuming one or more of them. Ice Liquid water and ice, for example, form a frigorific mixture at 0 °C or 32 °F. This mixture was once used to define 0 °C. That temperature is now defined as the triple point of water with well-defined isotope ratios. A mixture of ammonium chloride, water, and ice forms a frigorific mixture at about −17.8 °C or 0 °F. This mixture was once used to define 0 °F. Explanation The existence of frigorific mixtures can be viewed as a consequence of the Gibbs phase rule, which describes the relationship at equilibrium between the number of components, the number of coexisting phases, and the number of degrees of freedom permitted by the conditions of heterogeneous equilibrium. Specifically, at constant atmospheric pressure, in a system containing C linearly independent chemical components, if C + 1 phases are specified to be present in equilibrium, then the system is fully determined (there are no degrees of freedom). That is, the temperature and the compositions of all phases are determined. Thus, for example, in the chemical system H2O–NaCl, which has two components, the simultaneous presence of the three phases liquid, ice, and hydrohalite can exist at atmospheric pressure only at the unique temperature of −21.2 °C. The approach to equilibrium of a frigorific mixture involves spontaneous temperature change driven by the conversion of latent heat into sensible heat as the phase proportions adjust, lowering the thermodynamic potential as equilibrium is approached. Other examples Many other frigorific mixtures are known, typically combinations of a salt (or an acid or base) with water and ice. Uses A frigorific mixture may be used to obtain a liquid medium that has a reproducible temperature below ambient temperature. Such mixtures were used to calibrate thermometers. In chemistry, a cooling bath may be used to control the temperature of a strongly exothermic reaction. A frigorific mixture may be used as an alternative to mechanical refrigeration. For example, to fit two machined metal parts together, one part is placed in a frigorific mixture, causing it to contract so that it may be easily inserted into the uncooled second part; on warming, the two parts are held together tightly. Another example is the Piper process, used in the second half of the 19th century for freezing and cold storage of fish. Limitations of acid–base slushes Mixtures relying on the use of acid–base slushes are of limited practical value beyond producing melting-point references, as the enthalpy of dissolution for the melting point depressant is often significantly greater (e.g. ΔH = −57.61 kJ/mol for KOH) than the enthalpy of fusion for water itself (ΔH = 6.02 kJ/mol); for reference, ΔH for the dissolution of NaCl is +3.88 kJ/mol. This results in little to no net cooling capacity at the desired temperatures and an end mixture temperature that is higher than it was to begin with. The low temperatures claimed for such mixtures are produced by first precooling and then combining each subsequent mixture while it is surrounded by a mixture at the previous temperature increment; the mixtures must be 'stacked' within one another.
Such acid–base slushes are corrosive and therefore present handling problems. Additionally, they cannot be replenished easily, as the volume of the mixture increases with each addition of refrigerant; the container (be it a bath or a cold finger) will eventually need emptying and refilling to prevent it from overflowing. This makes these mixtures largely unsuitable for use in synthetic applications, as there will be no cooling surface present during the emptying of the container. See also Cooling bath References Thermodynamics Physical chemistry Chemical thermodynamics
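The counting argument in the Explanation section can be restated compactly with the standard form of the Gibbs phase rule, where F is the number of degrees of freedom, C the number of independent components and P the number of coexisting phases:

    \[ F = C - P + 2 \]

Fixing the pressure at one atmosphere spends one degree of freedom, leaving F = C − P + 1, so specifying P = C + 1 coexisting phases gives F = 0: the temperature and all phase compositions are fully determined. In the H2O–NaCl example above, C = 2, and the three phases liquid, ice and hydrohalite can therefore coexist only at the single temperature of −21.2 °C.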
Frigorific mixture
[ "Physics", "Chemistry", "Mathematics" ]
879
[ "Applied and interdisciplinary physics", "Thermodynamics", "nan", "Chemical thermodynamics", "Physical chemistry", "Dynamical systems" ]
14,579,013
https://en.wikipedia.org/wiki/Fitted%20carpet
Fitted carpet, also called wall-to-wall carpet, is a carpet intended to cover a floor entirely. Carpet over 4 meters in length is usually installed with the use of a power-stretcher (tubed or tubeless). Fitted carpets were originally woven to the dimensions of the specific area they were covering. They were later made in smaller strips, around the time stair carpet became popular, and woven at the site of the job by the carpet fitter. These carpets were then held in place with individually nailed tacks driven through the carpet around the perimeter and occasionally small rings in the carpet which were folded over. The introduction of tack strip, "tackless strip", "gripper strip", or "Smoothedge" simplified the installation of wall-to-wall carpeting, increasing the neatness of the finish at the wall. Because gripper strips are essentially the same thickness as underlay, using gripper strips yields a level edge, whereas tacking gives an uneven edge. There are three types of carpet: the loop-pile carpet, the cut-pile carpet, and the structured carpet, which combines the first two. Fitted carpet became very popular in the 1960s, when colorful prints gave it a decorative role inside houses. History Thomas Sheraton wrote in 1806 that "since the introduction of carpets, fitted all over the floor of a room, the nicety of flooring anciently practiced in the best houses, is now laid aside". Fitted carpets, assembled from strips, had become popular by the second half of the 18th century, remaining so until the 1870s, when loose carpets and varnished hardwood became the fashion. One of the most famous carpets in history was given by Louis XVI to George Washington. It was woven for the banquet room of Mount Vernon, where it can still be admired today. In the early twentieth century, a new manufacturing method called "tufting" revolutionized the carpeting industry. Invented in Dalton, Georgia, it quickly replaced the traditional method of weaving: the pile yarns are stitched through a textile backing, and a coating is applied to the underside of the backing. From 1930 onward, the mechanization of tufting favored its growth; it now represents 51% of total production, up from only 10% in the 1950s. Fabrication Tufted carpet Tufting is the most common manufacturing technique. It involves stitching yarn tufts into a textile backing, much as a sewing machine does. The carpet is then given a secondary backing (woven fabric, jute, plastic or cotton) glued over the back of the tufts. This technique makes possible the production of cut-pile, looped or structured carpets. Woven carpet Weaving is one of the oldest manufacturing processes. The carpet is woven on a traditional loom, like a rug, and the top and the back of the carpet are made simultaneously. Needled carpet The needling technique starts from several superposed layers of fibers and entangles them with special barbed needles. The carpets obtained are very solid but intended for temporary use, since they do not have the comfort of woven and tufted carpets. Fibers The fibers constitute the carpet's pile. They have a direct impact on the physical properties of the floor covering, such as wear resistance and longevity. There are three types of fiber: animal (wool), vegetable (seagrass, coir, sisal) and synthetic (polyamide or polypropylene). Wool was used for weaving carpets more than five centuries BC and later became the dominant fiber in carpet manufacture.
However, synthetic fibers are predominantly used nowadays. References External links Floors Rugs and carpets
Fitted carpet
[ "Engineering" ]
733
[ "Structural engineering", "Floors" ]
14,579,421
https://en.wikipedia.org/wiki/Introduction%20to%20viruses
A virus is a tiny infectious agent that reproduces inside the cells of living hosts. When infected, the host cell is forced to rapidly produce thousands of identical copies of the original virus. Unlike most living things, viruses do not have cells that divide; new viruses assemble in the infected host cell. But unlike simpler infectious agents like prions, they contain genes, which allow them to mutate and evolve. Over 4,800 species of viruses have been described in detail out of the millions in the environment. Their origin is unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. Viruses are made of either two or three parts. All include genes. These genes contain the encoded biological information of the virus and are built from either DNA or RNA. All viruses are also covered with a protein coat to protect the genes. Some viruses may also have an envelope of fat-like substance that covers the protein coat, and makes them vulnerable to soap. A virus with this "viral envelope" uses it—along with specific receptors—to enter a new host cell. Viruses vary in shape from the simple helical and icosahedral to more complex structures. Viruses range in size from 20 to 300 nanometres; it would take 33,000 to 500,000 of them, side by side, to stretch to one centimetre (0.4 in). Viruses spread in many ways. Although many are very specific about which host species or tissue they attack, each species of virus relies on a particular method to copy itself. Plant viruses are often spread from plant to plant by insects and other organisms, known as vectors. Some viruses of humans and other animals are spread by exposure to infected bodily fluids. Viruses such as influenza are spread through the air by droplets of moisture when people cough or sneeze. Viruses such as norovirus are transmitted by the faecal–oral route, which involves the contamination of hands, food and water. Rotavirus is often spread by direct contact with infected children. The human immunodeficiency virus, HIV, is transmitted by bodily fluids transferred during sex. Others, such as the dengue virus, are spread by blood-sucking insects. Viruses, especially those made of RNA, can mutate rapidly to give rise to new types. Hosts may have little protection against such new forms. Influenza virus, for example, changes often, so a new vaccine is needed each year. Major changes can cause pandemics, as in the 2009 swine influenza that spread to most countries. Often, these mutations take place when the virus has first infected other animal hosts. Examples of such "zoonotic" infections include coronaviruses in bats and influenza in pigs and birds; these viruses circulated in animals before being transferred to humans. Viral infections can cause disease in humans, animals and plants. In healthy humans and animals, infections are usually eliminated by the immune system, which can provide lifetime immunity to the host for that virus. Antibiotics, which work against bacteria, have no impact on viruses, but antiviral drugs can treat life-threatening viral infections. Those vaccines that produce lifelong immunity can prevent some infections. Discovery In 1884, French microbiologist Charles Chamberland invented the Chamberland filter (or Chamberland–Pasteur filter), with pores smaller than bacteria. He could then pass a solution containing bacteria through the filter, and completely remove them. In the early 1890s, Russian biologist Dmitri Ivanovsky used this method to study what became known as the tobacco mosaic virus.
His experiments showed that extracts from the crushed leaves of infected tobacco plants remained infectious after filtration. At the same time, several other scientists showed that, although these agents (later called viruses) were different from bacteria and about one hundred times smaller, they could still cause disease. In 1899, Dutch microbiologist Martinus Beijerinck observed that the agent only multiplied when in dividing cells. He called it a "contagious living fluid" (contagium vivum fluidum)—or a "soluble living germ" because he could not find any germ-like particles. In the early 20th century, English bacteriologist Frederick Twort discovered viruses that infect bacteria, and French-Canadian microbiologist Félix d'Herelle described viruses that, when added to bacteria growing on agar, would lead to the formation of whole areas of dead bacteria. Counting these dead areas allowed him to calculate the number of viruses in the suspension. The invention of the electron microscope in 1931 brought the first images of viruses. In 1935, American biochemist and virologist Wendell Meredith Stanley examined the tobacco mosaic virus (TMV) and found it to be mainly made from protein. A short time later, this virus was shown to be made from protein and RNA. Rosalind Franklin developed X-ray crystallographic pictures and determined the full structure of TMV in 1955. Franklin confirmed that viral proteins formed a hollow spiral tube wrapped by RNA, and also showed that viral RNA was a single strand, not a double helix like DNA. A problem for early scientists was that they did not know how to grow viruses without using live animals. The breakthrough came in 1931, when American pathologists Ernest William Goodpasture and Alice Miles Woodruff grew influenza and several other viruses in fertilised chickens' eggs. Some viruses could not be grown in chickens' eggs. This problem was solved in 1949, when John Franklin Enders, Thomas Huckle Weller, and Frederick Chapman Robbins grew polio virus in cultures of living animal cells. Over 4,800 species of viruses have been described in detail. Origins Viruses co-exist with life wherever it occurs. They have probably existed since living cells first evolved. Their origin remains unclear because they do not fossilize, so molecular techniques have been the best way to hypothesise about how they arose. These techniques rely on the availability of ancient viral DNA or RNA, but most viruses that have been preserved and stored in laboratories are less than 90 years old. Molecular methods have only been successful in tracing the ancestry of viruses that evolved in the 20th century. New groups of viruses might have repeatedly emerged at all stages of the evolution of life. There are three major theories about the origins of viruses: Regressive theory Viruses may have once been small cells that parasitised larger cells. Eventually, the genes they no longer needed for a parasitic way of life were lost. The bacteria Rickettsia and Chlamydia are living cells that, like viruses, can reproduce only inside host cells. This lends credence to this theory, as their dependence on being parasites may have led to the loss of the genes that once allowed them to live on their own. Cellular origin theory Some viruses may have evolved from bits of DNA or RNA that "escaped" from the genes of a larger organism. The escaped DNA could have come from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria.
Coevolution theory Viruses may have evolved from complex molecules of protein and DNA at the same time as cells first appeared on earth, and would have depended on cellular life for many millions of years. There are problems with all of these theories. The regressive hypothesis does not explain why even the smallest of cellular parasites do not resemble viruses in any way. The escape or the cellular origin hypothesis does not explain the presence of unique structures in viruses that do not appear in cells. The coevolution, or "virus-first", hypothesis conflicts with the definition of viruses, because viruses depend on host cells. Also, viruses are recognised as ancient, and to have origins that pre-date the divergence of life into the three domains. This discovery has led modern virologists to reconsider and re-evaluate these three classical hypotheses. Structure A virus particle, also called a virion, consists of genes made from DNA or RNA which are surrounded by a protective coat of protein called a capsid. The capsid is made of many smaller, identical protein molecules called capsomers. The arrangement of the capsomers can either be icosahedral (20-sided), helical, or more complex. There is an inner shell around the DNA or RNA called the nucleocapsid, made out of proteins. Some viruses are surrounded by a bubble of lipid (fat) called an envelope, which makes them vulnerable to soap and alcohol. Size Viruses are among the smallest infectious agents, and are too small to be seen by light microscopy; most of them can only be seen by electron microscopy. Their sizes range from 20 to 300 nanometres; it would take 33,000 to 500,000 of them, side by side, to stretch to one centimetre (0.4 in). In comparison, bacteria are typically around 1,000 nanometres (1 micrometre) in diameter, and the host cells of higher organisms are typically a few tens of micrometres. Some viruses, such as megaviruses and pandoraviruses, are relatively large. At around 1,000 nanometres, these viruses, which infect amoebae, were discovered in 2003 and 2013. They are around ten times wider (and thus a thousand times larger in volume) than influenza viruses, and the discovery of these "giant" viruses astonished scientists. Genes The genes of viruses are made from DNA (deoxyribonucleic acid) and, in many viruses, RNA (ribonucleic acid). The biological information contained in an organism is encoded in its DNA or RNA. Most organisms use DNA, but many viruses have RNA as their genetic material. The DNA or RNA of viruses consists of either a single strand or a double helix. Viruses can reproduce rapidly because they have relatively few genes. For example, influenza virus has only eight genes and rotavirus has eleven. In comparison, humans have 20,000–25,000 genes. Some viral genes contain the code to make the structural proteins that form the virus particle. Other genes make non-structural proteins found only in the cells the virus infects. All cells, and many viruses, produce proteins that are enzymes that drive chemical reactions. Some of these enzymes, called DNA polymerase and RNA polymerase, make new copies of DNA and RNA. A virus's polymerase enzymes are often much more efficient at making DNA and RNA than the equivalent enzymes of the host cells, but viral RNA polymerase enzymes are error-prone, causing RNA viruses to mutate and form new strains. In some species of RNA virus, the genes are not on a continuous molecule of RNA, but are separated. The influenza virus, for example, has eight separate genes made of RNA.
When two different strains of influenza virus infect the same cell, these genes can mix and produce new strains of the virus in a process called reassortment. Protein synthesis Proteins are essential to life. Cells produce new protein molecules from amino acid building blocks based on information coded in DNA. Each type of protein is a specialist that usually only performs one function, so if a cell needs to do something new, it must make a new protein. Viruses force the cell to make new proteins that the cell does not need, but are needed for the virus to reproduce. Protein synthesis consists of two major steps: transcription and translation. Transcription is the process where information in DNA, called the genetic code, is used to produce RNA copies called messenger RNA (mRNA). These migrate through the cell and carry the code to ribosomes where it is used to make proteins. This is called translation because the protein's amino acid structure is determined by the mRNA's code. Information is hence translated from the language of nucleic acids to the language of amino acids. Some nucleic acids of RNA viruses function directly as mRNA without further modification. For this reason, these viruses are called positive-sense RNA viruses. In other RNA viruses, the RNA is a complementary copy of mRNA and these viruses rely on the cell's or their own enzyme to make mRNA. These are called negative-sense RNA viruses. In viruses made from DNA, the method of mRNA production is similar to that of the cell. The species of viruses called retroviruses behave completely differently: they have RNA, but inside the host cell a DNA copy of their RNA is made with the help of the enzyme reverse transcriptase. This DNA is then incorporated into the host's own DNA, and copied into mRNA by the cell's normal pathways. Life-cycle When a virus infects a cell, the virus forces it to make thousands more viruses. It does this by making the cell copy the virus's DNA or RNA, making viral proteins, which all assemble to form new virus particles. There are six basic, overlapping stages in the life cycle of viruses in living cells: Attachment is the binding of the virus to specific molecules on the surface of the cell. This specificity restricts the virus to a very limited type of cell. For example, the human immunodeficiency virus (HIV) infects only human T cells, because its surface protein, gp120, can only react with CD4 and other molecules on the T cell's surface. Plant viruses can only attach to plant cells and cannot infect animals. This mechanism has evolved to favour those viruses that only infect cells in which they are capable of reproducing. Penetration follows attachment; viruses penetrate the host cell by endocytosis or by fusion with the cell. Uncoating happens inside the cell when the viral capsid is removed and destroyed by viral enzymes or host enzymes, thereby exposing the viral nucleic acid. Replication of virus particles is the stage where a cell uses viral messenger RNA in its protein synthesis systems to produce viral proteins. The RNA or DNA synthesis abilities of the cell produce the virus's DNA or RNA. Assembly takes place in the cell when the newly created viral proteins and nucleic acid combine to form hundreds of new virus particles. Release occurs when the new viruses escape or are released from the cell. Most viruses achieve this by making the cells burst, a process called lysis. Other viruses such as HIV are released more gently by a process called budding. 
Effects on the host cell Viruses have an extensive range of structural and biochemical effects on the host cell. These are called cytopathic effects. Most virus infections eventually result in the death of the host cell. The causes of death include cell lysis (bursting), alterations to the cell's surface membrane and apoptosis (cell "suicide"). Often, cell death is caused by the cessation of the cell's normal activity due to proteins produced by the virus, not all of which are components of the virus particle. Some viruses cause no apparent changes to the infected cell. Cells in which the virus is latent (inactive) show few signs of infection and often function normally. This causes persistent infections and the virus is often dormant for many months or years. This is often the case with herpes viruses. Some viruses, such as Epstein–Barr virus, often cause cells to proliferate without causing malignancy, but some other viruses, such as papillomavirus, are an established cause of cancer. When a cell's DNA is damaged by a virus such that the cell cannot repair itself, this often triggers apoptosis. One of the results of apoptosis is destruction of the damaged DNA by the cell itself. Some viruses have mechanisms to limit apoptosis so that the host cell does not die before progeny viruses have been produced; HIV, for example, does this. Viruses and diseases There are many ways in which viruses spread from host to host, but each species of virus uses only one or two. Many viruses that infect plants are carried by organisms; such organisms are called vectors. Some viruses that infect animals, including humans, are also spread by vectors, usually blood-sucking insects, but direct transmission is more common. Some virus infections, such as norovirus and rotavirus, are spread by contaminated food and water, by hands and communal objects, and by intimate contact with another infected person, while others like SARS-CoV-2 and influenza viruses are airborne. Viruses such as HIV, hepatitis B and hepatitis C are often transmitted by unprotected sex or contaminated hypodermic needles. To prevent infections and epidemics, it is important to know how each different kind of virus is spread. In humans Common human diseases caused by viruses include the common cold, influenza, chickenpox and cold sores. Serious diseases such as Ebola and AIDS are also caused by viruses. Many viruses cause little or no disease and are said to be "benign". The more harmful viruses are described as virulent. Viruses cause different diseases depending on the types of cell that they infect. Some viruses can cause lifelong or chronic infections where the viruses continue to reproduce in the body despite the host's defence mechanisms. This is common in hepatitis B virus and hepatitis C virus infections. People chronically infected with a virus are known as carriers. They serve as important reservoirs of the virus. Endemic If the proportion of carriers in a given population reaches a given threshold, a disease is said to be endemic. Before the advent of vaccination, infections with viruses were common and outbreaks occurred regularly. In countries with a temperate climate, viral diseases are usually seasonal. Poliomyelitis, caused by poliovirus, often occurred in the summer months. By contrast, colds, influenza and rotavirus infections are usually a problem during the winter months. Other viruses, such as measles virus, caused outbreaks regularly every third year.
In developing countries, viruses that cause respiratory and enteric infections are common throughout the year. Viruses carried by insects are a common cause of diseases in these settings. Zika and dengue viruses, for example, are transmitted by female Aedes mosquitoes, which bite humans particularly during the mosquitoes' breeding season. Pandemic and emergent Although viral pandemics are rare events, HIV—which evolved from viruses found in monkeys and chimpanzees—has been pandemic since at least the 1980s. During the 20th century, there were four pandemics caused by influenza virus; those that occurred in 1918, 1957 and 1968 were severe. Before its eradication, smallpox was a cause of pandemics for more than 3,000 years. Throughout history, human migration has aided the spread of pandemic infections; first by sea and in modern times also by air. With the exception of smallpox, most pandemics are caused by newly evolved viruses. These "emergent" viruses are usually mutants of less harmful viruses that have circulated previously either in humans or in other animals. Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) are caused by new types of coronaviruses. Other coronaviruses are known to cause mild infections in humans, so the virulence and rapid spread of SARS infections—which by July 2003 had caused around 8,000 cases and 800 deaths—were unexpected, and most countries were not prepared. A related coronavirus emerged in Wuhan, China, in November 2019 and spread rapidly around the world. Thought to have originated in bats and subsequently named severe acute respiratory syndrome coronavirus 2, infections with the virus cause a disease called COVID-19, which varies in severity from mild to deadly and led to a pandemic in 2020. Restrictions unprecedented in peacetime were placed on international travel, and curfews were imposed in several major cities worldwide. In plants There are many types of plant virus, but often they only cause a decrease in yield, and it is not economically viable to try to control them. Plant viruses are frequently spread from plant to plant by organisms called "vectors". These are normally insects, but some fungi, nematode worms and single-celled organisms have also been shown to be vectors. When control of plant virus infections is considered economical (perennial fruits, for example), efforts are concentrated on killing the vectors and removing alternate hosts such as weeds. Plant viruses are harmless to humans and other animals because they can only reproduce in living plant cells. Bacteriophages Bacteriophages are viruses that infect bacteria and archaea. They are important in marine ecology: as the infected bacteria burst, carbon compounds are released back into the environment, which stimulates fresh organic growth. Bacteriophages are useful in scientific research because they are harmless to humans and can be studied easily. These viruses can be a problem in industries that produce food and drugs by fermentation and depend on healthy bacteria. Some bacterial infections are becoming difficult to control with antibiotics, so there is a growing interest in the use of bacteriophages to treat infections in humans. Host resistance Innate immunity of animals Animals, including humans, have many natural defences against viruses. Some are non-specific and protect against many viruses regardless of the type. This innate immunity is not improved by repeated exposure to viruses and does not retain a "memory" of the infection.
The skin of animals, particularly its surface, which is made from dead cells, prevents many types of viruses from infecting the host. The acidity of the contents of the stomach destroys many viruses that have been swallowed. When a virus overcomes these barriers and enters the host, other innate defences prevent the spread of infection in the body. A special hormone called interferon is produced by the body when viruses are present, and this stops the viruses from reproducing by killing the infected cells and their close neighbours. Inside cells, there are enzymes that destroy the RNA of viruses. This is called RNA interference. Some blood cells engulf and destroy other virus-infected cells. Adaptive immunity of animals Specific immunity to viruses develops over time and white blood cells called lymphocytes play a central role. Lymphocytes retain a "memory" of virus infections and produce many special molecules called antibodies. These antibodies attach to viruses and stop the virus from infecting cells. Antibodies are highly selective and attack only one type of virus. The body makes many different antibodies, especially during the initial infection. After the infection subsides, some antibodies remain and continue to be produced, usually giving the host lifelong immunity to the virus. Plant resistance Plants have elaborate and effective defence mechanisms against viruses. One of the most effective is the presence of so-called resistance (R) genes. Each R gene confers resistance to a particular virus by triggering localised areas of cell death around the infected cell, which can often be seen with the unaided eye as large spots. This stops the infection from spreading. RNA interference is also an effective defence in plants. When they are infected, plants often produce natural disinfectants that destroy viruses, such as salicylic acid, nitric oxide and reactive oxygen molecules. Resistance to bacteriophages The major way bacteria defend themselves from bacteriophages is by producing enzymes which destroy foreign DNA. These enzymes, called restriction endonucleases, cut up the viral DNA that bacteriophages inject into bacterial cells. Prevention and treatment of viral disease Vaccines Vaccines simulate a natural infection and its associated immune response, but do not cause the disease. Their use has resulted in the eradication of smallpox and a dramatic decline in illness and death caused by infections such as polio, measles, mumps and rubella. Vaccines are available to prevent over fourteen viral infections of humans and more are used to prevent viral infections of animals. Vaccines may consist of either live or killed viruses. Live vaccines contain weakened forms of the virus, but these vaccines can be dangerous when given to people with weak immunity. In these people, the weakened virus can cause the original disease. Biotechnology and genetic engineering techniques are used to produce "designer" vaccines that only have the capsid proteins of the virus. Hepatitis B vaccine is an example of this type of vaccine. These vaccines are safer because they can never cause the disease. Antiviral drugs Since the mid-1980s, the development of antiviral drugs has increased rapidly, mainly driven by the AIDS pandemic. Antiviral drugs are often nucleoside analogues, which masquerade as DNA building blocks (nucleosides). When the replication of virus DNA begins, some of the fake building blocks are used. 
This prevents DNA replication because the drugs lack the essential features that allow the formation of a DNA chain. When DNA production stops, the virus can no longer reproduce. Examples of nucleoside analogues are aciclovir for herpes virus infections and lamivudine for HIV and hepatitis B virus infections. Aciclovir is one of the oldest and most frequently prescribed antiviral drugs. Other antiviral drugs target different stages of the viral life cycle. HIV is dependent on an enzyme called the HIV-1 protease for the virus to become infectious. There is a class of drugs called protease inhibitors, which bind to this enzyme and stop it from functioning. Hepatitis C is caused by an RNA virus. In 80% of those infected, the disease becomes chronic, and they remain infectious for the rest of their lives unless they are treated. There are effective treatments that use direct-acting antivirals. Treatments for chronic carriers of the hepatitis B virus have been developed by a similar strategy, using lamivudine and other anti-viral drugs. In both diseases, the drugs stop the virus from reproducing and the interferon kills any remaining infected cells. HIV infections are usually treated with a combination of antiviral drugs, each targeting a different stage in the virus's life cycle. There are drugs that prevent the virus from attaching to cells, others that are nucleoside analogues and some that poison the enzymes the virus needs to reproduce. The success of these drugs is proof of the importance of knowing how viruses reproduce. Role in ecology Viruses are the most abundant biological entity in aquatic environments; one teaspoon of seawater contains about ten million viruses, and they are essential to the regulation of saltwater and freshwater ecosystems. Most are bacteriophages, which are harmless to plants and animals. They infect and destroy the bacteria in aquatic microbial communities, and this is the most important mechanism of recycling carbon in the marine environment. The organic molecules released from the bacterial cells by the viruses stimulate fresh bacterial and algal growth. Microorganisms constitute more than 90% of the biomass in the sea. It is estimated that viruses kill approximately 20% of this biomass each day and that there are fifteen times as many viruses in the oceans as there are bacteria and archaea. They are mainly responsible for the rapid destruction of harmful algal blooms, which often kill other marine life. The number of viruses in the oceans decreases further offshore and deeper into the water, where there are fewer host organisms. Their effects are far-reaching; by increasing the amount of respiration in the oceans, viruses are indirectly responsible for reducing the amount of carbon dioxide in the atmosphere by approximately 3 gigatonnes of carbon per year. Marine mammals are also susceptible to viral infections. In 1988 and 2002, thousands of harbour seals were killed in Europe by phocine distemper virus. Many other viruses, including caliciviruses, herpesviruses, adenoviruses and parvoviruses, circulate in marine mammal populations. Viruses can also serve as an alternative food source for microorganisms which engage in virovory, supplying nucleic acids, nitrogen, and phosphorus through their consumption. See also References Notes Bibliography External links Virus Pathogen Resource – Genomic and other research data about human pathogenic viruses Influenza Research Database – Genomic and other research data about influenza viruses
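The side-by-side counts quoted in the lead and in the Size section follow from simple arithmetic: since 1 cm = 10^7 nm and virions measure 20 to 300 nm,

    \[ \frac{10^{7}\,\mathrm{nm}}{300\,\mathrm{nm}} \approx 33{,}000 \qquad \text{and} \qquad \frac{10^{7}\,\mathrm{nm}}{20\,\mathrm{nm}} = 500{,}000, \]

which reproduces the stated range of 33,000 to 500,000 virus particles per centimetre.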
Introduction to viruses
[ "Biology" ]
5,662
[ "Viruses", "Tree of life (biology)", "Microorganisms" ]
4,161,179
https://en.wikipedia.org/wiki/Electronic%20common%20technical%20document
The electronic common technical document (eCTD) is an interface and international specification for pharmaceutical industry-to-agency transfer of regulatory information. The specification is based on the Common Technical Document (CTD) format and was developed by the International Council for Harmonisation (ICH) Multidisciplinary Group 2 Expert Working Group (ICH M2 EWG). History Version 2.0 of eCTD – an upgrade over the original CTD – was finalized on February 12, 2002, and version 3.0 was finalized on October 8 of the same year. The most current version is 3.2.2, released on July 16, 2008. A Draft Implementation Guide (DIG) for version 4.0 of eCTD was released in August 2012. However, work stalled on the project. An additional Draft Implementation Guide was released in February 2015. The ICH and the FDA released draft specifications and guides in April 2016, and on May 13, 2016, there was an ICH "teleconference" to discuss the guidance and any queries or clarifications that might be necessary. U.S. On May 5, 2015, the U.S. Food & Drug Administration published a final, binding guidance document requiring certain submissions in electronic (eCTD) format within 24 months. The projected date for mandatory electronic submissions is May 5, 2017, for New Drug Applications (NDAs), Biologic License Applications (BLAs), Abbreviated New Drug Applications (ANDAs) and Drug Master Files (DMFs). Canada Health Canada was a sponsor and an early adopter of the eCTD workflow, especially for its Health Products and Food Branch regulator, but as of April 2015 had not yet fully automated it. E.U. The E.U. and its European Medicines Agency began accepting eCTD submissions in 2003. In February 2015, the "EMA announced it would no longer accept paper application forms for products applying to the centralized procedure beginning 1 July 2015." The EMA verified on that date that it would no longer accept "human and veterinary centralised procedure applications" in paper form and that all electronic application forms would have to be eCTD by January 2016. China In November 2017, the China Food and Drug Administration (CFDA) published a draft eCTD structure for drug registration for public consultation. This marks a major transition for China from paper submissions to eCTD submissions. Japan Japan's Pharmaceuticals and Medical Devices Agency (PMDA) has been eCTD-compliant at least since December 2017. Governing specifications The Electronic Common Technical Document Specification, the main ICH standard, largely determines the structure of an eCTD submission. However, additional specifications may be applied in national and continental contexts. In the United States, the Food and Drug Administration (FDA) layers additional specifications onto its requirements for eCTD submissions, including PDF, transmission, file format, and supportive file specifications. In the European Union, the European Medicines Agency's EU Module 1 specification as well as other QA documents lay out additional requirements for eCTD submissions. Pharmaceutical point of view The eCTD has five modules: Module 1, administrative information and prescribing information; Module 2, common technical document summaries; Module 3, quality; Module 4, nonclinical study reports; and Module 5, clinical study reports. A full table of contents could be quite large. There are two categories of modules: the regional module, Module 1 (different for each region, i.e. country), and the common modules, Modules 2–5 (common to all regions). The CTD defines the content only of the common modules. The contents of the regional Module 1 are defined by each of the ICH regions (USA, Europe and Japan).
IT point of view eCTD (data structure) The eCTD is a message specification for the transfer of files and metadata from a submitter to a receiver. The primary technical components are: a high-level folder structure (required); an XML "backbone" file that provides metadata about content files and lifecycle instructions for the receiving system; an optional lower-level folder structure (recommended folder names are provided in Appendix 4 of the eCTD specification); and associated document type definitions (DTDs) and stylesheets. Each submission message constitutes one "sequence". A cumulative eCTD consists of one or more sequences. While a single sequence may be viewed with a web browser and the ICH stylesheet provided, viewing a cumulative eCTD requires specialized eCTD viewers. The top part of the directory structure is as follows:
ctd-123456/0000/index.xml
ctd-123456/0000/index-md5.txt
ctd-123456/0000/m1
ctd-123456/0000/m2
ctd-123456/0000/m3
ctd-123456/0000/m4
ctd-123456/0000/m5
ctd-123456/0000/util
The string ctd-123456/0000 is just an example. Backbone (header) This is the file index.xml in the submission sequence number folder, for example ctd-123456/0000/index.xml. The purpose of this file is twofold: to manage metadata for the entire submission, and to constitute a comprehensive table of contents providing the corresponding navigation aids. Stylesheets Stylesheets that support the presentation and navigation should be included. They must be placed in the directory ctd-123456/0000/util/style (see entry 377 in Appendix 4). DTDs DTDs must be placed in the directory ctd-123456/0000/util/dtd (see entries 371–76 in Appendix 4). They must follow a naming convention. The DTD of the backbone is in Appendix 8; it must be placed in the above directory. Business process (protocol) The business process to be supported can be described as follows: Industry <-----> Message <-----> Agency Lifecycle management consists at least of an initial submission, which should be self-contained, and incremental updates, each with its own sequence number. See also Clinical trial Clinical Data Interchange Standards Consortium European Medicines Agency (EMA) Food and Drug Administration (FDA) Ministry of Health, Labour and Welfare (Japan) Russian Ministry of Healthcare and Social Development (Russia) References External links eCTD Specification and Related Files (ICH) Electronic Common Technical Document (eCTD) (FDA) EU Module 1 (EMA) Clinical research Clinical data management Health informatics Health standards International standards International trade World government Food and Drug Administration Health Canada Intellectual property law Pharmaceutical industry Medical research Drug safety Experimental drugs Biotechnology products Regulators of biotechnology products Regulation in the European Union
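The folder layout above can be generated mechanically. The following Python sketch builds the skeleton of one submission sequence; it is an illustration under stated assumptions, not a validated implementation of the eCTD specification: the index.xml written here is an empty placeholder (a real backbone must carry the metadata and lifecycle instructions described above and validate against the ICH DTD), and index-md5.txt is assumed, following its name, to hold the MD5 checksum of index.xml.

    import hashlib
    from pathlib import Path

    def build_sequence_skeleton(root: str, sequence: str = "0000") -> Path:
        """Create the top-level eCTD folder structure for one sequence."""
        seq = Path(root) / sequence
        # Regional module m1, common modules m2-m5, and the util folders
        # for DTDs and stylesheets, as listed above.
        for sub in ("m1", "m2", "m3", "m4", "m5", "util/dtd", "util/style"):
            (seq / sub).mkdir(parents=True, exist_ok=True)

        # Placeholder backbone; a real index.xml must validate against
        # the ICH DTD placed in util/dtd.
        index = seq / "index.xml"
        index.write_text('<?xml version="1.0" encoding="UTF-8"?>\n'
                         '<!-- placeholder eCTD backbone -->\n')

        # Checksum file (assumed to contain the MD5 of index.xml).
        md5 = hashlib.md5(index.read_bytes()).hexdigest()
        (seq / "index-md5.txt").write_text(md5 + "\n")
        return seq

    build_sequence_skeleton("ctd-123456")  # initial submission, sequence 0000

An incremental update would then be a new sequence folder (for example ctd-123456/0001) with its own backbone, matching the lifecycle description above.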
Electronic common technical document
[ "Chemistry", "Biology" ]
1,372
[ "Pharmacology", "Biotechnology products", "Regulation of biotechnologies", "Life sciences industry", "Pharmaceutical industry", "Regulators of biotechnology products", "Drug safety", "Health informatics", "Medical technology" ]
4,161,647
https://en.wikipedia.org/wiki/INFOhio
INFOhio, the Information Network for Ohio schools, is the state's virtual PreK-12 library that uses the existing school telecommunications infrastructure to address equity issues by providing electronic resources, library automation, and other services to Ohio schools. These resources are linked to student achievement and performance, standards-based instruction, teacher effectiveness, and technological competency and are accessible from not only the school library, but also from classroom, lab, and home computers. INFOhio provides the standardized library automation software to put card catalogs online, which makes it possible for students and educators to access a variety of materials, including books and other resources in the school library as well as other libraries across the state. Since 1994, INFOhio has automated more than 2,343 school libraries serving more than 1.1 million students. External links INFOhio Home Page Library-related organizations
INFOhio
[ "Technology" ]
177
[ "Computing stubs" ]