| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
75,165,357 | https://en.wikipedia.org/wiki/Groq | Groq, Inc. is an American artificial intelligence (AI) company that builds an AI accelerator application-specific integrated circuit (ASIC) that they call the Language Processing Unit (LPU) and related hardware to accelerate the inference performance of AI workloads.
Examples of the types of AI workloads that run on Groq's LPU include large language models (LLMs), image classification, anomaly detection, and predictive analysis.
Groq is headquartered in Mountain View, CA, and has offices in San Jose, CA; Liberty Lake, WA; Toronto, Canada; and London, U.K., as well as remote employees throughout North America and Europe.
History
Groq was founded in 2016 by a group of former Google engineers, led by Jonathan Ross, one of the designers of the Tensor Processing Unit (TPU), an AI accelerator ASIC, and Douglas Wightman, an entrepreneur and former engineer at Google X (known as X Development), who served as the company’s first CEO.
Groq received seed funding from Social Capital's Chamath Palihapitiya, with a $10 million investment in 2017 and soon after secured additional funding.
In April 2021, Groq raised $300 million in a series C round led by Tiger Global Management and D1 Capital Partners. Current investors include: The Spruce House Partnership, Addition, GCM Grosvenor, Xⁿ, Firebolt Ventures, General Global Capital, and Tru Arrow Partners, as well as follow-on investments from TDK Ventures, XTX Ventures, Boardman Bay Capital Management, and Infinitum Partners. After Groq's series C funding round, it was valued at over $1 billion, making the startup a unicorn.
On March 1, 2022, Groq acquired Maxeler Technologies, a company known for its dataflow systems technologies.
On August 16, 2023, Groq selected Samsung Electronics foundry in Taylor, Texas to manufacture its next generation chips, on Samsung's 4-nanometer (nm) process node. This was the first order at this new Samsung chip factory.
On February 19, 2024, Groq soft launched a developer platform, GroqCloud, to attract developers into using the Groq API and renting access to its chips. On March 1, 2024, Groq acquired Definitive Intelligence, a startup known for offering a range of business-oriented AI solutions, to help with its cloud platform.
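GroqCloud exposes an OpenAI-style chat-completions interface through its Python SDK. The following minimal sketch assumes the `groq` package and an API key in the environment; the model name is an illustrative placeholder, since available models change over time.

```python
# Minimal sketch of a GroqCloud chat-completion request.
# Assumes `pip install groq` and a GROQ_API_KEY environment variable;
# the model name is an illustrative placeholder.
import os

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama3-8b-8192",  # illustrative; consult GroqCloud for current models
    messages=[{"role": "user", "content": "Summarize what an LPU does."}],
)
print(completion.choices[0].message.content)
```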
Groq raised $640 million in a series D round led by BlackRock Private Equity Partners in August 2024, valuing the company at $2.8 billion.
Language Processing Unit
Groq initially named its ASIC the Tensor Streaming Processor (TSP) but later rebranded it as the Language Processing Unit (LPU).
The LPU features a functionally sliced microarchitecture, where memory units are interleaved with vector and matrix computation units. This design facilitates the exploitation of dataflow locality in AI compute graphs, improving execution performance and efficiency. The LPU was designed around two key observations:
AI workloads exhibit substantial data parallelism, which can be mapped onto purpose-built hardware, leading to performance gains.
A deterministic processor design, coupled with a producer-consumer programming model, allows for precise control and reasoning over hardware components, allowing for optimized performance and energy efficiency.
In addition to its functionally sliced microarchitecture, the LPU can also be characterized by its single-core, deterministic architecture. The LPU achieves deterministic execution by avoiding traditional reactive hardware components (branch predictors, arbiters, reorder buffers, caches) and by having all execution explicitly controlled by the compiler, thereby guaranteeing determinism in the execution of an LPU program.
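To illustrate what compiler-controlled, deterministic execution means in practice, the toy Python sketch below replays a fixed, pre-computed schedule cycle by cycle, with no branch prediction, arbitration, or caching at run time. It is a conceptual analogy only, not a model of the actual LPU instruction set.

```python
# Toy illustration of static, compiler-determined scheduling (not the real LPU ISA).
# Every instruction is pinned to an exact (cycle, functional unit) slot ahead of
# time, so ordering and timing are fully known before execution begins.
schedule = [
    (0, "mem",    "load   v0 <- weights[0]"),
    (1, "mem",    "load   v1 <- activations[0]"),
    (2, "matrix", "matmul v2 <- v0, v1"),
    (3, "vector", "relu   v3 <- v2"),
    (4, "mem",    "store  outputs[0] <- v3"),
]

for cycle, unit, op in schedule:  # no reactive hardware: just replay the plan
    print(f"cycle {cycle} | {unit:>6} | {op}")
```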
The first generation of the LPU (LPU v1) yields a computational density of more than 1 TeraOp/s per square mm of silicon for its 25×29 mm, 14 nm chip operating at a nominal clock frequency of 900 MHz. The second generation of the LPU (LPU v2) will be manufactured on Samsung's 4 nm process node.
Performance
Groq emerged as the first API provider to break the 100-tokens-per-second generation rate while running Meta's 70-billion-parameter Llama 2 model.
Groq hosts a variety of open-source large language models running on its LPUs for public access; these demos are available through Groq's website. The LPU's performance while running these open-source LLMs has been independently benchmarked by ArtificialAnalysis.ai in comparison with other LLM providers.
See also
Central processing unit
Graphics processing unit
References
External links
Computer companies of the United States
Companies based in California
Companies based in Sunnyvale, California
Companies based in Silicon Valley
Computer hardware companies
Semiconductor companies of the United States
Fabless semiconductor companies
Electronics companies established in 2016
2016 establishments in California | Groq | Technology | 1,047 |
79,668 | https://en.wikipedia.org/wiki/Uniq | uniq is a utility command on Unix, Plan 9, Inferno, and Unix-like operating systems which, when fed a text file or standard input, outputs the text with adjacent identical lines collapsed to a single, unique line of text.
Overview
The command is a kind of filter program. Typically it is used after sort. It can also output only the duplicate lines (with the -d option), or add the number of occurrences of each line (with the -c option). For example, the following command lists the unique lines in a file, sorted by the number of times each occurs:
$ sort file | uniq -c | sort -n
Using uniq like this is common when building pipelines in shell scripts.
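For comparison, the sketch below reproduces the effect of that pipeline in Python; it is an illustrative equivalent, not how the C utility is implemented, and it assumes an input file named file.

```python
# Python equivalent of `sort file | uniq -c | sort -n` (illustrative only).
from itertools import groupby

with open("file") as fh:
    lines = sorted(fh)                      # `sort`

counts = [(len(list(group)), line)          # `uniq -c`: count adjacent duplicates
          for line, group in groupby(lines)]

for count, line in sorted(counts):          # `sort -n`: order by occurrence count
    print(f"{count:>7} {line}", end="")
```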
History
First appearing in Version 3 Unix, uniq is now available for a number of different Unix and Unix-like operating systems. It has been part of the X/Open Portability Guide since issue 2, published in 1987, and was inherited by the first version of POSIX and the Single Unix Specification.
The version bundled in GNU coreutils was written by Richard Stallman and David MacKenzie.
A uniq command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.
The command is available as a separate package for Microsoft Windows as part of the GnuWin32 project and the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
The command has also been ported to the IBM i operating system.
See also
List of Unix commands
References
External links
SourceForge UnxUtils – Port of several GNU utilities to Windows
Unix text processing utilities
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands
IBM i Qshell commands | Uniq | Technology | 355 |
49,224,749 | https://en.wikipedia.org/wiki/Anthony%20Zboralski | Anthony Zboralski is a French hacker, artist and internet entrepreneur.
Computer hacking
In 1994, Zboralski used social engineering against the FBI to connect to the internet and set up teleconferences with other hackers. Over a period of four months, Zboralski posed as the FBI's legal attaché in Paris, Thomas Baker, costing the agency $250,000.
Zboralski was part of a group of hackers called w00w00. He went by the handles "gaius" and "kugutsumen".
Career
From 2011 to 2013, Zboralski worked as a managing consultant for IOActive. In 2014, Zboralski co-founded Belua Systems Limited.
Television
Surfez Couvert was a TV show co-written by Antoine Rivière and Zboralski in which Zboralski offered practical recommendations on how to protect one's "life 2.0". It began airing on Game One in 2013 and was co-produced by Flair Production and MTV Networks France.
See also
Gameplay of Eve Online § Developer misconduct
Parmiter's School
References
External links
1975 births
Living people
French businesspeople
21st-century French male artists
Hackers | Anthony Zboralski | Technology | 248 |
167,632 | https://en.wikipedia.org/wiki/Chaperone%20%28protein%29 | In molecular biology, molecular chaperones are proteins that assist the conformational folding or unfolding of large proteins or macromolecular protein complexes. There are a number of classes of molecular chaperones, all of which function to assist large proteins in proper protein folding during or after synthesis, and after partial denaturation. Chaperones are also involved in the translocation of proteins for proteolysis.
The first molecular chaperones discovered were a type of assembly chaperones which assist in the assembly of nucleosomes from folded histones and DNA. One major function of molecular chaperones is to prevent the aggregation of misfolded proteins, thus many chaperone proteins are classified as heat shock proteins, as the tendency for protein aggregation is increased by heat stress.
The majority of molecular chaperones do not convey any steric information for protein folding, and instead assist in protein folding by binding to and stabilizing folding intermediates until the polypeptide chain is fully translated. The specific mode of function of chaperones differs based on their target proteins and location. Various approaches have been applied to study the structure, dynamics and functioning of chaperones. Bulk biochemical measurements have informed us on the protein folding efficiency, and prevention of aggregation when chaperones are present during protein folding. Recent advances in single-molecule analysis have brought insights into structural heterogeneity of chaperones, folding intermediates and affinity of chaperones for unstructured and structured protein chains.
Functions of molecular chaperones
Many chaperones are heat shock proteins, that is, proteins expressed in response to elevated temperatures or other cellular stresses. Heat shock protein chaperones are classified based on their observed molecular weights into Hsp60, Hsp70, Hsp90, Hsp104, and small Hsps. The Hsp60 family of protein chaperones are termed chaperonins, and are characterized by a stacked double-ring structure and are found in prokaryotes, in the cytosol of eukaryotes, and in mitochondria.
Some chaperone systems work as foldases: they support the folding of proteins in an ATP-dependent manner (for example, the GroEL/GroES or the DnaK/DnaJ/GrpE system). Although most newly synthesized proteins can fold in the absence of chaperones, a minority strictly requires them. Other chaperones work as holdases: they bind folding intermediates to prevent their aggregation, for example DnaJ or Hsp33.
Chaperones can also work as disaggregases, which interact with aberrant protein assemblies and revert them to monomers. Some chaperones can assist in protein degradation, leading proteins to protease systems, such as the ubiquitin-proteasome system in eukaryotes. Chaperone proteins participate in the folding of over half of all mammalian proteins.
Macromolecular crowding may be important in chaperone function. The crowded environment of the cytosol can accelerate the folding process, since a compact folded protein will occupy less volume than an unfolded protein chain. However, crowding can reduce the yield of correctly folded protein by increasing protein aggregation. Crowding may also increase the effectiveness of the chaperone proteins such as GroEL, which could counteract this reduction in folding efficiency. Some highly specific 'steric chaperones' convey unique structural information onto proteins, which cannot be folded spontaneously. Such proteins violate Anfinsen's dogma, requiring protein dynamics to fold correctly.
Other types of chaperones are involved in transport across membranes, for example membranes of the mitochondria and endoplasmic reticulum (ER) in eukaryotes. A bacterial translocation-specific chaperone SecB maintains newly synthesized precursor polypeptide chains in a translocation-competent (generally unfolded) state and guides them to the translocon.
New functions for chaperones continue to be discovered, such as bacterial adhesin activity, induction of aggregation towards non-amyloid aggregates, suppression of toxic protein oligomers via their clustering, and in responding to diseases linked to protein aggregation and cancer maintenance.
Human chaperone proteins
In human cell lines, chaperone proteins were found to compose ~10% of the gross proteome mass, and are ubiquitously and highly expressed across human tissues.
Chaperones are found extensively in the endoplasmic reticulum (ER), since protein synthesis often occurs in this area.
Endoplasmic reticulum
In the endoplasmic reticulum (ER) there are general, lectin- and non-classical molecular chaperones that moderate protein folding.
General chaperones: GRP78/BiP, GRP94, GRP170.
Lectin chaperones: calnexin and calreticulin
Non-classical molecular chaperones: HSP47 and ERp29
Folding chaperones:
Protein disulfide isomerase (PDI),
Peptidyl prolyl cis-trans isomerase (PPI), Prolyl isomerase
ERp57
Nomenclature and examples of chaperone families
There are many different families of chaperones; each family acts to aid protein folding in a different way. In bacteria like E. coli, many of these proteins are highly expressed under conditions of high stress, for example, when the bacterium is exposed to high temperatures; thus, heat shock protein chaperones are the most extensive group.
A variety of nomenclatures are in use for chaperones. As heat shock proteins, the names are classically formed by "Hsp" followed by the approximate molecular mass in kilodaltons; such names are commonly used for eukaryotes such as yeast. The bacterial names have more varied forms, and refer directly to their apparent function at discovery. For example, "GroEL" originally stands for "phage growth defect, overcome by mutation in phage gene E, large subunit".
Hsp10 and Hsp60
Hsp10/60 (the GroEL/GroES complex in E. coli) is the best characterized large (~ 1 MDa) chaperone complex. GroEL (Hsp60) is a double-ring 14mer with a hydrophobic patch at its opening; it is so large it can accommodate native folding of 54-kDa GFP in its lumen. GroES (Hsp10) is a single-ring heptamer that binds to GroEL in the presence of ATP or ADP. GroEL/GroES may not be able to undo previous aggregation, but it does compete in the pathway of misfolding and aggregation. The complex also acts as a molecular chaperone in the mitochondrial matrix.
Hsp70 and Hsp40
Hsp70 (DnaK in E. coli) is perhaps the best characterized small (~ 70 kDa) chaperone. The Hsp70 proteins are aided by Hsp40 proteins (DnaJ in E. coli), which increase the ATP consumption rate and activity of the Hsp70s. The two proteins are named "Dna" in bacteria because they were initially identified as being required for E. coli DNA replication.
It has been noted that increased expression of Hsp70 proteins in the cell results in a decreased tendency toward apoptosis. Although a precise mechanistic understanding has yet to be determined, it is known that Hsp70s have a high-affinity bound state to unfolded proteins when bound to ADP, and a low-affinity state when bound to ATP.
It is thought that many Hsp70s crowd around an unfolded substrate, stabilizing it and preventing aggregation until the unfolded molecule folds properly, at which time the Hsp70s lose affinity for the molecule and diffuse away. Hsp70 also acts as a mitochondrial and chloroplastic molecular chaperone in eukaryotes.
Hsp90
Hsp90 (HtpG in E. coli) may be the least understood chaperone. Its molecular weight is about 90 kDa, and it is necessary for viability in eukaryotes (possibly for prokaryotes as well). Heat shock protein 90 (Hsp90) is a molecular chaperone essential for activating many signaling proteins in the eukaryotic cell.
Each Hsp90 has an ATP-binding domain, a middle domain, and a dimerization domain. Originally thought to clamp onto their substrate protein (also known as a client protein) upon binding ATP, the recently published structures by Vaughan et al. and Ali et al. indicate that client proteins may bind externally to both the N-terminal and middle domains of Hsp90.
Hsp90 may also require co-chaperones such as immunophilins, Sti1, p50 (Cdc37), and Aha1, and it also cooperates with the Hsp70 chaperone system.
Hsp100
Hsp100 (Clp family in E. coli) proteins have been studied in vivo and in vitro for their ability to target and unfold tagged and misfolded proteins.
Proteins in the Hsp100/Clp family form large hexameric structures with unfoldase activity in the presence of ATP. These proteins are thought to function as chaperones by processively threading client proteins through a small 20 Å (2 nm) pore, thereby giving each client protein a second chance to fold.
Some of these Hsp100 chaperones, like ClpA and ClpX, associate with the double-ringed tetradecameric serine protease ClpP; instead of catalyzing the refolding of client proteins, these complexes are responsible for the targeted destruction of tagged and misfolded proteins.
Hsp104, the Hsp100 of Saccharomyces cerevisiae, is essential for the propagation of many yeast prions. Deletion of the HSP104 gene results in cells that are unable to propagate certain prions.
Bacteriophage
The genes of bacteriophage (phage) T4 that encode proteins with a role in determining phage T4 structure were identified using conditional lethal mutants. Most of these proteins proved to be either major or minor structural components of the completed phage particle. However, among the gene products (gps) necessary for phage assembly, Snustad identified a group of gps that act catalytically rather than being incorporated into the phage structure. These gps were gp26, gp31, gp38, gp51, gp28, and gp4 [gene 4 is synonymous with genes 50 and 65, so the gene product can be designated gp4(50)(65)]. The first four of these six gene products have since been recognized as chaperone proteins. Additionally, gp40, gp57A, gp63, and gpwac have also now been identified as chaperones.
Phage T4 morphogenesis is divided into three independent pathways: the head, the tail and the long tail fiber pathways as detailed by Yap and Rossman. With regard to head morphogenesis, chaperone gp31 interacts with the bacterial host chaperone GroEL to promote proper folding of the major head capsid protein gp23. Chaperone gp40 participates in the assembly of gp20, thus aiding in the formation of the connector complex that initiates head procapsid assembly. Gp4(50)(65), although not specifically listed as a chaperone, acts catalytically as a nuclease that appears to be essential for morphogenesis by cleaving packaged DNA to enable the joining of heads to tails.
During overall tail assembly, chaperone proteins gp26 and gp51 are necessary for baseplate hub assembly. Gp57A is required for correct folding of gp12, a structural component of the baseplate short tail fibers.
Synthesis of the long tail fibers depends on the chaperone protein gp57A that is needed for the trimerization of gp34 and gp37, the major structural proteins of the tail fibers. The chaperone protein gp38 is also required for the proper folding of gp37. Chaperone proteins gp63 and gpwac are employed in attachment of the long tail fibers to the tail baseplate.
History
The investigation of chaperones has a long history. The term "molecular chaperone" appeared first in the literature in 1978, and was invented by Ron Laskey to describe the ability of a nuclear protein called nucleoplasmin to prevent the aggregation of folded histone proteins with DNA during the assembly of nucleosomes. The term was later extended by R. John Ellis in 1987 to describe proteins that mediated the post-translational assembly of protein complexes. In 1988, it was realised that similar proteins mediated this process in both prokaryotes and eukaryotes. The details of this process were determined in 1989, when the ATP-dependent protein folding was demonstrated in vitro.
Clinical significance
There are many disorders associated with mutations in genes encoding chaperones (i.e. multisystem proteinopathy) that can affect muscle, bone and/or the central nervous system.
See also
Biological machines
Chaperome
Chaperonin
Chemical chaperones
Heat shock protein
Heat shock factor 1
Molecular chaperone therapy
Pharmacoperone
Proteasome
Protein dynamics
Notes
References
Protein biosynthesis | Chaperone (protein) | Chemistry | 2,804 |
275,735 | https://en.wikipedia.org/wiki/Shackle | A shackle (or shacklebolt), also known as a gyve, is a U-shaped piece of metal secured with a clevis pin or bolt across the opening, or a hinged metal loop secured with a quick-release locking pin mechanism. The term also applies to handcuffs and other similarly conceived restraint devices that function in a similar manner. Shackles are the primary connecting link in all manner of rigging systems, from boats and ships to industrial crane rigging, as they allow different rigging subsets to be connected or disconnected quickly.
A shackle is also the similarly shaped piece of metal used with a locking mechanism in padlocks. A carabiner is a type of shackle used in mountaineering.
Types
Bow shackle
With a larger "O" shape to the loop, this shackle can take loads from many directions without developing as much side load. However, the larger shape to the loop does reduce its overall strength. Also referred to as an anchor shackle.
D-shackle
Also known as a chain shackle, D-shackles are narrow shackles shaped like a loop of chain, usually with a pin or threaded pin closure. D-shackles are very common and most other shackle types are a variation of the D-shackle. The small loop can take high loads primarily in line. Side and racking loads may twist or bend a D-shackle.
Headboard shackle
This longer version of a D-shackle is used to attach halyards to sails, especially sails fitted with a headboard such as on Bermuda rigged boats. Headboard shackles are often stamped from flat strap stainless steel, and feature an additional pin between the top of the loop and the bottom so the headboard does not chafe the spliced eye of the halyard.
Pin shackle
A pin shackle is closed with an anchor bolt and cotter pin, in a manner similar to a clevis; for this reason they are often referred to, in industrial jargon, as clevises. Pin shackles can be inconvenient to work with at times, as the bolt must be secured to the shackle body to avoid its loss, usually with a split pin or seizing wire. A more secure version used in crane rigging features a securing nut alongside the cotter pin. Pin shackles are practical in many rigging applications where the anchor bolt is expected to experience some rotation.
Snap shackle
As the name implies, a snap shackle is a fast-action fastener which can be operated single-handedly. It uses a spring-activated locking mechanism to close a hinged shackle, and can be unfastened under load. This is a potential safety hazard, but can also be extremely useful at times. The snap shackle is not as secure as other forms of shackle, but is handy for temporary uses or for fittings which must be moved or replaced often, such as a sailor's harness tether or spinnaker sheet attachments. When this type of shackle is used to release a significant load, it releases poorly and the pin assembly or split ring is likely to fail.
Threaded shackle
The pin is threaded, and one leg of the shackle is tapped. The pin may be captive, which means it is mated to the shackle, usually with a wire. The threads may gall if overtightened or if they have corroded in salty air, so a liberal coating of lanolin or heavy grease is not out of place on any and all threads. A shackle key or metal marlinspike are useful tools for loosening a tight nut.
For safety, it is common to mouse a threaded shackle to keep the pin from coming loose. This is done by looping mousing wire or a nylon zip tie through the hole in the pin and around the shackle body. For pins that have a cross-hole in the threaded end a cotter pin can be used. One disadvantage of wire is that mousing can introduce galvanic corrosion because of material differences; it is especially bad when used in places where the shackle is exposed to air and water. Nylon is not recommended for use where significant movement of the shackle is expected.
Twist shackle
A twist shackle is usually somewhat longer than average, and features a 90° twist so the top of the loop is perpendicular to the pin. Uses for this shackle include attaching the jib halyard block to the mast, or the jib halyard to the sail, to reduce twist on the luff and allow the sail to set better.
Soft shackle
Modern strong fibers such as PBO (IUPAC name: poly(p-phenylene-2,6-benzobisoxazole), aramids (Kevlar, Technora, Twaron), Vectran, carbon fibers, ultra-high-molecular-weight polyethylene (UHMWPE, Dyneema, Spectra) and other synthetic fibers are used to make extra strong ropes which can also be tied into lockable loops called soft shackles.
According to sailmagazine.com, "a soft shackle can handle just about every function performed by a metal shackle, in many cases better. Soft shackles articulate better, don't rattle around when not under load, don't chew up toe rails or beat up masts and decks, don't hurt when they whack you on the head, are easier to undo and don't have pins that fall overboard at a critical moment". A modern rope can lift loads as heavy as those borne by a steel wire three times as thick and far heavier. Metal shackles may still be preferred because soft shackles can be cut by sharp edges, burned, or deteriorate in some environments.
One disadvantage of soft shackles made of Dyneema and other modern fibers is their susceptibility to weakening by heat, including heat generated by friction. It is advisable to subject a soft shackle to medium-level loads for a while to remove any slack that might otherwise cause frictional heating when it is loaded to its maximum.
The stopping knot of a soft shackle may be a true lover's knot, a diamond knot, a double-line Celtic button knot, or a Chinese button knot (ABOK #600, ABOK #601, or ABOK #603 doubled as one tail end reverse-trails the other, with the emerging tail ends preferably buried in the opposing main part as it emerges from the knot, for Chinese finger trap attachment). Another preferred stopping knot is a two-rope combined wall + crown + wall + crown knot (a two-string rose knot). The latter knots, with the added thickness of the returning tail ends, have the advantage of less sharp curvature at the weakest spot, the eye around the neck of the button.
References
Further reading
Edwards, Fred (1988). Sailing as a Second Language. Camden, ME: International Marine Publishing.
Hiscock, Eric C. (1965). Cruising Under Sail. Oxford University Press.
Marino, Emiliano (1994). The Sailmaker's Apprentice: A Guide for the Self-Reliant Sailor. Camden, ME: International Marine Publishing.
Sailing rigs and rigging
Fasteners
Locksmithing | Shackle | Engineering | 1,523 |
13,409,455 | https://en.wikipedia.org/wiki/Clarkson%27s%20inequalities | In mathematics, Clarkson's inequalities, named after James A. Clarkson, are results in the theory of Lp spaces. They give bounds for the Lp-norms of the sum and difference of two measurable functions in Lp in terms of the Lp-norms of those functions individually.
Statement of the inequalities
Let (X, Σ, μ) be a measure space; let f, g : X → R be measurable functions in Lp. Then, for 2 ≤ p < +∞,

$$\left\|\frac{f+g}{2}\right\|_{L^p}^p + \left\|\frac{f-g}{2}\right\|_{L^p}^p \le \frac{1}{2}\|f\|_{L^p}^p + \frac{1}{2}\|g\|_{L^p}^p.$$

For 1 < p < 2,

$$\left\|\frac{f+g}{2}\right\|_{L^p}^q + \left\|\frac{f-g}{2}\right\|_{L^p}^q \le \left(\frac{1}{2}\|f\|_{L^p}^p + \frac{1}{2}\|g\|_{L^p}^p\right)^{\frac{q}{p}},$$

where

$$\frac{1}{p} + \frac{1}{q} = 1,$$

i.e., q = p ⁄ (p − 1).
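As a quick numerical sanity check, the first inequality can be tested on Rⁿ with counting measure, where the Lp norm reduces to the ordinary p-norm. The NumPy sketch below is illustrative only; the vectors and the choice p = 3 are arbitrary.

```python
# Numerical sanity check of the first Clarkson inequality (2 <= p < infinity)
# on R^n with counting measure, where ||.||_p is the ordinary p-norm.
import numpy as np

def lp_norm(v, p):
    return (np.abs(v) ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(0)
f, g = rng.normal(size=1000), rng.normal(size=1000)
p = 3.0  # any exponent with p >= 2

lhs = lp_norm((f + g) / 2, p) ** p + lp_norm((f - g) / 2, p) ** p
rhs = 0.5 * lp_norm(f, p) ** p + 0.5 * lp_norm(g, p) ** p
assert lhs <= rhs + 1e-9, (lhs, rhs)
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
```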
References
External links
Banach spaces
Inequalities
Measure theory
Lp spaces | Clarkson's inequalities | Mathematics | 146 |
39,919,360 | https://en.wikipedia.org/wiki/Technosignature | Technosignature or technomarker is any measurable property or effect that provides scientific evidence of past or present technology. Technosignatures are analogous to biosignatures, which signal the presence of life, whether intelligent or not. Some authors prefer to exclude radio transmissions from the definition, but such restrictive usage is not widespread. Jill Tarter has proposed that the search for extraterrestrial intelligence (SETI) be renamed "the search for technosignatures". Various types of technosignatures, such as radiation leakage from megascale astroengineering installations such as Dyson spheres, the light from an extraterrestrial ecumenopolis, or Shkadov thrusters with the power to alter the orbits of stars around the Galactic Center, may be detectable with hypertelescopes. Some examples of technosignatures are described in Paul Davies's 2010 book The Eerie Silence, although the terms "technosignature" and "technomarker" do not appear in the book.
In February 2023, astronomers reported, after scanning 820 stars, the detection of 8 possible technosignatures for follow-up studies.
Astroengineering projects
A Dyson sphere, constructed by life forms dwelling in proximity to a Sun-like star, would cause an increase in the amount of infrared radiation in the star system's emitted spectrum. Hence, Freeman Dyson selected the title "Search for Artificial Stellar Sources of Infrared Radiation" for his 1960 paper on the subject. SETI has adopted these assumptions in its search, looking for such "infrared heavy" spectra from solar analogs. Since 2005, Fermilab has conducted an ongoing survey for such spectra, analyzing data from the Infrared Astronomical Satellite.
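The infrared excess follows from simple blackbody physics: a shell intercepting its star's luminosity must re-radiate the same power as waste heat at the shell's own, much lower, temperature. A back-of-envelope sketch, assuming an illustrative Earth-like shell temperature of about 300 K:

```python
# Back-of-envelope waste-heat signature of a Dyson shell (illustrative numbers).
WIEN_B = 2.898e-3   # Wien's displacement constant, metre-kelvins
T_SHELL = 300.0     # assumed shell temperature (an Earth-like radiator)

peak = WIEN_B / T_SHELL  # Wien's law: wavelength of peak emission, in metres
print(f"Re-emission peaks near {peak * 1e6:.1f} micrometres (mid-infrared),")
print("far from the ~0.5 micrometre visible-light peak of a Sun-like star.")
```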
Identifying one of the many infra-red sources as a Dyson sphere would require improved techniques for discriminating between a Dyson sphere and natural sources. Fermilab discovered 17 "ambiguous" candidates, of which four have been named "amusing but still questionable". Other searches also resulted in several candidates, which remain unconfirmed. In October 2012, astronomer Geoff Marcy, one of the pioneers of the search for extrasolar planets, was given a research grant to search data from the Kepler telescope, with the aim of detecting possible signs of Dyson spheres.
Orbital paths, transit signatures, stellar activity and star-system composition
Shkadov thrusters, with the hypothetical ability to change the orbital paths of stars in order to avoid various dangers to life such as cold molecular clouds or cometary impacts, would also be detectable in a fashion similar to the transiting extrasolar planets searched for by Kepler. Unlike planets, though, the thrusters would appear to abruptly stop over the surface of a star rather than crossing it completely, revealing their technological origin. In addition, evidence of targeted extrasolar asteroid mining may also reveal extraterrestrial intelligence (ETI). Furthermore, it has been suggested that information could be hidden within the transit signatures of other planets. Advanced civilizations could "cloak their presence, or deliberately broadcast it, through controlled laser emission". Other characteristics proposed as potential technosignatures (or starting points for detection of clearer signatures) include peculiar orbital periods, such as planets arranged in prime-number patterns. Coronal and chromospheric activity on stars might also be altered. Extraterrestrial civilizations may use free-floating (rogue) planets for interstellar transportation, with a number of proposed possible technosignatures.
Communication networks
A study suggests that if ETs exist, they may have established communications networks and may already have probes in the Solar System whose communications may be detectable. Studies by John Gertz suggest that flyby (scout) probes might intermittently surveil nascent solar systems, while permanent probes would communicate with a home base, potentially using triggers and conditions such as the detection of electromagnetic leakage or biosignatures. Gertz also suggests several strategies for detecting local ET probes, such as detecting emitted optical messages, and finds that, due to interstellar networks of communications nodes, the search for deliberate interstellar signals – as is common in SETI – may be futile. The architecture may consist of nodes separated by sub-light-year distances and strung out between neighboring stars. It may also contain pulsars as beacons, or nodes whose beams are modulated by mechanisms that could be searched for. Moreover, a study suggests prior searches would not have detected cost-effective electromagnetic signal beacons.
Planetary analysis
Artificial heat and light
Various astronomers, including Avi Loeb of the Harvard-Smithsonian Center for Astrophysics, Edwin L. Turner of Princeton University, and Thomas Beatty of the University of Wisconsin, have proposed that artificial light from extraterrestrial planets, such as that originating from cities, industries, and transport networks, could be detected and signal the presence of an advanced civilization.
Light and heat detected from planets must be distinguished from natural sources to conclusively prove the existence of intelligent life on a planet. For example, NASA's 2012 Black Marble experiment showed that significant stable light and heat sources on Earth, such as chronic wildfires in arid Western Australia, originate from uninhabited areas and are naturally occurring.
Spectroscopic observations of exoplanet nightsides would be able to identify artificial lighting via its distinct spectroscopic signature. Work by astronomer Thomas Beatty has shown that the spectrally concentrated emission from sodium street lights would be distinguishable from natural sources using proposed next-generation space telescopes. The proposed LUVOIR A telescope may be able to detect city lights twelve times as extensive as Earth's on Proxima b in 300 hours of observation.
Atmospheric analysis
Atmospheric analysis of planetary atmospheres, as is already done on various Solar System bodies and in a rudimentary fashion on several hot Jupiter extrasolar planets, may reveal the presence of chemicals produced by technological civilizations. For example, atmospheric emissions from human technology use on Earth, including nitrogen dioxide and chlorofluorocarbons, are detectable from space. Artificial air pollution may therefore be detectable on extrasolar planets and on Earth via "atmospheric SETI" – including NO2 pollution levels and with telescopic technology close to today. Such technosignatures may consist not of the detection of the level of one specific chemical but simultaneous detections of levels of multiple specific chemicals in atmospheres.
However, there remains a possibility of mis-detection; for example, the atmosphere of Titan has detectable signatures of complex chemicals that are similar to what on Earth are industrial pollutants, though not the byproduct of civilisation. Some SETI scientists have proposed searching for artificial atmospheres created by planetary engineering to produce habitable environments for colonisation by an ETI.
Extraterrestrial artifacts, influence and spacecraft
Spacecraft
Interstellar spacecraft may be detectable from hundreds to thousands of light-years away through various forms of radiation, such as the photons emitted by an antimatter rocket or cyclotron radiation from the interaction of a magnetic sail with the interstellar medium. Such a signal would be easily distinguishable from a natural signal and could hence firmly establish the existence of extraterrestrial life, were it to be detected. In addition, smaller Bracewell probes within the Solar System itself may also be detectable by means of optical or radio searches. Self-replicating spacecraft or their communications networks could potentially be detectable within our Solar system or in nearby star-based systems, if they are located there. Such technologies or their footprints could be in Earth's orbit, on the Moon or on the Earth.
Satellites
A less advanced technology, and one closer to humanity's current technological level, is the Clarke Exobelt proposed by Astrophysicist Hector Socas-Navarro of the Instituto de Astrofisica de Canarias. This hypothetical belt would be formed by all the artificial satellites occupying geostationary/geosynchronous orbits around an exoplanet. From early simulations it appeared that a very dense satellite belt, requiring only a moderately more-advanced civilization than ours, would be detectable with existing technology in the light curves from transiting exoplanets, but subsequent analysis has questioned this result, suggesting that exobelts detectable by current and upcoming missions will be very rare.
Extraterrestrial influence or activity on Earth
It has been suggested that once extraterrestrials arrive "at a new home, such life will almost certainly create technosignatures (because it used technology to get there), and some fraction of them may also eventually give rise to a new biosphere". Microorganism DNA may have been used for self-replicating messages. See also: DNA digital data storage
On exoplanets
Low- or high-albedo installations such as solar panels may also be detectable, although the difficulty of distinguishing artificial megastructures from high- and low-albedo natural surfaces (e.g., bright ice caps) may make this unfeasible.
Scientific projects searching for technosignatures
One of the first attempts to search for Dyson Spheres was made by Vyacheslav Slysh from the Russian Space Research Institute in Moscow in 1985 using data from the Infrared Astronomical Satellite (IRAS).
Another search for technosignatures involved an analysis of data from the Compton Gamma Ray Observatory for traces of antimatter, which, besides one "intriguing spectrum probably not related to SETI", came up empty.
Fermilab's ongoing survey of IRAS data, begun in 2005 and described above, discovered 17 "ambiguous" candidates, four of which have been called "amusing but still questionable"; other searches have likewise produced several candidates that remain unconfirmed.
In a 2005 paper, Luc Arnold proposed a means of detecting planetary-sized artifacts from their distinctive transit light-curve signatures. He showed that such a technosignature was within the reach of space missions aimed at detecting exoplanets by the transit method, such as the then-current CoRoT and Kepler projects. The principle of the detection remains applicable for future exoplanet missions.
In 2012, a trio of astronomers led by Jason Wright started a two-year search for Dyson Spheres, aided by grants from the Templeton Foundation.
In 2013, Geoff Marcy received funding to use data from the Kepler Telescope to search for Dyson Spheres and interstellar communication using lasers, and Lucianne Walkowicz received funding to detect artificial signatures in stellar photometry.
Starting in 2016, astronomer Jean-Luc Margot of UCLA has been searching for technosignatures with large radio telescopes.
Vanishing stars
In 2016, it was proposed that vanishing stars are a plausible technosignature. A pilot project searching for vanishing stars was carried out, finding one candidate object. In 2019, the Vanishing & Appearing Sources during a Century of Observations (VASCO) project began more general searches for vanishing and appearing stars and other astrophysical transients. The project identified 100 red transients of "most likely natural origin" while analyzing 15% of the image data. In 2020, the VASCO collaboration started a citizen science project, vetting images of many thousands of candidate objects. The citizen science project is carried out in close collaboration with schools and amateur associations, mainly in African countries. The VASCO project has been referred to as "Perhaps the most general artefact search to date". In 2021, VASCO's principal investigator Beatriz Villarroel received a L'Oréal-UNESCO prize in Sweden for the project. In June 2021, the collaboration published the discovery of nine light sources seemingly appearing and vanishing simultaneously on archival plates taken in 1950. Villarroel's team also found three 16th-magnitude stars which had vanished on plates exposed within one hour of each other on 19 July 1952.
Organization of novel projects
In June 2020, NASA awarded its first SETI-specific grant in three decades. The grant funds the first NASA-funded search for technosignatures from advanced extraterrestrial civilizations other than radio waves, including the creation and population of an online technosignature library. A 2021 scientific review produced by the NASA-sponsored online workshop TechnoClimes 2020 classified possible optimal mission concepts for the search for technosignatures. It evaluates signatures based on a metric of the distance between contemporary human technology and the capability required to produce each signature, along with associated methods of detection and the ancillary benefits their search would bring to other areas of astronomy. The study's conclusions include a robust rationale for organizing missions to search for artifacts – including probes – within the Solar System.
In 2021, astronomers proposed a sequence of "verification checks for narrowband technosignature signals" after concluding that technosignature candidate BLC1 could be the result of a form of local radiofrequency interference.
It has been suggested that observatories on the Moon could be more successful. In 2022, scientists provided an overview of the capabilities of ongoing, recent, past, planned and proposed missions and observatories for detecting various alien technosignatures.
Implications of detection
Steven J. Dick states that there are generally no established principles for dealing with successful SETI detections. Detections of technosignatures may have ethical implications, such as raising questions of astroethics and machine ethics (for example, concerning the ethical values applied by detected machines), or may include information about alien societies, histories, or fates; the implications may vary depending on the type, prevalence, and form of the detected signature's technology. Moreover, how information about detected technosignatures is distributed or disseminated may have varying implications that also depend on time and context.
See also
Laser SETI
UFO Report (U.S. Intelligence)
Further reading
References
Astrobiology
Technology | Technosignature | Astronomy,Biology | 2,850 |
8,591,368 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Camelopardalis | This is the list of notable stars in the constellation Camelopardalis, sorted by decreasing brightness.
See also
List of stars by constellation
References
Sources
List
Camelopardalis | List of stars in Camelopardalis | Astronomy | 36 |
33,965,855 | https://en.wikipedia.org/wiki/Infoveillance | Infoveillance is a type of syndromic surveillance that specifically utilizes information found online. The term, along with the term infodemiology, was coined by Gunther Eysenbach to describe research that uses online information to gather information about human behavior.
Eysenbach's work using Google Search queries led to the birth of Google Flu Trends, and other search engines have also been used. Other researchers have utilized social media sites such as Twitter to observe disease outbreak patterns. Infoveillance can detect disease outbreaks faster than traditional public health surveillance systems with minimal costs involved.
Types
Infoveillance methods may be either passive or active. Traditional infoveillance data like search engine queries and website navigation behavior are considered passive, as they attempt to recognize trends automatically, without action (or often even awareness) on the part of the internet users who are generating the data for analysis. Active infoveillance occurs when users choose to respond to a survey, enter symptoms into a website or app, or otherwise participate directly in surveillance efforts by contributing additional information.
Examples
Google Health Trends
Beginning in 2008, Google used aggregated search query data to detect influenza trends and compared the results to countries' official surveillance data with the goal of predicting the spread of the flu. In light of evidence that emerged in 2013 showing that Google Flu Trends sometimes substantially overestimated actual flu rates, researchers proposed a series of more advanced and better-performing approaches to flu modeling from Google search queries. Google Flu Trends stopped publishing reports in 2015.
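Ginsberg et al.'s original Flu Trends method fit a simple linear model between the log-odds of the influenza-like-illness (ILI) visit rate and the log-odds of the flu-related query fraction. The sketch below illustrates that fit on synthetic data, since real query logs are not public; the coefficients and sample sizes are invented.

```python
# Sketch of the Flu Trends-style logit-linear model: logit(ILI) ~ logit(query share).
# Fit on synthetic data; real query logs are not publicly available.
import numpy as np

def logit(x):
    return np.log(x / (1.0 - x))

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
query_share = rng.uniform(0.001, 0.01, size=52)  # weekly flu-query fraction
ili_rate = inv_logit(0.9 * logit(query_share) + 0.5 + rng.normal(0, 0.1, 52))

beta1, beta0 = np.polyfit(logit(query_share), logit(ili_rate), 1)
predicted = inv_logit(beta0 + beta1 * logit(query_share))
print(f"fitted: logit(ILI) = {beta0:.2f} + {beta1:.2f} * logit(query share)")
print(f"mean absolute error: {np.abs(predicted - ili_rate).mean():.4f}")
```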
Google also used aggregated search query data to detect dengue fever trends. Research has also cast doubt on the accuracy of some of these predictions. Google has continued this work to track and predict the COVID-19 pandemic, creating an open dataset on COVID-related search queries for use by researchers.
Flu Detector
Other flu prediction projects, including Flu Detector, have come and gone since the advent and removal of Google Flu Trends. Flu Detector was developed by Vasileios Lampos and other researchers at the University of Bristol. It was an application of machine learning that first used feature selection to automatically extract flu-related terms from Twitter content and then used those terms to compute a flu-score for several UK regions based on geolocated tweets. It also formed the basis for a proposed generalized scheme able to track other events.
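The flu-score idea can be reduced to a toy version: given a set of learned flu-related terms with weights, score a region's tweets by their average weighted term frequency. The terms, weights, and tweets below are invented for illustration; Flu Detector learned its features automatically from data.

```python
# Toy regional flu-score: average weighted frequency of flu-related terms per
# tweet. Terms, weights, and tweets are invented for illustration only.
flu_terms = {"flu": 1.0, "fever": 0.7, "cough": 0.5, "sore throat": 0.6}

def flu_score(tweets):
    total = sum(weight
                for tweet in tweets
                for term, weight in flu_terms.items()
                if term in tweet.lower())
    return total / max(len(tweets), 1)

region_tweets = ["got the flu, fever all night", "lovely day out", "coughing again"]
print(f"regional flu score: {flu_score(region_tweets):.3f}")
```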
Mood of the Nation
Mood of the Nation was also developed by Lampos' team. It performed mood analysis on tweets geolocated in various regions of the United Kingdom, computing daily scores for four types of emotion: anger, fear, joy, and sadness.
Privacy issues
The rise of infoveillance brings up questions about privacy. Privacy concerns are partially dependent on the level of analysis and how data are collected and managed. For instance, individuals may be re-identifiable from search query datasets that have not been properly de-identified. Privacy concerns are increased if data analysis is not done automatically and if search trajectories of individual users are examined.
See also
Participatory surveillance
Infodemiology
References
External links
"Google Flu Trend"
"Google Dengue Trend"
"Flu Detector"
"Health informatics"
"JMIR e-collection of peer-reviewed articles on Infodemiology and Infoveillance"
"JMIR Public Health & Surveillance e-collection of peer-reviewed articles on Infoveillance, Infodemiology and Digital Disease Surveillance"
Internet culture
Internet Society people
Epidemiology | Infoveillance | Environmental_science | 716 |
51,117,614 | https://en.wikipedia.org/wiki/Cortinarius%20thiersii | Cortinarius thiersii is a basidiomycete fungus of the genus Cortinarius native to North America.
See also
List of Cortinarius species
References
External links
Cortinarius
Fungi of North America
Fungi described in 1977
Taxa named by Alexander H. Smith
Fungus species | Cortinarius thiersii | Biology | 63 |
337,864 | https://en.wikipedia.org/wiki/Fuzzball%20router | Fuzzball routers were the first modern routers on the Internet. They were DEC PDP-11 computers (usually LSI-11 personal workstations) loaded with the Fuzzball software written by David L. Mills (of the University of Delaware). The name "Fuzzball" was the colloquialism for Mills's routing software. The software evolved from the Distributed Computer Network (DCN) that started at the University of Maryland in 1973. It acquired the nickname sometime after it was rewritten in 1977.
Six Fuzzball routers provided the routing backbone of the first 56 kbit/s NSFNET, allowing the testing of many of the Internet's first protocols. They enabled the development of the first TCP/IP routing protocols and the Network Time Protocol, and were the first routers to implement key refinements to TCP/IP such as variable-length subnet masks.
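Variable-length subnet masks allow routes with prefixes of differing lengths to coexist, with forwarding choosing the most specific match. Below is a minimal sketch of longest-prefix-match lookup using Python's standard ipaddress module; the routing-table entries are invented for illustration.

```python
# Longest-prefix-match forwarding over variable-length prefixes
# (routing-table entries are invented for illustration).
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "interface-A",
    ipaddress.ip_network("10.1.0.0/16"): "interface-B",
    ipaddress.ip_network("10.1.2.0/24"): "interface-C",
}

def next_hop(addr: str) -> str:
    matches = [net for net in routing_table if ipaddress.ip_address(addr) in net]
    return routing_table[max(matches, key=lambda net: net.prefixlen)]  # most specific wins

print(next_hop("10.1.2.3"))  # interface-C: the /24 beats the /16 and /8
print(next_hop("10.9.9.9"))  # interface-A: only the /8 matches
```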
See also
Interface Message Processor
References
External links
The Fuzzball, with photographs
Fuzzball source code, last update in 1992, 16 megabytes
American inventions
Hardware routers
History of telecommunications | Fuzzball router | Technology | 232 |
220,455 | https://en.wikipedia.org/wiki/Monoamine%20neurotransmitter | Monoamine neurotransmitters are neurotransmitters and neuromodulators that contain one amino group connected to an aromatic ring by a two-carbon chain (such as -CH2-CH2-). Examples are dopamine, norepinephrine and serotonin.
All monoamines are derived from aromatic amino acids like phenylalanine, tyrosine, and tryptophan by the action of aromatic amino acid decarboxylase enzymes. They are deactivated in the body by the enzymes known as monoamine oxidases which clip off the amine group.
Monoaminergic systems, i.e., the networks of neurons that use monoamine neurotransmitters, are involved in the regulation of processes such as emotion, arousal, and certain types of memory. It has also been found that monoamine neurotransmitters play an important role in the secretion and production of neurotrophin-3 by astrocytes, a chemical which maintains neuron integrity and provides neurons with trophic support.
Drugs used to increase or reduce the effect of monoamine neurotransmitters are used to treat patients with psychiatric and neurological disorders, including depression, anxiety, schizophrenia and Parkinson's disease.
Examples
Classical monoamines
Imidazoleamines:
Histamine
Catecholamines:
Adrenaline (Ad; Epinephrine, Epi)
Dopamine (DA)
Noradrenaline (NAd; Norepinephrine, NE)
Indolamines:
Serotonin (5-HT)
Melatonin (MT)
Trace amines
Specific transporter proteins called monoamine transporters move monoamines into or out of cells. These are the dopamine transporter (DAT), serotonin transporter (SERT), and norepinephrine transporter (NET) in the outer cell membrane, and the vesicular monoamine transporters (VMAT1 and VMAT2) in the membrane of intracellular vesicles.
After release into the synaptic cleft, monoamine neurotransmitter action is ended by reuptake into the presynaptic terminal. There, they can be repackaged into synaptic vesicles or degraded by the enzyme monoamine oxidase (MAO), which is a target of monoamine oxidase inhibitors, a class of antidepressants.
Evolution
Monoamine neurotransmitter systems occur in virtually all vertebrates, where the evolvability of these systems has served to promote the adaptability of vertebrate species to different environments.
A recent computational investigation of genetic origins indicates that the earliest monoamine systems emerged around 650 million years ago, and that the appearance of these chemicals, which are necessary for active or participatory engagement with the environment, coincides with the emergence of the bilaterally symmetric body plan around the time of the Cambrian Explosion.
See also
Monoamine reuptake inhibitor
Monoamine receptor
Monoamine oxidase
Monoamine transporter
Monoamine Hypothesis
Biogenic amine
Trace amine
Monoamine nuclei
Biology of depression
References
External links
Neurotransmitters
TAAR1 agonists
Amphetamine | Monoamine neurotransmitter | Chemistry | 705 |
246,891 | https://en.wikipedia.org/wiki/Y%20chromosome | The Y chromosome is one of two sex chromosomes in therian mammals and other organisms. Along with the X chromosome, it is part of the XY sex-determination system, in which the Y is the sex-determining chromosome because the presence of the Y chromosome causes offspring produced in sexual reproduction to be of male sex. In mammals, the Y chromosome contains the SRY gene, which triggers development of male gonads. The Y chromosome is passed only from male parents to male offspring.
Overview
Discovery
The Y chromosome was identified as a sex-determining chromosome by Nettie Stevens at Bryn Mawr College in 1905 during a study of the mealworm Tenebrio molitor. Edmund Beecher Wilson independently discovered the same mechanisms the same year, working with Hemiptera. Stevens proposed that chromosomes always existed in pairs and that the smaller chromosome (now labelled "Y") was the pair of the X chromosome discovered in 1890 by Hermann Henking. She realized that the previous idea of Clarence Erwin McClung, that the X chromosome determines sex, was wrong and that sex determination is, in fact, due to the presence or absence of the Y chromosome. In the early 1920s, Theophilus Painter determined that X and Y chromosomes determined sex in humans (and other mammals).
The chromosome was given the name "Y" simply to follow on from Henking's "X" alphabetically. The idea that the Y chromosome was named after its similarity in appearance to the letter "Y" is mistaken. All chromosomes normally appear as an amorphous blob under the microscope and only take on a well-defined shape during mitosis. This shape is vaguely X-shaped for all chromosomes. It is entirely coincidental that the Y chromosome, during mitosis, has two very short branches which can look merged under the microscope and appear as the descender of a Y-shape.
Variations
Most therian mammals have only one pair of sex chromosomes in each cell. Males have one Y chromosome and one X chromosome, while females have two X chromosomes. In mammals, the Y chromosome contains a gene, SRY, which triggers embryonic development as a male. The Y chromosomes of humans and other mammals also contain other genes needed for normal sperm production.
There are exceptions, however. Among humans, some males are born with two Xs and a Y ("XXY", see Klinefelter syndrome) or with one X and two Ys (see XYY syndrome). Some females have three Xs (trisomy X), and some have a single X instead of two ("X0", see Turner syndrome). There are other variations in which, during embryonic development, the WNT4 gene is activated and/or the SRY gene is damaged, leading to the birth of an XY female (Swyer syndrome). A Y chromosome may also be present but fail to result in the development of a male phenotype in individuals with androgen insensitivity syndrome, instead resulting in a female or ambiguous phenotype. In other cases, the SRY gene is copied to the X, leading to the birth of an XX male.
Origins and evolution
Before Y chromosome
Many ectothermic vertebrates have no sex chromosomes. If these species have different sexes, sex is determined environmentally rather than genetically. For some species, especially reptiles, sex depends on the incubation temperature. Some vertebrates are hermaphrodites, though hermaphroditic species are most commonly sequential, meaning the organism switches sex, producing male or female gametes at different points in its life, but never producing both at the same time. This is opposed to simultaneous hermaphroditism, where the same organism produces male and female gametes at the same time. Most simultaneous hermaphrodite species are invertebrates, and among vertebrates, simultaneous hermaphroditism has only been discovered in a few orders of fish.
Origin
The X and Y chromosomes are thought to have evolved from a pair of identical chromosomes, termed autosomes, when an ancestral animal developed an allelic variation (a so-called "sex locus") and simply possessing this allele caused the organism to be male. The chromosome with this allele became the Y chromosome, while the other member of the pair became the X chromosome. Over time, genes that were beneficial for males and harmful to (or had no effect on) females either developed on the Y chromosome or were acquired by the Y chromosome through the process of translocation.
Until recently, the X and Y chromosomes in mammals were thought to have diverged around 300 million years ago. However, research published in 2008 analyzing the platypus genome suggested that the XY sex-determination system would not have been present more than 166 million years ago, when monotremes split from other mammals. This re-estimation of the age of the therian XY system is based on the finding that sequences that are on the X chromosomes of marsupials and eutherian mammals are not present on the autosomes of platypus and birds. The older estimate was based on erroneous reports that the platypus X chromosomes contained these sequences.
Recombination inhibition
Most chromosomes recombine during meiosis. However, in males, the X and Y pair in a shared region known as the pseudoautosomal region (PAR). The PAR undergoes frequent recombination between the X and Y chromosomes, but recombination is suppressed in other regions of the Y chromosome. These regions contain sex-determining and other male-specific genes. Without this suppression, these genes could be lost from the Y chromosome from recombination and cause issues such as infertility.
The lack of recombination across the majority of the Y chromosome makes it a useful tool in studying human evolution, since recombination complicates the mathematical models used to trace ancestries.
Degeneration
By one estimate, the human Y chromosome has lost 1,393 of its 1,438 original genes over the course of its existence, and linear extrapolation of this 1,393-gene loss over 300 million years gives a rate of genetic loss of 4.6 genes per million years. Continued loss at this rate would leave the Y chromosome with no functional genes – the remaining 45 genes, divided by 4.6 genes per million years, would be exhausted within roughly the next 10 million years – or half that time with the current age estimate of 160 million years. Comparative genomic analysis reveals that many mammalian species are experiencing a similar loss of function in their heterozygous sex chromosome. Degeneration may simply be the fate of all non-recombining sex chromosomes, due to three common evolutionary forces: high mutation rate, inefficient selection, and genetic drift.
With a 30% difference between humans and chimpanzees, the Y chromosome is one of the fastest-evolving parts of the human genome. However, these changes have been limited to non-coding sequences, and comparisons of the human and chimpanzee Y chromosomes (first published in 2005) show that the human Y chromosome has not lost any genes since the divergence of humans and chimpanzees 6–7 million years ago. Additionally, a scientific report in 2012 stated that only one gene had been lost since humans diverged from the rhesus macaque 25 million years ago. These facts provide direct evidence that the linear extrapolation model is flawed and suggest that the current human Y chromosome is either no longer shrinking or is shrinking at a much slower rate than the 4.6 genes per million years estimated by the linear extrapolation model.
High mutation rate
The human Y chromosome is particularly exposed to high mutation rates due to the environment in which it is housed. The Y chromosome is passed exclusively through sperm, which undergo multiple cell divisions during gametogenesis. Each cellular division provides a further opportunity to accumulate base pair mutations. Additionally, sperm are stored in the highly oxidative environment of the testis, which encourages further mutation. These two conditions combined put the Y chromosome at a greater risk of mutation than the rest of the genome. Graves reports this increased mutation risk as a factor of 4.8. However, her original reference obtains this number for the relative mutation rates in male and female germ lines for the lineage leading to humans.
The observation that the Y chromosome experiences little meiotic recombination and has an accelerated rate of mutation and degradative change compared to the rest of the genome suggests an evolutionary explanation for the adaptive function of meiosis with respect to the main body of genetic information. Brandeis proposed that the basic function of meiosis (particularly meiotic recombination) is the conservation of the integrity of the genome, a proposal consistent with the idea that meiosis is an adaptation for repairing DNA damage.
Inefficient selection
Without the ability to recombine during meiosis, the Y chromosome is unable to expose individual alleles to natural selection. Deleterious alleles are allowed to "hitchhike" with beneficial neighbors, thus propagating maladapted alleles into the next generation. Conversely, advantageous alleles may be selected against if they are surrounded by harmful alleles (background selection). Due to this inability to sort through its gene content, the Y chromosome is particularly prone to the accumulation of "junk" DNA. Massive accumulations of retrotransposable elements are scattered throughout the Y. The random insertion of DNA segments often disrupts encoded gene sequences and renders them nonfunctional. However, the Y chromosome has no way of weeding out these "jumping genes". Without the ability to isolate alleles, selection cannot effectively act upon them.
A clear, quantitative indication of this inefficiency is the entropy rate of the Y chromosome. Whereas all other chromosomes in the human genome have entropy rates of 1.5–1.9 bits per nucleotide (compared to the theoretical maximum of exactly 2 for no redundancy), the Y chromosome's entropy rate is only 0.84. From the definition of entropy rate, the Y chromosome has a much lower information content relative to its overall length, and is more redundant.
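To illustrate what such a figure measures, here is a minimal Python sketch of a block-entropy estimate in bits per nucleotide; the toy sequences are made up, and the published values come from far more careful estimators applied to real chromosome sequence:

from collections import Counter
from math import log2

def block_entropy_per_base(seq, k=3):
    # Shannon entropy of overlapping k-mers, divided by k, in bits per base.
    # A crude stand-in for an entropy-rate estimate: repetitive sequence scores low.
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    n = len(kmers)
    return -sum((c / n) * log2(c / n) for c in Counter(kmers).values()) / k

varied = "ACGTTGCAAGTCCGATACGGTTAGCCATGA" * 10   # toy low-redundancy sequence
repetitive = "ACG" * 100                         # toy high-redundancy sequence
print(block_entropy_per_base(varied))       # noticeably higher
print(block_entropy_per_base(repetitive))   # much lower, i.e. more redundant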
Genetic drift
Even if a well adapted Y chromosome manages to maintain genetic activity by avoiding mutation accumulation, there is no guarantee it will be passed down to the next generation. The population size of the Y chromosome is inherently limited to 1/4 that of autosomes: diploid organisms carry two copies of each autosome, while only half the population (the males) carries a single Y chromosome. Thus, genetic drift is an exceptionally strong force acting upon the Y chromosome. Through sheer random assortment, an adult male may never pass on his Y chromosome if he has only female offspring. Thus, although a male may have a well adapted Y chromosome free of excessive mutation, it may never make it into the next gene pool. The repeated random loss of well-adapted Y chromosomes, coupled with the tendency of the Y chromosome to accumulate more deleterious mutations rather than fewer, for the reasons described above, contributes to the species-wide degeneration of Y chromosomes through Muller's ratchet.
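Two small calculations make these points concrete (standard results, assuming an equal sex ratio and sons and daughters equally likely):

\frac{N/2\ \text{Y copies}}{2N\ \text{autosomal copies}} = \frac{1}{4}, \qquad P(\text{no sons} \mid k\ \text{children}) = \left(\tfrac{1}{2}\right)^{k} = 12.5\%\ \text{for}\ k = 3

So in a population of N individuals the Y chromosome has a quarter of the autosomal copy number, and a male with three children has a one-in-eight chance of leaving no sons at all.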
Gene conversion
As has already been mentioned, the Y chromosome is unable to recombine during meiosis like the other human chromosomes; however, in 2003, researchers from MIT discovered a mechanism that may slow down this degradation.
They found that human Y chromosome is able to "recombine" with itself, using palindrome base pair sequences. Such a "recombination" is called gene conversion.
In the case of the Y chromosomes, the palindromes are not noncoding DNA; these strings of nucleotides contain functioning genes important for male fertility. Most of the sequence pairs are greater than 99.97% identical. The extensive use of gene conversion may play a role in the ability of the Y chromosome to edit out genetic mistakes and maintain the integrity of the relatively few genes it carries. In other words, since the Y chromosome is single, it has duplicates of its genes on itself instead of having a second, homologous, chromosome. When errors occur, it can use other parts of itself as a template to correct them.
Findings were confirmed by comparing similar regions of the Y chromosome in humans to the Y chromosomes of chimpanzees, bonobos and gorillas. The comparison demonstrated that the same phenomenon of gene conversion appeared to be at work more than 5 million years ago, when humans and the non-human primates diverged from each other.
Gene conversion tracts formed during meiosis are long, about 2,068 base pairs, and significantly biased towards the fixation of G or C nucleotides (GC biased). The recombination intermediates preceding gene conversion were found to rarely take the alternate route of crossover recombination. The Y-Y gene conversion rate in humans is about 1.52 × 10⁻⁵ conversions/base/year. These gene conversion events may reflect a basic function of meiosis, that of conserving the integrity of the genome.
Future evolution
According to some theories, in the terminal stages of the degeneration of the Y chromosome, other chromosomes increasingly take over genes and functions formerly associated with it, until finally, within the framework of this theory, the Y chromosome disappears entirely and a new sex-determining system arises.
Several species of rodent in the sister families Muridae and Cricetidae have reached a stage where the XY system has been modified, in the following ways:
The Transcaucasian mole vole, Ellobius lutescens, the Zaisan mole vole, Ellobius tancrei, and the Japanese spiny rats Tokudaia osimensis and Tokudaia tokunoshimensis have lost the Y chromosome and SRY entirely. Tokudaia spp. have relocated some other genes ancestrally present on the Y chromosome to the X chromosome. Both sexes of Tokudaia spp. and Ellobius lutescens have an XO genotype (Turner syndrome), whereas all Ellobius tancrei possess an XX genotype. The new sex-determining system(s) for these rodents remains unclear.
The wood lemming Myopus schisticolor, the Arctic lemming, Dicrostonyx torquatus, and multiple species in the grass mouse genus Akodon have evolved fertile females who possess the genotype generally coding for males, XY, in addition to the ancestral XX female, through a variety of modifications to the X and Y chromosomes.
In the creeping vole, Microtus oregoni, the females, with just one X chromosome each, produce X gametes only, and the males, XY, produce Y gametes, or gametes devoid of any sex chromosome, through nondisjunction.
Outside of the rodents, the black muntjac, Muntiacus crinifrons, evolved new X and Y chromosomes through fusions of the ancestral sex chromosomes and autosomes.
Modern data cast doubt on the hypothesis that the Y-chromosome will disappear. This conclusion was reached by scientists who studied the Y chromosomes of rhesus monkeys. When genomically comparing the Y chromosome of rhesus monkeys and humans, scientists found very few differences, given that humans and rhesus monkeys diverged 30 million years ago.
Outside of mammals, some organisms have lost the Y chromosome, such as most species of nematodes. However, for the complete elimination of the Y to occur, an alternative way of determining sex had to develop (for example, determining sex by the ratio of X chromosomes to autosomes), and any genes necessary for male function had to move to other chromosomes. In the meantime, modern data demonstrate the complex mechanisms of Y chromosome evolution and the fact that the disappearance of the Y chromosome is not guaranteed.
1:1 sex ratio
Fisher's principle outlines why almost all species using sexual reproduction have a sex ratio of 1:1. W. D. Hamilton gave the following basic explanation in his 1967 paper on "Extraordinary sex ratios", given the condition that males and females cost equal amounts to produce:
Suppose male births are less common than female.
A newborn male then has better mating prospects than a newborn female, and therefore can expect to have more offspring.
Therefore, parents genetically disposed to produce males tend to have more than average numbers of grandchildren born to them.
Therefore, the genes for male-producing tendencies spread, and male births become more common.
As the 1:1 sex ratio is approached, the advantage associated with producing males dies away.
The same reasoning holds if females are substituted for males throughout. Therefore, 1:1 is the equilibrium ratio.
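The equilibrium can be illustrated with a toy numerical model (a sketch with made-up numbers, not a population-genetic simulation). Assume, per Hamilton's condition, that sons and daughters cost the same, and note that a parent's expected number of grandchildren through sons scales with the matings available per male, (1 − m)/m, where m is the population's son-fraction:

def grandchild_share(p, m):
    # Relative number of grandchildren for a parent whose offspring are sons
    # with probability p, in a population whose overall son-fraction is m.
    # Each male obtains (1 - m) / m of the matings available per female.
    return p * (1 - m) / m + (1 - p)

m = 0.30  # suppose male births start out less common
for generation in range(40):
    male_biased = grandchild_share(m + 0.01, m)
    female_biased = grandchild_share(m - 0.01, m)
    if male_biased > female_biased + 1e-9:
        m = round(m + 0.01, 2)   # genes for male-producing tendencies spread
    elif female_biased > male_biased + 1e-9:
        m = round(m - 0.01, 2)   # genes for female-producing tendencies spread
print(m)  # 0.5: the advantage dies away at the 1:1 ratio

Starting from m = 0.30, the male-producing bias is favoured until m reaches 0.5, where both biases yield the same number of grandchildren and selection on the ratio vanishes.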
Non-therian Y chromosome
Many groups of organisms in addition to therian mammals have Y chromosomes, but these Y chromosomes do not share common ancestry with therian Y chromosomes. Such groups include monotremes, Drosophila, some other insects, some fish, some reptiles, and some plants. In Drosophila melanogaster, the Y chromosome does not trigger male development. Instead, sex is determined by the number of X chromosomes. The D. melanogaster Y chromosome does contain genes necessary for male fertility. So XXY D. melanogaster are female, and D. melanogaster with a single X (X0) are male but sterile. There are some species of Drosophila in which X0 males are both viable and fertile.
ZW chromosomes
Other organisms have mirror-image sex chromosomes: the homogametic sex is the male, with two Z chromosomes, and the female is the heterogametic sex, with a Z chromosome and a W chromosome. For example, the ZW sex-determination system is found in birds, snakes, and butterflies; the females have ZW sex chromosomes, and males have ZZ sex chromosomes.
Non-inverted Y chromosome
There are some species, such as the Japanese rice fish, in which the XY system is still developing and crossing over between the X and Y is still possible. Because the male-specific region is very small and contains no essential genes, it is even possible to artificially induce XX males and YY females with no ill effect.
Multiple XY pairs
Monotremes like platypuses possess four or five pairs of XY sex chromosomes, each pair consisting of sex chromosomes with homologous regions. The chromosomes of neighboring pairs are partially homologous, such that a chain is formed during meiosis. The first X chromosome in the chain is also partially homologous with the last Y chromosome, indicating that profound rearrangements, some adding new pieces from autosomes, have occurred in its history.
Platypus sex chromosomes have strong sequence similarity with the avian Z chromosome, indicating close homology, and the SRY gene so central to sex-determination in most other mammals is apparently not involved in platypus sex-determination.
Human Y chromosome
The human Y chromosome is composed of about 62 million base pairs of DNA, making it similar in size to chromosome 19 and representing almost 2% of the total DNA in a male cell. The human Y chromosome carries 693 genes, 107 of which are protein-coding. However, some genes are repeated, making the number of exclusive protein-coding genes just 42. The Consensus Coding Sequence (CCDS) Project only classifies 63 out of the 107 genes, though CCDS estimates are often considered lower bounds due to their conservative classification strategy. All single-copy Y-linked genes are hemizygous (present on only one chromosome) except in cases of aneuploidy such as XYY syndrome or XXYY syndrome. Traits that are inherited via the Y chromosome are called Y-linked traits, or holandric traits (from Ancient Greek ὅλος hólos, "whole" + ἀνδρός andrós, "male").
Sequence of the human Y chromosome
At the end of the Human Genome Project (and after many updates), almost half of the Y chromosome remained unsequenced, even in 2021; a Y chromosome from a different genome, HG002 (GM24385), was completely sequenced in January 2022 and is included in the new "complete genome" human reference sequence, CHM13. The complete sequence of this human Y chromosome was shown to contain 62,460,029 base pairs and 41 additional genes. This added 30 million base pairs, but it was also discovered that the Y chromosome can vary greatly in size between individuals, from 45.2 million to 84.9 million base pairs.
Since almost half of the human Y sequence was unknown before 2022, it could not be screened out as contamination in microbial sequencing projects. As a result, the NCBI RefSeq bacterial genome database mistakenly includes some Y chromosome data.
Structure
Cytogenetic band
Non-recombining region of Y (NRY)
The human Y chromosome is normally unable to recombine with the X chromosome, except for small pieces of pseudoautosomal regions (PARs) at the telomeres (which comprise about 5% of the chromosome's length). These regions are relics of ancient homology between the X and Y chromosomes. The bulk of the Y chromosome, which does not recombine, is called the "NRY", or non-recombining region of the Y chromosome. Single-nucleotide polymorphisms (SNPs) in this region are used to trace direct paternal ancestral lines.
More specifically, PAR1 is at 0.1–2.7 Mb and PAR2 is at 56.9–57.2 Mb; the non-recombining region (NRY), or male-specific region (MSY), sits between them. Their sizes are now known precisely from CHM13: 2.77 Mb and 329.5 kb. Until CHM13, the data in PAR1 and PAR2 were simply copied over from the X chromosome.
Sequence classes
Genes
Number of genes
The following are some of the gene count estimates of human Y chromosome. Because researchers use different approaches to genome annotation their predictions of the number of genes on each chromosome varies (for technical details, see gene prediction). Among various projects, CCDS takes an extremely conservative strategy. So CCDS's gene number prediction represents a lower bound on the total number of human protein-coding genes.
Gene list
In general, the human Y chromosome is extremely gene poor—it is one of the largest gene deserts in the human genome. Disregarding pseudoautosomal genes, the human Y chromosome encodes only a few dozen distinct proteins; its best-known gene is the sex-determining SRY.
Y-chromosome-linked diseases
Diseases linked to the Y chromosome typically involve an aneuploidy, an atypical number of chromosomes.
Loss of Y chromosome
Males can lose the Y chromosome in a subset of cells, known as mosaic loss. Mosaic loss is strongly associated with age, and smoking is another important risk factor for mosaic loss.
Mosaic loss may be related to health outcomes, indicating that the Y chromosome plays important roles outside of sex determination. Males with a higher percentage of hematopoietic stem cells lacking the Y chromosome have a higher risk of certain cancers and have a shorter life expectancy. In many cases, a cause and effect relationship between the Y chromosome and health outcomes has not been determined, and some propose loss of the Y chromosome could be a "neutral karyotype related to normal aging". However, a 2022 study showed that mosaic loss of the Y chromosome causally contributes to fibrosis, heart risks, and mortality.
Further studies are needed to understand how mosaic Y chromosome loss may contribute to other sex differences in health outcomes, such as why male smokers have between 1.5 and 2 times the risk of non-respiratory cancers compared with female smokers. Potential countermeasures identified so far include not smoking or stopping smoking; at least one potential drug that "may help counteract the harmful effects of the chromosome loss" is under investigation.
Y chromosome microdeletion
Y chromosome microdeletion (YCM) is a family of genetic disorders caused by missing genes in the Y chromosome. Many affected men exhibit no symptoms and lead normal lives. However, YCM is also known to be present in a significant number of men with reduced fertility or reduced sperm count.
Defective Y chromosome
A defective Y chromosome results in the person presenting a female phenotype (i.e., being born with female-like genitalia) even though that person possesses an XY karyotype. The lack of the second X results in infertility. In other words, viewed from the opposite direction, the person goes through defeminization but fails to complete masculinization.
The cause can be seen as an incomplete Y chromosome: the usual karyotype in these cases is 45X, plus a fragment of Y. This usually results in defective testicular development, such that the infant may or may not have fully formed male genitalia internally or externally. The full range of ambiguity of structure may occur, especially if mosaicism is present. When the Y fragment is minimal and nonfunctional, the child is usually a girl with the features of Turner syndrome or mixed gonadal dysgenesis.
XXY
Klinefelter syndrome (47, XXY) is not an aneuploidy of the Y chromosome, but a condition of having an extra X chromosome, which usually results in defective postnatal testicular function. The mechanism is not fully understood; it does not seem to be due to direct interference by the extra X with expression of Y genes.
XYY
47, XYY syndrome (simply known as XYY syndrome) is caused by the presence of a single extra copy of the Y chromosome in each of a male's cells. 47, XYY males have one X chromosome and two Y chromosomes, for a total of 47 chromosomes per cell. Researchers have found that an extra copy of the Y chromosome is associated with increased stature and an increased incidence of learning problems in some boys and men, but the effects are variable, often minimal, and the vast majority do not know their karyotype.
In 1965 and 1966 Patricia Jacobs and colleagues published a chromosome survey of 315 male patients at Scotland's only special security hospital for the developmentally disabled, finding a higher than expected number of patients to have an extra Y chromosome. The authors of this study wondered "whether an extra Y chromosome predisposes its carriers to unusually aggressive behaviour", and this conjecture "framed the next fifteen years of research on the human Y chromosome".
Through studies over the next decade, this conjecture was shown to be incorrect: the elevated crime rate of XYY males is due to lower median intelligence and not increased aggression, and increased height was the only characteristic that could be reliably associated with XYY males. The "criminal karyotype" concept is therefore inaccurate.
There are also XXXY syndrome and XXXXY syndrome.
Rare
The following Y-chromosome-linked diseases are rare, but notable because of their elucidation of the nature of the Y chromosome.
More than two Y chromosomes
Greater degrees of Y chromosome polysomy (having more than one extra copy of the Y chromosome in every cell, e.g., XYYY) are considerably rarer. The extra genetic material in these cases can lead to skeletal abnormalities, dental abnormalities, decreased IQ, delayed development, and respiratory issues, but the severity of these features is variable.
XX male syndrome
XX male syndrome occurs due to a genetic recombination in the formation of the male gametes, causing the SRY portion of the Y chromosome to move to the X chromosome. When such an X chromosome is present in a zygote, male gonads develop because of the SRY gene.
Genetic genealogy
In human genetic genealogy (the application of genetics to traditional genealogy), use of the information contained in the Y chromosome is of particular interest because, unlike other chromosomes, the Y chromosome is passed exclusively from father to son, on the patrilineal line. Mitochondrial DNA, maternally inherited to both sons and daughters, is used in an analogous way to trace the matrilineal line.
Brain function
Research is currently investigating whether male-pattern neural development is a direct consequence of Y-chromosome-related gene expression or an indirect result of Y-chromosome-related androgenic hormone production.
Microchimerism
In 1974, male chromosomes were discovered in fetal cells in the blood circulation of women.
In 1996, it was found that male fetal progenitor cells could persist postpartum in the maternal blood stream for as long as 27 years.
A 2004 study at the Fred Hutchinson Cancer Research Center, Seattle, investigated the origin of male chromosomes found in the peripheral blood of women who had not had male progeny. A total of 120 subjects (women who had never had sons) were investigated, and it was found that 21% of them had male DNA in their peripheral blood. The subjects were categorised into four groups based on their case histories:
Group A (8%) had had only female progeny.
Patients in Group B (22%) had a history of one or more miscarriages.
Patients in Group C (57%) had had their pregnancies medically terminated.
Group D (10%) had never been pregnant before.
The study noted that 10% of the women had never been pregnant before, raising the question of where the Y chromosomes in their blood could have come from. The study suggests that possible reasons for occurrence of male chromosome microchimerism could be one of the following:
miscarriages,
pregnancies,
vanished male twin,
possibly from sexual intercourse.
A 2012 study at the same institute has detected cells with the Y chromosome in multiple areas of the brains of deceased women.
See also
Genealogical DNA test
Genetic genealogy
Haplodiploid sex-determination system
Human Y chromosome DNA haplogroups
List of Y-STR markers
Muller's ratchet
Single nucleotide polymorphism
Y chromosome Short Tandem Repeat (STR)
Y linkage
Y-chromosomal Aaron
Y-chromosomal Adam
Y-chromosome haplogroups in populations of the world
References
External links
CHM13v2.0 Y chromosome
Ensembl genome browser
Human Genome Project Information—Human Chromosome Y Launchpad
On Topic: Y Chromosome—From the Whitehead Institute for Biomedical Research
Nature—focus on the Y chromosome
National Human Genome Research Institute (NHGRI)—Use of Novel Mechanism Preserves Y chromosome Genes
Ysearch.org – Public Y-DNA database
Y chromosome Consortium (YCC)
NPR's Human Male: Still A Work In Progress
Genetic Genealogy: About the use of mtDNA and Y chromosome analysis in ancestry testing
Andrology
Chromosomes
Chromosome Y
Male
Sex-determination systems
Sexual dimorphism | Y chromosome | Physics,Biology | 6,344 |
934,407 | https://en.wikipedia.org/wiki/Force%20of%20infection | In epidemiology, force of infection (denoted λ) is the rate at which susceptible individuals acquire an infectious disease. Because it takes account of susceptibility it can be used to compare the rate of transmission between different groups of the population for the same infectious disease, or even between different infectious diseases. That is to say, λ is directly proportional to β, the effective transmission rate.
Such a calculation is difficult because not all new infections are reported, and it is often difficult to know how many susceptibles were exposed. However, λ can be calculated for an infectious disease in an endemic state if homogeneous mixing of the population and a rectangular population distribution (such as that generally found in developed countries), rather than a pyramid, is assumed. In this case, λ is given by:

\lambda = \frac{1}{A}

where A is the average age of infection. In other words, A is the average time spent in the susceptible group before becoming infected. The rate of becoming infected (λ) is therefore 1/A (since rate is 1/time). The advantage of this method of calculating λ is that data on the average age of infection is very easily obtainable, even if not all cases of the disease are reported.
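For example, with a hypothetical average age of infection of A = 5 years in such a population:

\lambda = \frac{1}{A} = \frac{1}{5\ \text{years}} = 0.2\ \text{per year}

that is, a susceptible individual acquires the infection at a rate of 0.2 per year, spending on average 5 years in the susceptible group before infection.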
See also
Basic reproduction number
Compartmental models in epidemiology
Epidemic
Mathematical modelling of infectious disease
References
Further reading
Muench, H. (1934) Derivation of rates from summation data by the catalytic curve. Journal of the American Statistical Association, 29: 25–38.
Epidemiology | Force of infection | Environmental_science | 293 |
4,297,164 | https://en.wikipedia.org/wiki/Lindisfarne%20Association | The Lindisfarne Association (1972–2012) was a nonprofit foundation and diverse group of intellectuals organized by cultural historian William Irwin Thompson for the "study and realization of a new planetary culture".
It was inspired by Alfred North Whitehead's idea of an integral philosophy of organism and by Teilhard de Chardin's idea of planetization.
History
Thompson conceived the idea for the Lindisfarne association while touring spiritual sites and experimental communities around the world. The Lindisfarne Association is named for Lindisfarne Priory—a monastery, known for the Lindisfarne Gospels, founded on the British island of Lindisfarne in the 7th century.
Advertising executive Gene Fairly had just left his position at Interpublic Group of Companies and begun studying Zen Buddhism when he read a review of Thompson's At the Edge of History in the New York Times. Fairly visited Thompson at York University in Toronto to discuss forming a group for the promotion of planetary culture. Upon returning to New York he raised $150,000 from such donors as Nancy Wilson Ross and Sydney and Jean Lanier. Support from these donors served as an entrée to the Rockefeller Brothers Fund.
Incorporation and first years in New York
Lindisfarne was incorporated as a non-profit educational foundation in December 1972. It began operations at a refitted summer camp in Southampton, New York on August 31, 1973.
From 1974–1977 Lindisfarne held an annual conference "to explore the new planetary culture" with the following themes:
Planetary Culture and the New Image of Humanity, 1974
Conscious Evolution and the Evolution of Consciousness, 1975
A Light Governance for America: the Cultures and Strategies of Decentralization, 1976
Mind in Nature, 1977
Earth's answer : explorations of planetary culture at the Lindisfarne conferences (1977) reprints some of the lectures given at the 1974 and 1975 conferences.
The Lindisfarne Association was first based in Southampton, New York in 1973 and then in Manhattan at the Church of the Holy Communion and Buildings which was leased to Lindisfarne from 1976–1979.
Move to Crestone and formation of other branches
As Lindisfarne began to run low on funding, it faced the loss of its lease on the Church of the Holy Communion. At a conference at the New Alchemy Institute in Cape Cod, Massachusetts, Petro-Canada CEO and United Nations official Maurice Strong offered to donate land from his ranch in Crestone, Colorado. Thompson chose 77 acres of land near Spanish Creek—self-reportedly because his "Irish Druid Radar" had gone off while driving past—where Lindisfarne began to construct new buildings for its purposes.
Today the Lindisfarne Fellows House, the Lindisfarne Chapel, and the Lindisfarne Mountain Retreat are under the ownership and management of the Crestone Mountain Zen Center. Lindisfarne has functioned variously as a sponsor of classes, conferences, concerts, and public lectures, and as a think tank and retreat, similar to the Esalen Institute in California. Lindisfarne functioned as a not-for-profit foundation until 2009; the Lindisfarne Fellowship continued to hold annual meetings until 2012. It is no longer an active organization.
In addition to its facility in Crestone (the "Lindisfarne Mountain Retreat"), three other branches of the organization were formed:
a headquarters in New York City at the Cathedral of St. John the Divine;
the Lindisfarne Press was established in Stockbridge, Massachusetts; and
the Lindisfarne Fellows House was opened at the San Francisco Zen Center.
Goals and doctrine
The Lindisfarne doctrine is closely related to that of its founder, William Thompson. The Lindisfarne ideology draws on a long list of spiritual and esoteric traditions, including yoga, Tibetan Buddhism, Chinese traditional medicine, Hermeticism, Celtic animism, Gnosticism, cabala, geomancy, ley lines, Pythagoreanism, and ancient mystery religions.
The group placed a special emphasis on sacred geometry, defined by Thompson as "a vision of divine intelligence, the logos, revealing itself in all forms, from the logarithmic spiral of a seashell to the hexagonal patterns of cooling basalt, from the architecture of the molecule to the galaxy." Rachel Fletcher, Robert Lawlor, and Keith Critchlow lectured at Crestone on the application of sacred geometry, Platonism, and Pythagoreanism to architecture. The exemplar of these ideas is the Grail Chapel in Crestone (also known as Lindisfarne Chapel), which is built to reflect numerous basic geometrical relationships.
Lindisfarne's social agenda was exemplified by the "meta-industrial village", a small community focused on subsistence and crafts while yet connected to a world culture. All members of a community might participate in essential tasks such as the harvest. (Thompson has speculated that in the United States, 40% of the population could work at agriculture, and another 40% in social services.) The villages would have a sense of shared purpose in transforming world culture. They would combine "the four classical economies of human history, hunting and gathering, agriculture, industry, and cybernetics", all "recapitulated within a single deme."
The "Meadowcreek Project" in Arkansas, begun in 1979 by David and Wilson Orr, was an effort to actualize a meta-industrial village as envisioned by the Lindisfarne Association. This project received funding from the Ozarks Regional Commission, the Arkansas Energy Department, and the Winthrop Rockefeller Foundation.
The villages would be linked together by an electronic information network (i.e., what today we call the internet). Thompson called for a counter-cultural vanguard "which can formulate an integral vision of culture and maintain the high standards of that culture without compromise to the forces of electronic vulgarization."
According to the Lindisfarne Association website, Lindisfarne's fourfold goals are:
The Planetization of the Esoteric
The realization of the inner harmony of all the great universal religions and the spiritual traditions of the tribal peoples of the world.
The fostering of a new and healthier balance between nature and culture through the research and development of appropriate technologies, architectural settlements and compassionate economies for meta-industrial villages and convivial cities.
The illumination of the spiritual foundations of political governance through scholarship and artistic communications that foster a global ecology of consciousness beyond the present ideological systems of warring industrial nation-states, outraged traditional societies, and ravaged lands and seas.
Thompson has also stated the United States has a unique role to play in the promotion of planetary culture because people from all over the world mingle there. Lindisfarne sought to spread its message widely, through a mailing list and through book publications of the Lindisfarne press.
Journalist Sally Helgesen, after a visit in 1977, criticized Lindisfarne as confused pseudo-intellectuals, citing for example their attempt to build an expensive fish "bioshelter" while overlooking a marsh with fish in it.
Members
Members of the Lindisfarne Fellowship have included, among others:
mathematician Ralph Abraham
ecological philosopher David Abram
economist W. Brian Arthur
Zen Buddhist Zentatsu Richard Baker
anthropologist Gregory Bateson
anthropologist Mary Catherine Bateson
poet Wendell Berry
composer Evan Chambers
geometer and art historian Keith Critchlow
international law specialist Richard Falk
physicist David Ritz Finkelstein
Zen Buddhist Joan Halifax-Roshi
economist Hazel Henderson
poet Jane Hirshfield
Sufi Pir Zia Inayat-Khan
ecologist Wes Jackson
biologist Stuart Kauffman
scientist James Lovelock
physicist and "soft energy" advocate Amory Lovins
biologist Lynn Margulis
dean James Parks Morton
philosopher/author John Michell
author Michael Murphy
dancer/anthropologist Natasha Myers
religious scholar Elaine Pagels
poet Kathleen Raine
writer Dorion Sagan
economist E. F. Schumacher
astronaut Rusty Schweickart
poet Gary Snyder
architect Paolo Soleri
spiritual teacher David Spangler
monk David Steindl-Rast
United Nations undersecretary Maurice Strong
philosopher Evan Thompson
biologist John Todd
architect Sim Van der Ryn
philosopher/biologist Francisco Varela
composer Paul Winter
physicist/contemplative Arthur Zajonc
Current status
The Lindisfarne Association disbanded as a not-for-profit institution in 2009. The Lindisfarne Fellows continued to meet once a year up to 2012 at varying locations as an informal group interested in one another's creative projects.
References
Works cited
Further reading
Lindisfarne Cafe Memoir in Wild River Review, wildriverreview.com:
Pilgrimage to Lindisfarne 1972
LINDISFARNE CAFE - MEMOIR - Building a Dream - PART ONE: Lindisfarne in Crestone, Colorado, 1979-1997
LINDISFARNE CAFE - MEMOIR - Building a Dream/The Shadow Side PART TWO: Lindisfarne in Crestone, Colorado, 1979-1997
LINDISFARNE CAFE - MEMOIR - Building a Dream/The Cathedral PART THREE: Lindisfarne in Crestone, Colorado, 1979-1997
LINDISFARNE CAFE - MEMOIR - Conclusion: The Economic Relevance of Lindisfarne
External links
Lindisfarne Association website at WilliamIrwinThompson.org. Archived.
2007 Symposium Notes from the Wild River Review
Lindisfarne Tapes (lecture recordings): index at Schumaker Center for a New Economics; search results from the Internet Archive
Julia Rubin,"Colorado Site Called 'a Place of Power': Spiritualists, Environmentalists Find Haven in the Baca." Los Angeles Times, 20 August 1989.
New Age communities
New Age organizations
Organizations established in 1972
Sacred geometry
Small press publishing companies
Spiritual organizations
Utopian communities | Lindisfarne Association | Engineering | 1,988 |
20,835,484 | https://en.wikipedia.org/wiki/Liquid%20junction%20potential | Liquid junction potential (abbreviated LJP) occurs when two solutions of electrolytes of different concentrations are in contact with each other. The more concentrated solution will have a tendency to diffuse into the comparatively less concentrated one. The rate of diffusion of each ion will be roughly proportional to its speed in an electric field, i.e., its ion mobility. If the anions diffuse more rapidly than the cations, they will diffuse ahead into the dilute solution, leaving the latter negatively charged and the concentrated solution positively charged. This will result in an electrical double layer of positive and negative charges at the junction of the two solutions. Thus, at the point of junction, a potential difference develops because of this ionic transfer. This potential is called the liquid junction potential, or diffusion potential; it is a non-equilibrium potential. The magnitude of the potential depends on the relative speeds of the ions' movement.
Calculation
The liquid junction potential cannot be measured directly; it must be calculated. The electromotive force (EMF) of a concentration cell with transference includes the liquid junction potential.
The EMF of a concentration cell without transport is:

E_{\text{wot}} = \frac{RT}{F}\ln\frac{a_2}{a_1}

where a_1 and a_2 are the activities of HCl in the two solutions, R is the universal gas constant, T is the temperature and F is the Faraday constant.
The EMF of a concentration cell with transport (including the ion transport number) is:

E_{\text{wt}} = 2t_{-}\,\frac{RT}{F}\ln\frac{a_2}{a_1}

where a_2 and a_1 are the activities of the HCl solutions of the right- and left-hand electrodes, respectively, and t_{-} is the transport number of Cl−.
The liquid junction potential is the difference between these two EMFs, with and without ionic transport:

E_{\text{LJ}} = E_{\text{wt}} - E_{\text{wot}} = (2t_{-} - 1)\,\frac{RT}{F}\ln\frac{a_2}{a_1}
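A minimal numerical sketch (Python) of the formula above; the transport number (t₋ ≈ 0.17 for Cl⁻ in aqueous HCl) and the tenfold activity ratio are illustrative values, not taken from the article:

import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol
T = 298.15      # temperature, K
t_minus = 0.17  # illustrative transport number of Cl- in aqueous HCl

def liquid_junction_potential(a2, a1):
    # E_LJ = (2*t_minus - 1) * (R*T/F) * ln(a2/a1), the difference between
    # the EMFs of the cells with and without transference given above.
    return (2 * t_minus - 1) * (R * T / F) * math.log(a2 / a1)

# a tenfold activity ratio gives roughly -39 mV
print(round(liquid_junction_potential(0.1, 0.01) * 1000, 1), "mV")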
Elimination
The liquid junction potential interferes with the exact measurement of the electromotive force of a chemical cell, so its effect should be minimized as much as possible for accurate measurement. The most common method of eliminating the liquid junction potential is to place a salt bridge consisting of a saturated solution of potassium chloride (KCl) and ammonium nitrate (NH4NO3) with lithium acetate (CH3COOLi) between the two solutions constituting the junction. When such a bridge is used, the ions in the bridge are present in large excess at the junction and they carry almost the whole of the current across the boundary. The efficiency of KCl/NH4NO3 is connected with the fact that in these salts, the transport numbers of the anion and cation are nearly the same.
See also
Concentration cell
Ion transport number
ITIES
Electrochemical kinetics
References
Advanced Physical Chemistry by Gurtu & Snehi
Principles of Physical Chemistry by Puri, Sharma, Pathania
External links
J. Phys. Chem. Elimination of the junction potential with glass electrode
Open source Liquid Junction Potential calculator
Junction Potential Explanation Video
Diffusion
Ions
Physical chemistry
Electrochemistry
Electrochemical potentials | Liquid junction potential | Physics,Chemistry | 555 |
17,306,626 | https://en.wikipedia.org/wiki/Copy%20%28command%29 | In computing, copy is a command in various operating systems. The command copies computer files from one directory to another.
Overview
Generally, the command copies files from one location to another. It is used to make copies of existing files, but can also be used to combine (concatenate) multiple files into target files. The destination defaults to the current working directory. If multiple source files are indicated, the destination must be a directory, or an error will result. The command can copy in text mode or binary mode; in text mode, copy will stop when it reaches the EOF character; in binary mode, the files will be concatenated in their entirety, ignoring EOF characters.
Files may be copied to devices. For example, copy file con outputs file to the screen console. Devices themselves may be copied to a destination file, for example, copy con file takes the text typed into the console and puts it into FILE, stopping when EOF (Ctrl+Z) is typed.
Implementations
The command is available in DEC RT-11, OS/8, RSX-11, Intel ISIS-II, iRMX 86, DEC TOPS-10, TOPS-20, OpenVMS, MetaComCo TRIPOS, Heath Company HDOS, Zilog Z80-RIO, Microware OS-9, DOS, DR FlexOS, IBM/Toshiba 4690 OS, TSL PC-MOS, HP MPE/iX, IBM OS/2, Microsoft Windows, Datalight ROM-DOS, ReactOS, SymbOS and DexOS.
The copy command is supported by Tim Paterson's SCP 86-DOS. Under IBM PC DOS/MS-DOS it has been available since version 1. A more advanced copy command is called xcopy.
The equivalent Unix command is cp, the CP/M command is PIP.
The command is analogous to the Stratus OpenVOS copy_file command.
Example for DOS
copy letter.txt [destination]
Files may be copied to device files: for example, copy letter.txt lpt1 sends the file to the printer on lpt1, and copy letter.txt con outputs it to stdout, like the type command. copy page1.txt+page2.txt book.txt concatenates the files and writes them as book.txt, much like the cat command. The command can also copy files between different disk drives.
There are two command-line switches to modify the behaviour when concatenating files:
Text mode - This copies the text content of the file, stopping when it reaches the EOF character.
copy /a doc1.txt + doc2.txt doc3.txt
copy /a *.txt doc3.txt
Binary mode - This concatenates files in their entirety, ignoring EOF characters.
copy /b image1.jpg + image2.jpg image3.jpg
See also
XCOPY in DOS, OS/2, Windows etc.
cp (Unix)
Peripheral Interchange Program
References
Further reading
External links
copy | Microsoft Docs
Open source COPY implementation that comes with MS-DOS v2.0
Internal DOS commands
MSX-DOS commands
OS/2 commands
ReactOS commands
Windows commands
Microcomputer software
Microsoft free software
Windows administration
File copy utilities | Copy (command) | Technology | 691 |
15,159,573 | https://en.wikipedia.org/wiki/SystemVerilog%20DPI | SystemVerilog DPI (Direct Programming Interface) is an interface which can be used to connect SystemVerilog with foreign languages. These foreign languages can be C, C++, SystemC, as well as others. DPI consists of two layers: a SystemVerilog layer and a foreign language layer. The two layers are isolated from each other.
Explanation
Direct Programming Interface (DPI) allows direct inter-language function calls between SystemVerilog and a foreign language. Functions implemented in the foreign language can be called from SystemVerilog; such functions are called Import functions. Similarly, functions implemented in SystemVerilog can be called from the foreign language (C/C++ or SystemC); such functions are called Export functions. DPI allows the transfer of data between the two domains through function arguments and return values.
Function import and export
1) Function Import: A function implemented in a foreign language can be used in SystemVerilog by importing it. A foreign-language function used in SystemVerilog is called an Imported function.
Properties of imported function and task
An Imported function shall complete its execution instantly and consume zero simulation time. An Imported task can consume time.
An Imported function can have input, output, and inout arguments.
The formal input arguments shall not be modified. If such arguments are changed within a function, the changes shall not be visible outside the function.
An Imported function shall not assume any initial values of formal output arguments. The initial value of output arguments is undetermined and implementation dependent.
An Imported function can access the initial value of a formal inout argument. Changes that the Imported function makes to a formal inout argument shall be visible outside the function.
An Imported function shall not free the memory allocated by SystemVerilog code, nor expect SystemVerilog code to free memory allocated by foreign code (or the foreign compiler).
A call to an Imported task can result in the suspension of the currently executing thread. This occurs when an Imported task calls an Exported task, and the Exported task executes a delay control, event control, or wait statement. Thus it is possible for an Imported task to be simultaneously active in multiple execution threads.
An Imported function or task can be equipped with special properties, called pure or context.
Pure and context tasks and functions
Pure functions
A function whose result depends solely on the values of its input arguments, and which has no side effects, is called a Pure function.
Properties of pure functions
Only non-void functions with no output or inout arguments can be called as Pure functions.
Functions specified as Pure shall have no side effects; their results need to depend solely on the values of their input arguments.
A Pure function call can be safely eliminated if its result is not needed, or if its result for the same values of the input arguments is available for reuse without needing to be recalculated.
A Pure function is assumed not to directly or indirectly do any of the following:
Perform any file operation.
Read or write anything in environment variables, shared memory, sockets, etc.
Access any persistent data, such as global or static variables.
An Imported task can never be declared Pure.
Context tasks and functions
An Imported task or function which calls Exported tasks or functions, or which accesses SystemVerilog data objects other than its actual arguments, is called a Context task or function.
Properties of context tasks and functions
A Context Imported task or function can access (read or write) any SystemVerilog data object by calling PLI/VPI routines or by calling Export tasks or functions. Therefore, a call to a Context task or function is a barrier for SystemVerilog compiler optimizations.
Import declaration
import "DPI-C" function int calc_parity (input int a);
Export declaration
export "DPI-C" my_cfunction = function myfunction;
Calling Unix functions
SystemVerilog code can call Unix functions directly by importing them, with no need for a wrapper.
DPI example
Calling 'C' functions in SystemVerilog
C - code file
#include <stdio.h>
#include <stdlib.h>

/* DPI import: called from the SystemVerilog testbench as add() */
extern int add() {
    int a = 10, b = 20;
    a = a + b;  /* 10 + 20 */
    printf("Addition Successful and Result = %d\n", a);
    return a;   /* becomes the SystemVerilog int return value */
}
SystemVerilog code file
module tb_dpi;
  // import the C function defined above, plus the libc sleep() call
  import "DPI-C" function int add();
  import "DPI-C" function int sleep(input int secs);

  int j;

  initial
  begin
    $display("Entering in SystemVerilog Initial Block");
    #20 j = add();  // wait 20 time units, then call into C
    $display("Value of J = %d", j);
    $display("Sleeping for 3 seconds with Unix function");
    sleep(3);       // blocks the simulator for 3 wall-clock seconds
    $display("Exiting from SystemVerilog Initial Block");
    #5 $finish;
  end
endmodule
References
SystemVerilog DPI Tutorial from Project VeriPage
Application programming interfaces
Hardware verification languages | SystemVerilog DPI | Engineering | 1,019 |
365,092 | https://en.wikipedia.org/wiki/Neodymium%20magnet | A neodymium magnet (also known as NdFeB, NIB or Neo magnet) is a permanent magnet made from an alloy of neodymium, iron, and boron to form the Nd2Fe14B tetragonal crystalline structure. They are the most widely used type of rare-earth magnet.
Developed independently in 1984 by General Motors and Sumitomo Special Metals, neodymium magnets are the strongest type of permanent magnet available commercially. They have replaced other types of magnets in many applications in modern products that require strong permanent magnets, such as electric motors in cordless tools, hard disk drives and magnetic fasteners.
NdFeB magnets can be classified as sintered or bonded, depending on the manufacturing process used.
History
General Motors (GM) and Sumitomo Special Metals independently discovered the Nd2Fe14B compound almost simultaneously in 1984. The research was initially driven by the high raw materials cost of samarium-cobalt permanent magnets (SmCo), which had been developed earlier. GM focused on the development of melt-spun nanocrystalline Nd2Fe14B magnets, while Sumitomo developed full-density sintered Nd2Fe14B magnets.
GM commercialized its inventions of isotropic Neo powder, bonded neo magnets, and the related production processes by founding Magnequench in 1986 (Magnequench has since become part of Neo Materials Technology, Inc., which later merged into Molycorp). The company supplied melt-spun Nd2Fe14B powder to bonded magnet manufacturers. The Sumitomo facility became part of Hitachi, and has manufactured but also licensed other companies to produce sintered Nd2Fe14B magnets. Hitachi has held more than 600 patents covering neodymium magnets.
Chinese manufacturers have become a dominant force in neodymium magnet production, based on their control of much of the world's rare-earth mines.
The United States Department of Energy has identified a need to find substitutes for rare-earth metals in permanent magnet technology and has funded such research. The Advanced Research Projects Agency-Energy has sponsored a Rare Earth Alternatives in Critical Technologies (REACT) program, to develop alternative materials. In 2011, ARPA-E awarded 31.6 million dollars to fund Rare-Earth Substitute projects. Because of its role in permanent magnets used for wind turbines, it has been argued that neodymium will be one of the main objects of geopolitical competition in a world running on renewable energy. This perspective has been criticized for failing to recognize that most wind turbines do not use permanent magnets and for underestimating the power of economic incentives for expanded production.
Properties
Magnetic properties
In its pure form, neodymium has magnetic properties—specifically, it is antiferromagnetic, but only at low temperatures, below about 19 K (−254 °C). However, some compounds of neodymium with transition metals such as iron are ferromagnetic, with Curie temperatures well above room temperature. These are used to make neodymium magnets.
The strength of neodymium magnets is the result of several factors. The most important is that the tetragonal Nd2Fe14B crystal structure has exceptionally high uniaxial magnetocrystalline anisotropy (HA ≈ 7 T – magnetic field strength H in units of A/m versus magnetic moment in A·m²). This means a crystal of the material preferentially magnetizes along a specific crystal axis but is very difficult to magnetize in other directions. Like other magnets, the neodymium magnet alloy is composed of microcrystalline grains which are aligned in a powerful magnetic field during manufacture so their magnetic axes all point in the same direction. The resistance of the crystal lattice to turning its direction of magnetization gives the compound a very high coercivity, or resistance to being demagnetized.
The neodymium atom can have a large magnetic dipole moment because it has 4 unpaired electrons in its electron structure as opposed to (on average) 3 in iron. In a magnet it is the unpaired electrons, aligned so that their spin is in the same direction, which generate the magnetic field. This gives the Nd2Fe14B compound a high saturation magnetization (Js ≈ 1.6 T or 16 kG) and a remanent magnetization of typically 1.3 teslas. Therefore, as the maximum energy density is proportional to Js², this magnetic phase has the potential for storing large amounts of magnetic energy (BHmax ≈ 512 kJ/m³ or 64 MG·Oe).
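These figures are mutually consistent: for a fully dense, ideally aligned magnet the maximum energy product is bounded by Js²/4μ0 (a standard magnetostatic bound, not stated explicitly above), and inserting the saturation magnetization quoted here reproduces the quoted energy product:

(BH)_{\max} \le \frac{J_s^2}{4\mu_0} = \frac{(1.6\ \text{T})^2}{4 \times 4\pi \times 10^{-7}\ \text{T·m/A}} \approx 509\ \text{kJ/m}^3 \approx 64\ \text{MG·Oe}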
This magnetic energy value is about 18 times greater than "ordinary" ferrite magnets by volume and 12 times by mass. This magnetic energy property is higher in NdFeB alloys than in samarium cobalt (SmCo) magnets, which were the first type of rare-earth magnet to be commercialized. In practice, the magnetic properties of neodymium magnets depend on the alloy composition, microstructure, and manufacturing technique employed.
The Nd2Fe14B crystal structure can be described as alternating layers of iron atoms and a neodymium-boron compound. The diamagnetic boron atoms do not contribute directly to the magnetism but improve cohesion by strong covalent bonding. The relatively low rare earth content (12% by volume, 26.7% by mass) and the relative abundance of neodymium and iron compared with samarium and cobalt makes neodymium magnets lower in price than the other major rare-earth magnet family, samarium–cobalt magnets.
Although they have higher remanence and much higher coercivity and energy product, neodymium magnets have lower Curie temperature than many other types of magnets. Special neodymium magnet alloys that include terbium and dysprosium have been developed that have higher Curie temperature, allowing them to tolerate higher temperatures.
Physical and mechanical properties
Corrosion
Sintered Nd2Fe14B tends to be vulnerable to corrosion, especially along grain boundaries of a sintered magnet. This type of corrosion can cause serious deterioration, including crumbling of a magnet into a powder of small magnetic particles, or spalling of a surface layer.
This vulnerability is addressed in many commercial products by adding a protective coating to prevent exposure to the atmosphere. Nickel, nickel-copper-nickel and zinc platings are the standard methods, although plating with other metals, or polymer and lacquer protective coatings, are also in use.
Temperature sensitivity
Neodymium has a negative temperature coefficient, meaning the coercivity along with the magnetic energy density (BHmax) decreases as temperature increases. Neodymium-iron-boron magnets have high coercivity at room temperature, but as the temperature rises above about 100 °C, the coercivity decreases drastically until the Curie temperature (around 320 °C). This fall in coercivity limits the efficiency of the magnet under high-temperature conditions, such as in wind turbines and hybrid vehicle motors. Dysprosium (Dy) or terbium (Tb) is added to curb the fall in performance from temperature changes. This addition makes the magnets more costly to produce.
Grades
Neodymium magnets are graded according to their maximum energy product, which relates to the magnetic flux output per unit volume. Higher values indicate stronger magnets. For sintered NdFeB magnets, there is a widely recognized international classification. Their values range from N28 up to N55, with a theoretical maximum at N64. The first letter N before the values is short for neodymium, meaning sintered NdFeB magnets. Letters following the values indicate intrinsic coercivity and maximum operating temperatures (positively correlated with the Curie temperature), which range from default (up to about 80 °C) to TH (about 230 °C).
Grades of sintered NdFeB magnets:
N27 – N55
N30M – N50M
N30H – N50H
N30SH – N48SH
N28UH – N42UH
N28EH – N40EH
N28TH – N35TH
N33VH/AH
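By convention, the number in a sintered grade is approximately the nominal maximum energy product expressed in MG·Oe (1 MG·Oe ≈ 7.96 kJ/m³), so a grade can be converted directly; for example:

(BH)_{\max}(\text{N42}) \approx 42\ \text{MG·Oe} \times 7.96\ \frac{\text{kJ/m}^3}{\text{MG·Oe}} \approx 334\ \text{kJ/m}^3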
Production
There are two principal neodymium magnet manufacturing methods:
Classical powder metallurgy or sintered magnet process
Sintered Nd-magnets are prepared by the raw materials being melted in a furnace, cast into a mold and cooled to form ingots. The ingots are pulverized and milled; the powder is then sintered into dense blocks. The blocks are then heat-treated, cut to shape, surface treated and magnetized.
Rapid solidification or bonded magnet process
Bonded Nd-magnets are prepared by melt spinning a thin ribbon of the NdFeB alloy. The ribbon contains randomly oriented Nd2Fe14B nano-scale grains. This ribbon is then pulverized into particles, mixed with a polymer, and either compression- or injection-molded into bonded magnets.
Bonded neo Nd-Fe-B powder is bound in a matrix of a thermoplastic polymer to form the magnets. The magnetic alloy material is formed by splat quenching onto a water-cooled drum. This metal ribbon is crushed to a powder and then heat-treated to improve its coercivity. The powder is mixed with a polymer to form a mouldable putty, similar to a glass-filled polymer. This is pelletised for storage and can later be shaped by injection moulding. An external magnetic field is applied during the moulding process, orienting the field of the completed magnet.
In 2015, Nitto Denko of Japan announced their development of a new method of sintering neodymium magnet material. The method exploits an "organic/inorganic hybrid technology" to form a clay-like mixture that can be fashioned into various shapes for sintering. It is said to be possible to control a non-uniform orientation of the magnetic field in the sintered material to locally concentrate the field, for instance to improve the performance of electric motors. Mass production is planned for 2017.
As of 2012, 50,000 tons of neodymium magnets are produced officially each year in China, and 80,000 tons in a "company-by-company" build-up done in 2013. China produces more than 95% of rare earth elements and produces about 76% of the world's total rare-earth magnets, as well as most of the world's neodymium.
Applications
Existing magnet applications
Neodymium magnets have replaced alnico and ferrite magnets in many of the myriad applications in modern technology where strong permanent magnets are required, because their greater strength allows the use of smaller, lighter magnets for a given application. Some examples are:
Head actuators for computer hard disks
Mechanical e-cigarette firing switches
Locks for doors
Loudspeakers and headphones
Mobile phones
Magnetic bearings and couplings
Benchtop NMR spectrometers
Electric motors
Cordless tools
Servomotors
Lifting and compressor motors
Synchronous motors
Spindle and stepper motors
Electrical power steering
Drive motors for hybrid and electric vehicles. The electric motors of each Toyota Prius require about 1 kilogram (2.2 lb) of neodymium.
Actuators
Electric generators for wind turbines (only those with permanent magnet excitation)
Retail media case decouplers
In process industries, powerful neodymium magnets are used to catch foreign bodies and protect product and processes
Identifying precious metals in various objects (cutlery, coins, jewelry etc.)
New applications
The greater strength of neodymium magnets has inspired new applications in areas where magnets were not used before, such as magnetic jewelry clasps, keeping up foil insulation, children's magnetic building sets (and other neodymium magnet toys) and as part of the closing mechanism of modern sport parachute equipment. They are the main metal in the formerly popular desk-toy magnets, "Buckyballs" and "Buckycubes", though some U.S. retailers have chosen not to sell them because of child-safety concerns, and they have been banned in Canada for the same reason. While a similar ban has been lifted in the United States in 2016, the minimum age requirement advised by the CPSC is now 14, and there are now new warning label requirements.
The strength and magnetic field homogeneity of neodymium magnets have also opened new applications in the medical field with the introduction of open magnetic resonance imaging (MRI) scanners, used to image the body in radiology departments as an alternative to superconducting magnets that use a coil of superconducting wire to produce the magnetic field.
Neodymium magnets are used in a surgically placed anti-reflux system, a band of magnets implanted around the lower esophageal sphincter to treat gastroesophageal reflux disease (GERD). They have also been implanted in fingertips to provide sensory perception of magnetic fields, though this is an experimental procedure popular only among biohackers and grinders.
Neodymium magnets are used in magnetic cranes, lifting devices that lift objects by magnetic force. These cranes lift ferrous materials such as steel plates, pipes, and scrap metal using the persistent magnetic field of the permanent magnets, without requiring a continuous power supply. Magnetic cranes are used in scrap yards, shipyards, warehouses, and manufacturing plants.
Hazards
The greater forces exerted by rare-earth magnets create hazards that may not occur with other types of magnet. Neodymium magnets larger than a few cubic centimeters are strong enough to cause injuries to body parts pinched between two magnets, or between a magnet and a ferrous metal surface, even breaking bones.
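For orientation, the pull force of a magnet clamped flat against a ferrous surface can be estimated from the textbook gap-energy formula; the field strength and contact area below are illustrative assumptions, not figures from this article:

```latex
F \;=\; \frac{B^{2} A}{2\mu_{0}}
  \;\approx\; \frac{(1.2\ \mathrm{T})^{2} \times 10^{-4}\ \mathrm{m^{2}}}
                   {2 \times 4\pi \times 10^{-7}\ \mathrm{H\,m^{-1}}}
  \;\approx\; 57\ \mathrm{N}
```

A pull of roughly 57 N (about 5.8 kgf) from a contact face of only 1 cm² illustrates how pinch injuries are possible with quite small magnets.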
Magnets that get too near each other can strike each other with enough force to chip and shatter the brittle material, and the flying chips can cause various injuries, especially eye injuries. There have even been cases where young children who swallowed several magnets had sections of the digestive tract pinched between two magnets, causing injury or death. Strong magnets also pose a serious health risk when working with machines that have magnets in or attached to them.
The stronger magnetic fields can be hazardous to mechanical and electronic devices, as they can erase magnetic media such as floppy disks and credit cards, and magnetize watches and the shadow masks of CRT-type monitors at a greater distance than other types of magnet. In some cases, chipped magnets can act as a fire hazard as they come together, sending sparks flying as if they were a lighter flint, because some neodymium magnets contain ferrocerium.
See also
References
Further reading
MMPA 0100-00, Standard Specifications for Permanent Magnet Materials
K.H.J. Buschow (1998) Permanent-Magnet Materials and their Applications, Trans Tech Publications Ltd., Switzerland.
The Dependence of Magnetic Properties and Hot Workability of Rare Earth-Iron-Boride Magnets Upon Composition.
"2023 Honda Prize Honors Neodymium Magnet Inventors," Advanced Materials & Processes, Vol. 181, No. 8, Nov/Dec 2023, pp. 29–31.
External links
Geeky Rare-Earth Magnets Repel Sharks
Concern as China clamps down on rare earth exports
What Are Neodymium Magnets?
Ferromagnetic materials
Loudspeaker technology
Magnetic alloys
Magnetic levitation
Rare earth alloys
Types of magnets
Borides
Neodymium compounds
Ferrous alloys
Japanese inventions | Neodymium magnet | Physics,Chemistry,Materials_science,Engineering | 3,183 |
17,170,770 | https://en.wikipedia.org/wiki/Shankar%20Sastry | S. Shankar Sastry is the founding chancellor of the Plaksha University, Mohali and a former Dean of Engineering at University of California, Berkeley.
From 1996 to 1999, he was the director of the Electronics Research Laboratory at Berkeley. From 1999 to early 2001, he was on leave from Berkeley as director of the Information Technology Office at the Defense Advanced Research Projects Agency (DARPA). He served as chairman of the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, from January 2001 through June 2004. From 2004 to 2007 he was the director of CITRIS (Center for Information Technology Research in the Interest of Society), an interdisciplinary center spanning UC Berkeley, Davis, Merced and Santa Cruz.
He is currently a professor of Electrical Engineering and Computer Science, a professor of Bioengineering, and faculty director of the Blum Center for Developing Economies at UC Berkeley.
Biography
Sastry obtained his bachelor's degree from the Indian Institute of Technology Bombay (1977) and his master's and PhD degrees from the University of California, Berkeley (1979, 1980, 1981). His PhD advisor was Professor Charles Desoer. He was on the faculty of MIT as an assistant professor from 1980 to 1982 and at Harvard University as a chaired Gordon McKay professor in 1994. He is married to his former PhD student Claire J. Tomlin, who holds a joint appointment as an associate professor in the Department of Aeronautics and Astronautics and the Department of Electrical Engineering at Stanford University, where she is director of the Hybrid Systems Laboratory, and as an associate professor in the Department of Electrical Engineering and Computer Science at the University of California, Berkeley.
His areas of personal research are resilient network control systems, cybersecurity, autonomous and unmanned systems (especially aerial vehicles), computer vision, nonlinear and adaptive control, control of hybrid and embedded systems, and software. Most recently he has been concerned with critical infrastructure protection, in the context of establishing a ten-year NSF Science and Technology Center, TRUST (Team for Research in Ubiquitous Secure Technologies).
He has coauthored over 600 technical papers and 10 books, including Adaptive Control: Stability, Convergence and Robustness (with M. Bodson, Prentice Hall, 1989), A Mathematical Introduction to Robotic Manipulation (with R. Murray and Z. Li, CRC Press, 1994), Nonlinear Systems: Analysis, Stability and Control (Springer-Verlag, 1999), An Invitation to 3D Vision: From Images to Models (Springer Verlag, 2003) (with Yi Ma, Stefano Soatto, and Jana Košecká), and Generalized Principal Component Analysis (Springer, 2016) (with René Vidal and Yi Ma). Sastry served as associate editor for numerous publications, including: IEEE Transactions on Automatic Control; IEEE Control Magazine; IEEE Transactions on Circuits and Systems; the Journal of Mathematical Systems, Estimation and Control; IMA Journal of Control and Information; the International Journal of Adaptive Control and Signal Processing; Journal of Biomimetic Systems and Materials. He is currently an associate editor of the IEEE Proceedings.
Sastry was elected a member of the National Academy of Engineering in 2001 and the American Academy of Arts and Sciences (AAAS) in 2004. He also received the President of India Gold Medal in 1977, the IBM Faculty Development Award for 1983–1985, the NSF Presidential Young Investigator Award in 1985, the Eckman Award of the American Automatic Control Council in 1990, the John R. Ragazzini Award for distinguished accomplishments in teaching in 2005, an M.A. (honoris causa) from Harvard in 1994, Fellowship of the IEEE in 1994, the Distinguished Alumnus Award of the Indian Institute of Technology in 1999, the David Marr Prize for the best paper at the International Conference on Computer Vision in 1999, an honorary doctorate from the KTH Royal Institute of Technology in 2007, and the C.L. Tien Award for Academic Leadership in 2010. He was a member of the Air Force Scientific Advisory Board from 2002 to 2005 and of the Defense Science Board in 2008, among other national boards. He is currently on the corporate board of HCL Technologies (India), co-directs the C3.ai Digital Transformation Institute, and serves as chancellor of Plaksha University, Mohali. He is on the scientific advisory boards of Interwest LLC, GE Software, and Eriksholm. He has been ranked among the top 30 electrical-engineering researchers worldwide.
References
External links
Home Page
Control theorists
Indian roboticists
20th-century Indian mathematicians
Living people
IIT Bombay alumni
Indian emigrants to the United States
UC Berkeley College of Environmental Design alumni
UC Berkeley College of Engineering faculty
Massachusetts Institute of Technology faculty
Year of birth missing (living people)
American people of Indian descent | Shankar Sastry | Engineering | 963 |
1,860,451 | https://en.wikipedia.org/wiki/Poppet | In folk magic and witchcraft, a poppet (also known as poppit, moppet, mommet or pippy) is a doll made to represent a person, for casting spells on them, or aiding that person through magic. They are occasionally found lodged in chimneys. These dolls may be fashioned from materials such as carved root, grain, corn shafts, fruit, paper, wax, a potato, clay, branches, or cloth stuffed with herbs, with the intent that any actions performed upon the effigy will be transferred to the subject based on sympathetic magic. Poppets are also used as kitchen witch figures.
Etymology
The word poppet is an older spelling of puppet, from Middle English popet, meaning a small child or a doll. In British English it continues to hold this meaning. Poppet is also a chiefly British term of endearment or diminutive referring to a young child, much like the words "dear" or "sweetie."
Purpose
Poppets are commonly believed, in folk magic, to serve as spirit bridges. A poppet can be designed for benevolent purposes, such as the wishing of good health or opportunities on the recipient, or for more malicious intents, such as bringing harm onto the person they represent.
Poppets throughout the world
Throughout the world, each culture has its own version of the poppet. The materials used to make poppet dolls vary across cultures, as do, most importantly, the motives or intentions behind them.
Types of poppets
German kitchen witch
The origin of the German kitchen witch poppet is debated. One suggested location for the kitchen poppet's origin is Scandinavia, although the first mentions of the poppet in writing come from England.
The kitchen witch poppet is intended to bring good energy into the home kitchen and prevent kitchen disasters, such as meals coming out burnt or undercooked. For these intentions to be upheld, a prayer or ritual is believed to be needed, reflecting the idea that the kitchen is one of the most important places in the household, as the source of remedies and of the basic nutrition that maintains the body.
Love poppet
A love poppet may be used for the healing of oneself, for displaying affection to loved ones, or to foster relationships between couples. Objects put inside the poppet may include rose quartz or petals of the recipient's favourite flowers. Small belongings of the intended person, placed within the poppet, can also serve to connect it more closely to its intent.
Prosperity poppet
The prosperity poppet could be used in hoping for a good outcome in one's life through school, work, physical status, or financial status.
Healing poppet
Healing poppets may be intended to grant good health mentally, physically and emotionally. In this poppet it is common to include objects traditionally associated with healing, such as rose quartz, rose petals, and sage to cleanse the body and mind.
Protection poppet
These poppets are designed for spiritual protection of a person's family and loved ones, and the removal of supposed curses or bad luck. These poppets may be designed to physically resemble their intended person. They may also include items such as hematite and amethyst, in addition to basil, patchouli, and coffee.
See also
Corn dolly
Corn husk doll
Hoko doll
Motanka doll
Effigy
Voodoo doll
Kachina doll
Hopi Kachina figure
Witch bottle
Mexican rag doll
Folk religion
Ushi no toki mairi
References
Talismans
Amulets
Anthropology
European witchcraft
Traditional dolls
English folklore
Magic items
Cunning folk | Poppet | Physics | 794 |
2,212,142 | https://en.wikipedia.org/wiki/R.%20J.%20Berry | Robert James "Sam" Berry (26 October 1934 – 29 March 2018) was a British geneticist, naturalist and Christian theorist. He was professor of genetics at University College London between 1974 and 2000. Before that he was a lecturer in genetics at The Royal Free Hospital School of Medicine in London. He was president from 1983 to 1986 of the Linnean Society, the British Ecological Society and the European Ecological Federation. He was one of the founding trustees on the creation of National Museums Liverpool in 1986. As a Christian, Berry spoke out in favour of theistic evolution, served as a lay member of the Church of England's General Synod and as president of Christians in Science. He was a member of the Board of Governors of Monkton Combe School from 1979 to 1991. He gave the 1997–98 Glasgow Gifford Lectures entitled Gods, Genes, Greens and Everything. His father, A. J. Berry, died in 1947.
Early life and education
He was educated at Kirkham Grammar School and Shrewsbury School. One of his first published works in 1961 was in the "Teach yourself books" series Genetics. The paperback version was released in 1972.
Bibliography
Biological works
Teach yourself books Genetics. (1965)
Inheritance and Natural History. New Naturalist series no. 61 (1977)
The Natural History of Shetland. New Naturalist series no. 64 (1980)
The Natural History of Orkney. New Naturalist series no. 70 (1985)
Genes in Ecology (ed. R. J. Berry, T. J. Crawford, G. M. Hewitt, N. R. Webb) (1992)
Islands. New Naturalist series no. 109 (2009)
Religious works
Adam and the Ape: a Christian Approach to the Theory of Evolution / [by] R. J. Berry · London : Church Pastoral-Aid Society, 1975 · 80 p.
God and the Biologist: Personal Exploration of Science and Faith (Apollos 1996)
Science, Life and Christian Belief: A Survey of Contemporary Issues (IVP 1998) (preface by Berry)
The Care of Creation: Focusing Concern and Action (IVP 2000) (edited by Berry)
God's Book of Works: The Nature and Theology of Nature (T & T Clark International 2003) (Gifford Lectures 1997–98)
"Did Darwin Kill God?" in God for the 21st Century, Russell Stannard ed., Templeton Foundation Press, 2000,
God and Evolution: Creation, Evolution, and the Bible (Regent College Publishing 2001)
Creation and Evolution, Not Creation or Evolution (2007, Faraday Institute Paper no. 12)
References
External links
Gifford Lecture Book summary
1934 births
2018 deaths
English biologists
English Anglicans
Fellows of the Linnean Society of London
English geneticists
Members of the International Society for Science and Religion
Presidents of the Linnean Society of London
Fellows of the Royal Society of Edinburgh
Theistic evolutionists
Academics of University College London
New Naturalist writers
Governors of Monkton Combe School
Writers about religion and science
Presidents of the British Ecological Society | R. J. Berry | Biology | 602 |
45,566,811 | https://en.wikipedia.org/wiki/Chemistry%20Europe | Chemistry Europe (formerly ChemPubSoc Europe) is an organization of 16 chemical societies from 15 European countries, representing over 75,000 chemists. It publishes a family of academic chemistry journals, covering a broad range of disciplines.
Chemistry Europe was founded on the initiative of the German Chemical Society in 1995. The first journal co-owned by Chemistry Europe was Chemistry: A European Journal (launched in 1995). In 1998, the European Journal of Inorganic Chemistry and European Journal of Organic Chemistry were created with the participation of six European chemical societies. Over the years, more societies merged their journals, bringing the total number of societies involved to 16 from 15 different countries (as of 2020).
The Chemistry Europe Fellows Program was established in 2015 (as the ChemPubSoc Europe Fellows Program) and is the highest award given by Chemistry Europe. The Fellowship is awarded based on the recipients' support as authors, advisors, guest editors, referees as well as services to their national chemical societies. New Fellows are announced every two years in the run-up to the biannual EuChemS Congress.
The Chemistry Europe Award recognizes outstanding contributions to chemistry. The inaugural Chemistry Europe Award was awarded in 2023 to Bert Weckhuysen, Professor at Utrecht University, The Netherlands, “for outstanding achievements and leadership in the field of sustainable chemistry and catalysis research”.
Participating societies
The 16 participating European chemical societies are:
Gesellschaft Österreichischer Chemiker (GÖCH), Austria
Société Royale de Chimie (SRC), Belgium
Koninklijke Vlaamse Chemische Vereniging (KVCV), Belgium
Česká společnost chemická (ČSCH), Czech Republic
Société Chimique de France (SCF), France
Gesellschaft Deutscher Chemiker (GDCh), Germany
Association of Greek Chemists (EEX), Greece
Magyar Kémikusok Egyesülete (MKE), Hungary
Società Chimica Italiana (SCI), Italy
Koninklijke Nederlandse Chemische Vereniging (KNCV), The Netherlands
Polskie Towarzystwo Chemiczne (PTChem), Poland
Sociedade Portuguesa de Química (SPQ), Portugal
Slovenská Chemická Spoloćnosť (SCHS), Slovakia
Real Sociedad Española de Química (RSEQ), Spain
Svenska Kemistsamfundet (SK), Sweden
Schweizerische Chemische Gesellschaft (SCG), Switzerland
Journals
The journals are published by Wiley-VCH and include the titles: Chemistry: A European Journal, European Journal of Organic Chemistry, European Journal of Inorganic Chemistry, Chemistry—Methods, Batteries & Supercaps, ChemBioChem, ChemCatChem, ChemElectroChem, ChemMedChem, ChemPhotoChem, ChemPhysChem, ChemPlusChem, ChemSusChem, ChemSystemsChem, ChemistrySelect, ChemistryOpen, as well as the ChemistryViews online science magazine.
References
External links
Chemistry societies
1995 establishments in Europe
Organizations established in 1995
Pan-European scientific societies | Chemistry Europe | Chemistry | 676 |
2,404,267 | https://en.wikipedia.org/wiki/Ramus%20Pomifer | Ramus Pomifer (Latin for apple branch) was a constellation between Hercules and Lyra.
It was depicted in the form of a branch held in Hercules' left hand. The also-obsolete constellation of Cerberus, made up of much the same stars, became combined with it in later depictions under the name "Cerberus et Ramus".
References
Former constellations | Ramus Pomifer | Astronomy | 80 |
31,262,483 | https://en.wikipedia.org/wiki/Induction%20regulator | An induction regulator is an alternating current electrical machine, somewhat similar to an induction motor, which can provide a continuously variable output voltage. The induction regulator was an early device used to control the voltage of electric networks. Since the 1930s it has been replaced in distribution network applications by the tap transformer. Its usage is now mostly confined to electrical laboratories, electrochemical processes and arc welding. With minor variations, its setup can be used as a phase-shifting power transformer.
Construction
A single-phase induction regulator has a (primary) excitation winding, connected to the supply voltage, wound on a magnetic core which can be rotated. The stationary secondary winding is connected in series with the circuit to be regulated. As the excitation winding is rotated through 180 degrees, the voltage induced in the series winding changes from adding to the supply voltage to opposing it. By selection of the ratio of the number of turns on the excitation and series windings, the range of regulation can be set to, say, plus or minus 20% of the supply voltage.
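As a minimal numerical sketch of this behaviour, the following assumes the series-winding EMF varies as the cosine of the rotor angle; the supply voltage and the ±20% range are illustrative assumptions, not values from the source:

```python
import numpy as np

# Minimal sketch: output voltage of a single-phase induction regulator.
# The series-winding EMF is taken to vary as cos(rotor angle) between full
# boost and full buck, with a +/-20% range set by the turns ratio (assumed).
V_SUPPLY = 230.0   # supply voltage, volts (assumed)
K_RANGE = 0.20     # regulation range set by the turns ratio (assumed)

def output_voltage(rotor_angle_deg: float) -> float:
    """Output voltage for a rotor position between 0 and 180 degrees."""
    theta = np.radians(rotor_angle_deg)
    return V_SUPPLY * (1.0 + K_RANGE * np.cos(theta))

for angle in (0, 45, 90, 135, 180):
    print(f"{angle:3d} deg -> {output_voltage(angle):6.1f} V")
# 0 deg gives 276 V (full boost), 90 deg gives 230 V, 180 deg gives 184 V
# (full buck), with every value in between continuously reachable.
```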
The three-phase induction regulator can be regarded as a wound induction motor. The rotor is not allowed to turn freely, and it can be mechanically shifted by means of a worm gear. The rest of the regulator's construction follows that of a wound-rotor induction motor, with a slotted three-phase stator and a wound three-phase rotor. Since the rotor is not allowed to turn more than 180 degrees mechanically, the rotor leads can be connected by flexible cables to the exterior circuit. If the stator winding is a two-pole winding, moving the rotor through 180 degrees physically will change the phase of the induced voltage by 180 degrees. A four-pole winding only requires 90 degrees of physical movement to produce 180 degrees of phase shift.
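The pole-count relationship described above can be stated compactly using the standard machine-theory relation between mechanical and electrical angle for a machine with p poles:

```latex
\alpha_{\mathrm{elec}} \;=\; \frac{p}{2}\,\alpha_{\mathrm{mech}}
```

so a two-pole winding (p = 2) needs 180 degrees of physical movement for a 180-degree phase shift, while a four-pole winding (p = 4) needs only 90 degrees, as noted above.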
Since a torque is produced by the interaction of the magnetic fields, the movable element is held by a mechanism such as a worm gear. The rotor may be rotated by a hand wheel attached to the machine, or an electric motor can be used to remotely or automatically adjust the rotor position.
Depending on the application, the ratio of number of turns on the rotor and the stator can vary.
Working
Since the single-phase regulator only changes the flux linking the excitation and series windings, it does not introduce a phase shift between the supply voltage and the load voltage. However, the varying position of the movable element in the three-phase regulator does create a phase shift. This may be a concern if the load circuit can be connected to more than one supply, since circulating currents will flow owing to the phase shift.
If the rotor terminals are connected to a three-phase electric power network, a rotating magnetic field will be driven into the magnetic core.
The resulting flux will produce an EMF in the windings of the stator, with the particularity that if rotor and stator are physically shifted by an angle α, then the electrical phase shift between the two windings is also α. Considering just the fundamental harmonic, and ignoring the shifting, the induced voltages obey

$$\frac{E_{\text{stator}}}{E_{\text{rotor}}} = \frac{\xi_{\text{stator}}\,N_{\text{stator}}}{\xi_{\text{rotor}}\,N_{\text{rotor}}}$$

where ξ is the winding factor, a constant related to the construction of the windings, and N is the number of turns of the corresponding winding.
If the stator winding is connected to the primary phase, the total voltage seen from the neutral (N) will be the sum of the voltages across both windings, rotor and stator. In phasor terms, the two phasors add in series with an angular shift of α between them. Since α can be freely chosen in [0, π], the phasors can be made to add or subtract, so all values in between are attainable. The primary and secondary are not isolated. Also, the ratio of the magnitudes of the rotor and stator voltages is constant; the resultant voltage varies owing to the angular shift of the series-winding induced voltage.
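A short sketch of the phasor addition just described; the two winding voltage magnitudes are illustrative assumptions:

```python
import numpy as np

# Sketch of the phasor sum described above: the load voltage is the stator
# (primary) phasor plus a rotor-induced phasor shifted by the mechanical
# angle alpha. Both magnitudes are illustrative assumptions.
V_STATOR = 230.0   # volts, primary winding phasor (assumed)
V_ROTOR = 46.0     # volts, series winding phasor, assumed 20% of primary

for alpha_deg in (0, 60, 90, 120, 180):
    alpha = np.radians(alpha_deg)
    v_out = V_STATOR + V_ROTOR * np.exp(1j * alpha)   # phasor addition
    print(f"alpha = {alpha_deg:3d} deg -> |V| = {abs(v_out):6.1f} V, "
          f"phase = {np.degrees(np.angle(v_out)):5.1f} deg")
# |V| sweeps continuously from 276 V (alpha = 0) down to 184 V (alpha = 180),
# and a nonzero phase shift appears at intermediate angles, as the text notes.
```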
Advantages
The output voltage can be continuously regulated within the nominal range. This is a clear benefit over tap transformers, where the output voltage takes discrete values. The voltage can also be easily regulated under working conditions.
Drawbacks
In comparison to tap transformers, induction regulators are expensive, have lower efficiency and high open-circuit currents (due to the air gap), and are limited in voltage to less than 20 kV.
Applications
An induction regulator for power networks is usually designed for a nominal voltage of 14 kV and a regulation range of ±10–15%, but this use has declined. Nowadays, its main uses are in electrical laboratories and arc welding.
See also
Variable frequency transformer
Bibliography
Electric transformers
Energy conversion
Electric motors | Induction regulator | Technology,Engineering | 922 |
2,992,378 | https://en.wikipedia.org/wiki/Jena%20Observatory | Astrophysikalisches Institut und Universitäts-Sternwarte Jena (AIU Jena, Astrophysical Institute and University Observatory Jena, or simply Jena Observatory) is an astronomical observatory owned and operated by Friedrich Schiller University of Jena. It has two main locations: in Jena, Germany, and in the neighbouring village of Großschwabhausen.
History
The first observatory was built in 1813, and replaced by a bigger one in 1889. It was funded by local regent Karl August von Sachsen-Weimar-Eisenach and planned by Johann Wolfgang Goethe. Its most famous director in the later decades was Ernst Abbe.
The new observatory in Großschwabhausen was built in 1962, in order to avoid the light pollution from the city of Jena. The old main observatory in the city centre is the home of "Volkssternwarte Urania", a society of hobbyist astronomers. They offer public access and courses for children and adults, and host events like watching comets or lunar eclipses.
WASP-3c & TTV
Transit timing variation (TTV), a variant of the transit method, was used by Rozhen Observatory, Jena Observatory, and the Toruń Centre for Astronomy to discover the exoplanet WASP-3c.
See also
List of astronomical observatories
References
External links
Jena Observatory
Universitäts-Sternwarte Jena
Astronomical observatories in Germany
Buildings and structures in Jena
Glass engineering and science | Jena Observatory | Materials_science,Engineering | 303 |
26,266,872 | https://en.wikipedia.org/wiki/Corium%20%28nuclear%20reactor%29 | Corium, also called fuel-containing material (FCM) or lava-like fuel-containing material (LFCM), is a material that is created in a nuclear reactor core during a nuclear meltdown accident. Resembling lava in consistency, it consists of a mixture of nuclear fuel, fission products, control rods, structural materials from the affected parts of the reactor, products of their chemical reaction with air, water, steam, and in the event that the reactor vessel is breached, molten concrete from the floor of the reactor room.
Composition and formation
The heat causing the melting of a reactor may originate from the nuclear chain reaction, but more commonly the decay heat of the fission products contained in the fuel rods is the primary heat source. Heat production from radioactive decay drops quickly at first, since the short half-life isotopes provide most of the heat and radioactivity; the overall decay-heat curve is a sum of the exponential decay curves of numerous isotopes with different half-lives. A significant additional heat source can be the chemical reaction of hot metals with oxygen or steam.
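A minimal sketch of the summation just described; the isotopes and initial powers below are illustrative assumptions, not a real reactor inventory:

```python
import numpy as np  # imported for consistency with other sketches; optional here

# Total decay heat as a sum of exponential decay curves with different
# half-lives. The three-component inventory is an illustrative assumption.
isotopes = {            # (half-life in seconds, initial power in MW) - assumed
    "I-131":  (8.02 * 86400, 2.0),
    "Cs-134": (2.065 * 365.25 * 86400, 0.5),
    "short-lived mix": (3600.0, 20.0),
}

def decay_heat(t_seconds: float) -> float:
    """Total decay heat (MW) at time t after shutdown."""
    return sum(p0 * 0.5 ** (t_seconds / half_life)
               for half_life, p0 in isotopes.values())

for t in (0, 3600, 86400, 30 * 86400):
    print(f"t = {t:>8d} s -> {decay_heat(t):7.3f} MW")
# The short-lived component dominates at first and vanishes within hours,
# so the summed curve drops quickly and then flattens, as the text describes.
```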
The temperature of corium depends on its internal heat-generation dynamics: the quantities and types of isotopes producing decay heat, dilution by other molten materials, heat losses modified by the physical configuration of the corium, and heat losses to the environment. An accumulated mass of corium will lose less heat than a thinly spread layer. Corium of sufficient temperature can melt concrete. A solidified mass of corium can remelt if its heat losses drop, for example by being covered with heat-insulating debris, or if the water cooling the corium evaporates.
A crust can form on the corium mass, acting as a thermal insulator and hindering heat losses. Heat distribution throughout the corium mass is influenced by the differing thermal conductivities of the molten oxides and metals. Convection in the liquid phase significantly increases heat transfer.
The molten reactor core releases volatile elements and compounds. These may be in the gas phase, such as molecular iodine or noble gases, or may condense into aerosol particles after leaving the high-temperature region. A high proportion of the aerosol particles originates from the reactor control rod materials. Gaseous compounds may be adsorbed on the surface of the aerosol particles.
Composition and reactions
The composition of corium depends on the design type of the reactor, and specifically on the materials used in the control rods, coolant and reactor vessel structural materials. There are differences between pressurized water reactor (PWR) and boiling water reactor (BWR) coriums.
In contact with water, hot boron carbide from BWR reactor control rods forms first boron oxide and methane, then boric acid. Boron may also continue to contribute to reactions via the boric acid used in emergency coolant.
Zirconium from zircaloy, together with other metals, reacts with water and produces zirconium dioxide and hydrogen. The production of hydrogen is a major danger in reactor accidents. The balance between oxidizing and reducing chemical environments and the proportion of water and hydrogen influences the formation of chemical compounds. Variations in the volatility of core materials influence the ratio of released elements to unreleased elements. For instance, in an inert atmosphere, the silver-indium-cadmium alloy of control rods releases almost only cadmium. In the presence of water, the indium forms volatile indium(I) oxide and indium(I) hydroxide, which can evaporate and form an aerosol of indium(III) oxide. The indium oxidation is inhibited by a hydrogen-rich atmosphere, resulting in lower indium releases. Caesium and iodine from the fission products can react to produce volatile caesium iodide, which condenses as an aerosol.
During a meltdown, the temperature of the fuel rods increases and they can deform, in the case of zircaloy cladding, above . If the reactor pressure is low, the pressure inside the fuel rods ruptures the control rod cladding. High-pressure conditions push the cladding onto the fuel pellets, promoting formation of uranium dioxide–zirconium eutectic with a melting point of . An exothermic reaction occurs between steam and zirconium, which may produce enough heat to be self-sustaining without the contribution of decay heat from radioactivity. Hydrogen is released in an amount of about of hydrogen (at normal temperature/pressure) per kilogram of zircaloy oxidized. Hydrogen embrittlement may also occur in the reactor materials and volatile fission products can be released from damaged fuel rods. Between , the silver-indium-cadmium alloy of control rods melts, together with the evaporation of control rod cladding. At , the cladding oxides melt and begin to flow. At the uranium oxide fuel rods melt and the reactor core structure and geometry collapses. This can occur at lower temperatures if a eutectic uranium oxide-zirconium composition is formed. At that point, the corium is virtually free of volatile constituents that are not chemically bound, resulting in correspondingly lower heat production (by about 25%) as the volatile isotopes relocate.
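For orientation, the balanced zirconium–steam reaction and a back-of-envelope hydrogen yield per kilogram of zirconium (a standard stoichiometric estimate, not a figure quoted from this article) work out as:

```latex
\mathrm{Zr} + 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{ZrO_2} + 2\,\mathrm{H_2},
\qquad
n_{\mathrm{H_2}} \;=\; 2 \times \frac{1000\ \mathrm{g}}{91.2\ \mathrm{g\,mol^{-1}}}
\;\approx\; 22\ \mathrm{mol}
\;\approx\; 0.5\ \mathrm{m^3}\ \text{(at normal temperature and pressure)}
```

The reaction is strongly exothermic, releasing on the order of 6 MJ per kilogram of zirconium oxidized, which is why it can become self-sustaining.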
The temperature of corium can be as high as in the first hours after the meltdown, potentially reaching over . A large amount of heat can be released by reaction of metals (particularly zirconium) in corium with water. Flooding of the corium mass with water, or the drop of molten corium mass into a water pool, may result in a temperature spike and production of large amounts of hydrogen, which can result in a pressure spike in the containment vessel. The steam explosion resulting from such sudden corium-water contact can disperse the materials and form projectiles that may damage the containment vessel by impact. Subsequent pressure spikes can be caused by combustion of the released hydrogen. Detonation risks can be reduced by the use of catalytic hydrogen recombiners.
Brief re-criticality (resumption of neutron-induced fission) in parts of the corium is a theoretical but remote possibility with commercial reactor fuel, owing to the low enrichment and the loss of moderator. This condition can be detected by the presence of short-lived fission products long after the meltdown, in amounts too high to remain from the pre-meltdown reactor or to arise from spontaneous fission of reactor-created actinides.
Reactor vessel breaching
In the absence of adequate cooling, the materials inside of the reactor vessel overheat and deform as they undergo thermal expansion, and the reactor structure fails once the temperature reaches the melting point of its structural materials. The corium melt then accumulates at the bottom of the reactor vessel. In the case of adequate cooling of the corium, it can solidify and the damage is limited to the reactor itself. Corium may also melt through the reactor vessel and flow out or be ejected as a molten stream by the pressure inside the reactor vessel. The reactor vessel failure may be caused by heating of its vessel bottom by the corium, resulting first in creep failure and then in breach of the vessel. Cooling water from above the corium layer, in sufficient quantity, may obtain a thermal equilibrium below the metal creep temperature, without reactor vessel failure.
If the vessel is sufficiently cooled, a crust between the corium melt and the reactor wall can form. The layer of molten steel at the top of the oxide may create a zone of increased heat transfer to the reactor wall; this condition, known as "heat knife", increases the probability of formation of a localized weakening of the side of the reactor vessel and subsequent corium leak.
In the case of high pressure inside the reactor vessel, breaching of its bottom may result in high-pressure blowout of the corium mass. In the first phase, only the melt itself is ejected; later a depression may form in the center of the hole and gas is discharged together with the melt with a rapid decrease of pressure inside the reactor vessel; the high temperature of the melt also causes rapid erosion and enlargement of the vessel breach. If the hole is in the center of the bottom, nearly all corium can be ejected. A hole in the side of the vessel may lead to only partial ejection of corium, with a retained portion left inside the reactor vessel.
Melt-through of the reactor vessel may take from a few tens of minutes to several hours.
After breaching the reactor vessel, the conditions in the reactor cavity below the core govern the subsequent production of gases. If water is present, steam and hydrogen are generated; dry concrete results in production of carbon dioxide and a smaller amount of steam.
Interactions with concrete
Thermal decomposition of concrete produces water vapor and carbon dioxide, which may further react with the metals in the melt, oxidizing the metals, and reducing the gases to hydrogen and carbon monoxide. The decomposition of the concrete and volatilization of its alkali components is an endothermic process. Aerosols released during this phase are primarily based on concrete-originating silicon compounds; otherwise volatile elements, for example, caesium, can be bound in nonvolatile insoluble silicates.
Several reactions occur between the concrete and the corium melt. Free and chemically bound water is released from the concrete as steam. Calcium carbonate decomposes, producing carbon dioxide and calcium oxide. Water and carbon dioxide penetrate the corium mass, exothermically oxidizing the non-oxidized metals present in the corium and producing gaseous hydrogen and carbon monoxide; large amounts of hydrogen can be produced. The calcium oxide, silica, and silicates melt and are mixed into the corium. The oxide phase, in which the nonvolatile fission products are concentrated, can stabilize at temperatures of for a considerable period of time. A possibly present layer of denser molten metal, containing fewer radioisotopes (Ru, Tc, Pd, etc., initially composed of molten zircaloy, iron, chromium, nickel, manganese, silver, other construction materials, metallic fission products, and tellurium bound as zirconium telluride) than the oxide layer (which concentrates Sr, Ba, La, Sb, Sn, Nb, Mo, etc., and is initially composed primarily of zirconium dioxide and uranium dioxide, possibly with iron oxide and boron oxides), can form an interface between the oxides and the concrete farther below, slowing corium penetration and solidifying within a few hours. The oxide layer produces heat primarily by decay heat, while the principal heat source in the metal layer is the exothermic reaction with the water released from the concrete. Decomposition of the concrete and volatilization of the alkali metal compounds consume a substantial amount of heat.
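Representative reactions behind the gas production described above (an illustrative selection of standard reactions, not an exhaustive list from the source):

```latex
\mathrm{CaCO_3} \;\rightarrow\; \mathrm{CaO} + \mathrm{CO_2}
\qquad
\mathrm{Zr} + 2\,\mathrm{CO_2} \;\rightarrow\; \mathrm{ZrO_2} + 2\,\mathrm{CO}
\qquad
\mathrm{Fe} + \mathrm{H_2O} \;\rightarrow\; \mathrm{FeO} + \mathrm{H_2}
```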
The fast erosion phase of the concrete basemat lasts for about an hour and progresses to about one meter in depth, then slows to several centimeters per hour, and stops completely when the melt cools below the decomposition temperature of concrete (about ). Complete melt-through can occur in several days even through several meters of concrete; the corium then penetrates several meters into the underlying soil, spreads around, cools and solidifies.
During the interaction between corium and concrete, very high temperatures can be achieved. Less volatile aerosols of Ba, Ce, La, Sr, and other fission products are formed during this phase and introduced into the containment building at a time when most of the early aerosols are already deposited. Tellurium is released with the progress of zirconium telluride decomposition. Bubbles of gas flowing through the melt promote aerosol formation.
The thermal hydraulics of corium-concrete interaction (CCI, also called MCCI, "molten core-concrete interaction") is sufficiently well understood.
The dynamics of the movement of corium in and outside the reactor vessel are, however, highly complex, and the range of possible scenarios is wide; slow dripping of melt into an underlying water pool can result in complete quenching, while fast contact of a large mass of corium with water may result in a destructive steam explosion. Corium may be completely retained by the reactor vessel, or the reactor floor or some of the instrument penetration holes may be melted through.
The thermal load of corium on the floor below the reactor vessel can be assessed by a grid of fiber optic sensors embedded in the concrete. Pure silica fibers are needed as they are more resistant to high radiation levels.
Some reactor building designs, for example, the EPR, incorporate dedicated corium spread areas (core catchers), where the melt can deposit without coming in contact with water and without excessive reaction with concrete.
Only later, once a crust has formed on the melt, can limited amounts of water be introduced to cool the mass.
Materials based on titanium dioxide and neodymium(III) oxide seem to be more resistant to corium than concrete.
Deposition of corium on the containment vessel inner surface, e.g. by high-pressure ejection from the reactor pressure vessel, can cause containment failure by direct containment heating (DCH).
Specific incidents
Three Mile Island accident
During the Three Mile Island accident, a slow partial meltdown of the reactor core occurred. About of material melted and relocated in about 2 minutes, approximately 224 minutes after the reactor scram. A pool of corium formed at the bottom of the reactor vessel, but the reactor vessel was not breached. The layer of solidified corium ranged in thickness from 5 to 45 cm.
Samples were obtained from the reactor. Two masses of corium were found, one within the fuel assembly, one on the lower head of the reactor vessel. The samples were generally dull grey, with some yellow areas.
The mass was found to be homogeneous, primarily composed of molten fuel and cladding. The elemental constitution was about 70 wt.% uranium, 13.75 wt.% zirconium, 13 wt.% oxygen, with the balance being stainless steel and Inconel incorporated into the melt; the loose debris showed somewhat lower content of uranium (about 65 wt.%) and higher content of structural metals. The decay heat of corium at 224 minutes after scram was estimated to be 0.13 W/g, falling to 0.096 W/g at scram+600 minutes. Noble gases, caesium and iodine were absent, signifying their volatilization from the hot material. The samples were fully oxidized, signifying the presence of sufficient amounts of steam to oxidize all available zirconium.
Some samples contained a small amount of metallic melt (less than 0.5%), composed of silver and indium (from the control rods). A secondary phase composed of chromium(III) oxide was found in one of the samples. Some metallic inclusions contained silver but not indium, suggesting a temperature high enough to cause volatilization of both cadmium and indium. Almost all metallic components, with the exception of silver, were fully oxidized; even the silver was oxidized in some regions. The iron- and chromium-rich inclusions probably originate from a molten nozzle that did not have enough time to be distributed through the melt.
The bulk density of the samples varied between 7.45 and 9.4 g/cm3 (the densities of UO2 and ZrO2 are 10.4 and 5.6 g/cm3). The porosity of samples varied between 5.7% and 32%, averaging at 18±11%. Striated interconnected porosity was found in some samples, suggesting the corium was liquid for a sufficient time for formation of bubbles of steam or vaporized structural materials and their transport through the melt. A well-mixed (U,Zr)O2 solid solution indicates peak temperature of the melt between .
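As a rough consistency check of these figures, the bulk density of a porous oxide mixture is its fully dense (theoretical) density scaled by (1 − porosity); the sketch below treats the samples as UO2 + ZrO2 only, which is a simplifying assumption:

```python
# Rough consistency check of the quoted figures. Composition and component
# densities are taken from the text; neglecting the steel/Inconel fraction
# is a simplifying assumption.
RHO_UO2, RHO_ZRO2 = 10.4, 5.6        # g/cm^3, from the text

def mixture_density(wt_frac_uo2: float) -> float:
    """Theoretical density of a UO2/ZrO2 mixture (inverse rule of mixtures)."""
    wt_frac_zro2 = 1.0 - wt_frac_uo2
    return 1.0 / (wt_frac_uo2 / RHO_UO2 + wt_frac_zro2 / RHO_ZRO2)

rho_theory = mixture_density(0.81)    # ~81 wt% UO2 if 70 wt% U is present as UO2
for porosity in (0.057, 0.18, 0.32):  # porosity range reported for the samples
    print(f"porosity {porosity:5.1%} -> bulk density "
          f"{rho_theory * (1 - porosity):4.2f} g/cm^3")
# Output spans roughly 6.1-8.4 g/cm^3, in the same range as the measured
# 7.45-9.4 g/cm^3; exact agreement is not expected, since porosity and
# density varied between samples and the metal content is neglected.
```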
The microstructure of the solidified material shows two phases: (U,Zr)O2 and (Zr,U)O2. The zirconium-rich phase was found around the pores and on the grain boundaries and contains some iron and chromium in the form of oxides. This phase segregation suggests slow gradual cooling instead of fast quenching, estimated from the type of phase separation to have taken between 3 and 72 hours.
Chernobyl accident
The largest known amounts of corium were formed during the Chernobyl disaster. The molten mass of the reactor core dripped under the reactor vessel and is now solidified in the form of stalactites, stalagmites, and lava flows; the best-known formation is the "Elephant's Foot", located under the bottom of the reactor in the Steam Distribution Corridor.
The corium was formed in three phases.
The first phase lasted only several seconds, with temperatures locally exceeding , when a zirconium-uranium-oxide melt formed from no more than 30% of the core. Examination of a hot particle showed a formation of Zr-U-O and UOx-Zr phases; the 0.9-mm-thick niobium zircaloy cladding formed successive layers of UOx, UOx+Zr, Zr-U-O, metallic Zr(O), and zirconium dioxide. These phases were found individually or together in the hot particles dispersed from the core.
The second stage, lasting for six days, was characterized by interaction of the melt with silicate structural materials—sand, concrete, serpentinite. The molten mixture is enriched with silica and silicates.
The third stage followed, when lamination of the fuel occurred and the melt broke through into the floors below and solidified there.
The Chernobyl corium is composed of the reactor uranium dioxide fuel, its zircaloy cladding, molten concrete, as well as other materials in and below the reactor, and decomposed and molten serpentinite packed around the reactor as its thermal insulation. Analysis has shown that the corium was heated to at most , and remained above for at least 4 days.
The molten corium settled in the bottom of the reactor shaft, forming a layer of graphite debris on its top. Eight days after the meltdown the melt penetrated the lower biological shield and spread on the reactor room floor, releasing radionuclides. Further radioactivity was released when the melt came in contact with water.
Three different lavas are present in the basement of the reactor building: black, brown, and a porous ceramic. They are silicate glasses with inclusions of other materials. The porous lava is brown lava that dropped into water and was thus cooled rapidly.
During radiolysis of the Pressure Suppression Pool water below the Chernobyl reactor, hydrogen peroxide was formed. The hypothesis that the pool water was partially converted to H2O2 is confirmed by the identification of the white crystalline minerals studtite and metastudtite in the Chernobyl lavas, the only minerals that contain peroxide.
The coriums consist of a highly heterogeneous silicate glass matrix with inclusions. Distinct phases are present:
uranium oxides, from the fuel pellets
uranium oxides with zirconium (UOx+Zr)
Zr-U-O
zirconium dioxide with uranium
zirconium silicate with up to 10% of uranium as a solid solution, (Zr,U)SiO4, called chernobylite
uranium-containing glass, the glass matrix material itself; mainly a calcium aluminosilicate with small amount of magnesium oxide, sodium oxide, and zirconium dioxide
metal, present as solidified layers and as spherical inclusions of Fe-Ni-Cr alloy in the glass phase
Five types of material can be identified in Chernobyl corium:
Black ceramics, a glass-like coal-black material with a surface pitted with many cavities and pores. Usually located near the places where corium formed. Its two versions contain about 4–5 wt.% and about 7–8 wt.% of uranium.
Brown ceramics, a glass-like brown material usually glossy but also dull. Usually located on a layer of a solidified molten metal. Contains many very small metal spheres. Contains 8–10 wt.% of uranium. Multicolored ceramics contain 6–7% of fuel.
Slag-like granulated corium, slag-like irregular gray-magenta to dark-brown glassy granules with crust. Formed by prolonged contact of brown ceramics with water, located in large heaps in both levels of the Pressure Suppression Pool.
Pumice, friable pumice-like gray-brown porous formations formed from molten brown corium foamed with steam when immersed in water. Located in the pressure suppression pool in large heaps near the sink openings, where they were carried by water flow as they were light enough to float.
Metal, molten and solidified. Mostly located in the Steam Distribution Corridor. Also present as small spherical inclusions in all the oxide-based materials above. Does not contain fuel per se, but contains some metallic fission products, e.g. ruthenium-106.
The molten reactor core accumulated in room 305/2 until it reached the edges of the steam relief valves; it then migrated downward to the Steam Distribution Corridor. It also broke or burned through into room 304/3. The corium flowed from the reactor in three streams. Stream 1 was composed of brown lava and molten steel; the steel formed a layer on the floor of the Steam Distribution Corridor, on Level +6, with brown corium on top of it. From this area, brown corium flowed through the Steam Distribution Channels into the Pressure Suppression Pools on Level +3 and Level 0, forming porous and slag-like formations there. Stream 2 was composed of black lava and entered the other side of the Steam Distribution Corridor. Stream 3, also composed of black lava, flowed to other areas under the reactor. The well-known "Elephant's Foot" structure is composed of two metric tons of black lava and forms a multilayered structure similar to tree bark. It is said to have melted deep into the concrete. The material is dangerously radioactive, hard, and strong, and remote-controlled systems could not be used on it because the high radiation interfered with their electronics.
The Chernobyl melt was a silicate melt that contained inclusions of Zr/U phases, molten steel, and high levels of uranium zirconium silicate ("chernobylite", a black and yellow technogenic mineral). The lava flow consists of more than one type of material: a brown lava and a porous ceramic material have been found. The uranium-to-zirconium ratio in different parts of the solid varies considerably; in the brown lava a uranium-rich phase with a U:Zr ratio of 19:3 to about 19:5 is found, while the uranium-poor phase in the brown lava has a U:Zr ratio of about 1:10. The thermal history of the mixture can be determined from examination of the Zr/U phases: it can be shown that before the explosion the temperature in part of the core was higher than 2,000 °C, while in some areas the temperature was over .
Degradation of the lava
The corium undergoes degradation. The Elephant's Foot, hard and strong shortly after its formation, is now cracked enough that a glue-treated cotton ball can pull away 1–2 centimeters of material. The shape of the structure itself changes as the material slides down and settles. The corium temperature now differs only slightly from ambient, so the material is subject both to day–night temperature cycling and to weathering by water. The heterogeneous nature of corium and the differing thermal expansion coefficients of its components cause material deterioration with thermal cycling. Large residual stresses were introduced during solidification owing to the uncontrolled cooling rate. Water seeping into pores and microcracks has frozen there; this is the same process that creates potholes in roads, and it accelerates cracking.
Corium (and also highly irradiated uranium fuel) has the property of spontaneous dust generation, or spontaneous self-sputtering of the surface. The alpha decay of isotopes inside the glassy structure causes Coulomb explosions, degrading the material and releasing submicron particles from its surface. The level of radioactivity is such that during 100 years the lava's self-irradiation ( α decays per gram and 2 to of β or γ) will fall short of the levels required to greatly change the properties of glass (10¹⁸ α decays per gram and 10⁸ to 10⁹ Gy of β or γ). The lava's rate of dissolution in water is also very low (10⁻⁷ g·cm⁻²·day⁻¹), suggesting that the lava is unlikely to dissolve in water.
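To put the quoted dissolution rate in perspective, a back-of-envelope estimate for one square metre of exposed surface (an assumed area, chosen only for illustration) gives:

```latex
10^{-7}\ \mathrm{g\,cm^{-2}\,day^{-1}} \times 10^{4}\ \mathrm{cm^{2}} \times 365\ \mathrm{days}
\;\approx\; 0.4\ \mathrm{g\ per\ year}
```

which is negligible against lava masses measured in tonnes.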
It is unclear how long the ceramic form will retard the release of radioactivity. From 1997 to 2002, a series of papers suggested that the self-irradiation of the lava would convert all 1,200 tons into a submicrometre, mobile powder within a few weeks. It has since been reported, however, that the degradation of the lava is likely to be a slow and gradual process rather than a sudden, rapid one. The same paper states that the loss of uranium from the wrecked reactor is only per year. This low rate of uranium leaching suggests that the lava is resisting its environment. The paper also states that when the shelter is improved, the leaching rate of the lava will decrease.
Some of the surfaces of the lava flows have started to show new uranium minerals such as UO3·2H2O (eliantinite), (UO2)O2·4H2O (studtite), uranyl carbonate (rutherfordine), čejkaite (Na4(UO2)(CO3)3), and the unnamed compound Na3U(CO3)2·2H2O. These are soluble in water, allowing mobilization and transport of uranium. They appear as whitish-yellow patches on the surface of the solidified corium. These secondary minerals show a several hundred times lower concentration of plutonium and a several times higher concentration of uranium than the lava itself.
Fukushima Daiichi
The March 11, 2011, Tōhoku earthquake and tsunami caused various nuclear accidents, the worst of which was the Fukushima Daiichi nuclear disaster. An estimated eighty minutes after the tsunami struck, temperatures inside Unit 1 of the Fukushima Daiichi Nuclear Power Plant reached over 2,300 °C, causing the fuel assembly structures, control rods and nuclear fuel to melt and form corium. (The physical nature of the damaged fuel has not been fully determined, but it is assumed to have become molten.) The reactor core isolation cooling system (RCIC) was successfully activated for Unit 3; the Unit 3 RCIC subsequently failed, however, and by about 09:00 on March 13 the nuclear fuel had melted into corium. Unit 2 retained RCIC function slightly longer, and corium is not believed to have started to pool on the reactor floor until around 18:00 on March 14. TEPCO believes the fuel assembly fell out of the pressure vessel to the floor of the primary containment vessel, and it has found fuel debris on the floor of the primary containment vessel.
References
External links
INSP Chornobyl Photobook (captions)
Nuclear chemistry
Nuclear accidents and incidents
Nuclear reactor safety | Corium (nuclear reactor) | Physics,Chemistry | 5,693 |
6,300,646 | https://en.wikipedia.org/wiki/Electromagnetic%20testing | Electromagnetic testing (ET), as a form of nondestructive testing, is the process of inducing electric currents or magnetic fields or both inside a test object and observing the electromagnetic response. If the test is set up properly, a defect inside the test object creates a measurable response.
The term "electromagnetic testing" is often intended to mean simply eddy-current testing (ECT). However, with an expanding number of electromagnetic and magnetic test methods, "electromagnetic testing" is more often used to mean the whole class of electromagnetic test methods, of which eddy-current testing is just one. also useful for the testing of drill pipes.
Common methods
Eddy-current testing (ECT) is used to detect near-surface cracks and corrosion in metallic objects such as tubes and aircraft fuselage and structures. ECT is more commonly applied to nonferromagnetic materials, since in ferromagnetic materials the depth of penetration is relatively small.
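The penetration limit mentioned above follows from the standard skin-depth formula δ = 1/√(πfμσ); the sketch below uses typical textbook material constants and an assumed 100 kHz test frequency, not values from this article:

```python
import math

# Why eddy-current testing penetrates ferromagnetic steel poorly: skin depth
# delta = 1/sqrt(pi * f * mu * sigma). Material constants are typical
# textbook values (assumptions, not from the source).
MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def skin_depth(freq_hz: float, sigma: float, mu_r: float) -> float:
    """Skin depth in metres for a conductor at the given test frequency."""
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma)

f = 100e3  # 100 kHz test frequency (assumed)
for name, sigma, mu_r in [("aluminium", 3.5e7, 1.0),
                          ("carbon steel", 5.0e6, 100.0)]:
    print(f"{name:12s}: delta = {skin_depth(f, sigma, mu_r) * 1e3:.3f} mm")
# aluminium ~0.27 mm vs steel ~0.07 mm: the high permeability of steel
# confines the induced currents to a thin surface layer.
```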
Remote field testing (RFT) is used for nondestructive testing (NDT) of steel tubes and pipes.
Magnetic flux leakage testing (MFL) is also used for nondestructive testing (NDT) of steel tubes and pipes. At present RFT is more commonly used in small diameter tubes and MFL in larger diameter pipes over long travel distances.
Wire rope testing is MFL applied to steel cables, to detect broken strands of wire.
Magnetic particle inspection (MT or MPI) is a form of MFL where small magnetic particles in the form of a powder or liquid are sprayed on the magnetized steel test object and gather at surface-breaking cracks.
Alternating current field measurement (ACFM) is similar to eddy-current testing applied to steel. Its most common application is to detect and size cracks in welds.
Pulsed eddy current enables the detection of large-volume metal loss in steel objects from a considerable stand-off, allowing steel pipes to be tested without removing insulation.
See also
Electromagnetic compatibility
References
Hugo L. Libby, Introduction to Electromagnetic Nondestructive Test Methods, New York : Wiley-Interscience, 1971.
The American Society for Nondestructive Testing, NDT Handbook, 3rd ed., Vol. 5, Electromagnetic Testing.
William Lord, "Electromagnetic NDT Techniques — A 40 Year Retrospective or Retirement for Cause" in Materials Evaluation, June 2006, pp. 547–550.
Nondestructive testing | Electromagnetic testing | Materials_science | 496 |
7,578,771 | https://en.wikipedia.org/wiki/Engineering%20psychology | Engineering psychology, also known as Human Factors Engineering or Human Factors Psychology, is the science of human behavior and capability, applied to the design and operation of systems and technology.
As an applied field of psychology and an interdisciplinary part of ergonomics, it aims to improve the relationships between people and machines by redesigning equipment, interactions, or the environment in which they take place. The work of an engineering psychologist is often described as making the relationship more "user-friendly."
History
Engineering psychology was created from within experimental psychology; its origins date to World War I (1914). The subject developed during this period because many of America's weapons were failing, with faults ranging from bombs not falling in the right place to weapons striking ordinary marine life, and the failures were traced back to human error. One of the first efforts to restrain human error was the application of psychoacoustics by S.S. Stevens and L.L. Beranek, two of the first American psychologists called upon to help change how people and machinery worked together. One of their first assignments was to try to reduce noise levels in military aircraft. This work was directed at improving the intelligibility of military communication systems and appears to have been very successful. Levels of research in engineering psychology did not begin to increase significantly until after August 1945, however, as the research programs started in 1940 began to show results.
Lillian Gilbreth combined the talents of an engineer, psychologist and mother of twelve. Her appreciation of human factors made her successful in the implementation of time and motion studies and scientific management. She went on to pioneer ergonomics in the kitchen, inventing the pedal bin, for example.
In Britain, the two world wars generated much formal study of human factors which affected the efficiency of munitions output and warfare. In World War I, the Health of Munitions Workers Committee was created in 1915. This made recommendations based upon studies of the effects of overwork on efficiency which resulted in policies of providing breaks and limiting hours of work, including avoidance of work on Sunday. The Industrial Fatigue Research Board was created in 1918 to take this work forward. In WW2, researchers at Cambridge University such as Frederic Bartlett and Kenneth Craik started work on the operation of equipment in 1939 and this resulted in the creation of the Unit for Research in Applied Psychology in 1944.
Related subjects
Cognitive ergonomics and cognitive engineering - the study of cognition in work settings, in order to optimize human well-being and system performance. It is a subset of the larger field of human factors and ergonomics.
Applied psychology - The use of psychological principles to overcome problems in other domains. It has been argued that engineering psychology is separate from applied (cognitive) psychology because advances in cognitive psychology have infrequently informed engineering psychology research. Surprisingly, work in engineering psychology often seems to inform developments in cognitive psychology. For example, engineering psychology research has enabled cognitive psychologists to explain why GUIs seem easier to use than character-based computer interfaces (such as DOS).
Engineering Psychology, Ergonomics and Human Factors
Although the comparability of these terms, among many others, has been a topic of debate, the differences between these fields can be seen in their respective applications.
Engineering psychology is concerned with the adaptation of equipment and environment to people, based upon their psychological capacities and limitations, with the objective of improving overall system performance involving human and machine elements. Engineering psychologists strive to match equipment requirements with the capabilities of human operators by changing the design of the equipment. An example of this matching was the redesign of the mailbags used by letter carriers: engineering psychologists discovered that a mailbag with a waist-support strap, and a double bag that requires the use of both shoulders, reduces muscle fatigue. Another example involves the cumulative trauma disorders grocery checkout workers suffered as the result of repetitive wrist movements using electronic scanners. Engineering psychologists found that the optimal checkout station design would allow workers to easily use either hand to distribute the workload between both wrists.
The field of ergonomics is based on scientific studies of ordinary people in work situations and is applied to the design of processes and machines, to the layout of work places, to methods of work, and to the control of the physical environment, in order to achieve greater efficiency of both men and machines. An example of an ergonomics study is the evaluation of the effects of screwdriver handle shape, surface material, and workpiece orientation on torque performance, finger force distribution, and muscle activity in a maximum screwdriving torque task. Another example is the study of the effects of shoe traction and obstacle height on friction. Similarly, many topics in ergonomics deal with the actual science of matching people to equipment, encompassing narrower fields such as engineering psychology.
At one point, the term human factors was used in place of ergonomics in Europe. Human factors involves interdisciplinary scientific research and studies that seek greater recognition and understanding of workers' characteristics, needs, abilities, and limitations when the procedures and products of technology are being designed. The field utilizes knowledge from several disciplines, such as mechanical engineering, psychology, and industrial engineering, to design instruments.
Human factors is broader than engineering psychology, which is focused specifically on designing systems that accommodate the information-processing capabilities of the brain.
Although the work in the respective fields differs, there are some similarities between them. These fields share the same objectives: to optimize the effectiveness and efficiency with which human activities are conducted, and to improve the general quality of life through increased safety, reduced fatigue and stress, increased comfort, and satisfaction.
Importance of Engineering Psychologists
Engineering psychologists contribute to the design of a variety of products, including dental and surgical tools, cameras, toothbrushes and car-seats. They have been involved in the re-design of the mailbags used by letter carriers. More than 20% of letter carriers suffer from musculoskeletal injury such as lower back pain from carrying mailbags slung over their shoulders. A mailbag with a waist-support strap, and a double bag that requires the use of both shoulders, has been shown to reduce muscle fatigue.
Research by engineering psychologists has demonstrated that using cell-phones while driving degrades performance by increasing driver reaction time, particularly among older drivers, and can lead to higher accident risk among drivers of all ages. Research findings such as these have supported governmental regulation of cell-phone use.
References
Bibliography
Journal of Engineering Psychology
Ergonomics
Systems psychology
Engineering disciplines | Engineering psychology | Engineering | 1,317 |
1,687,906 | https://en.wikipedia.org/wiki/Morphogen | A morphogen is a substance whose non-uniform distribution governs the pattern of tissue development in the process of morphogenesis or pattern formation, one of the core processes of developmental biology, establishing positions of the various specialized cell types within a tissue. More specifically, a morphogen is a signaling molecule that acts directly on cells to produce specific cellular responses depending on its local concentration.
Typically, morphogens are produced by source cells and diffuse through surrounding tissues in an embryo during early development, such that concentration gradients are set up. These gradients drive the process of differentiation of unspecialised stem cells into different cell types, ultimately forming all the tissues and organs of the body. The control of morphogenesis is a central element in evolutionary developmental biology (evo-devo).
History
The term was coined by Alan Turing in the paper "The Chemical Basis of Morphogenesis", where he predicted a chemical mechanism for biological pattern formation, decades before the formation of such patterns was demonstrated.
The concept of the morphogen has a long history in developmental biology, dating back to the work of the pioneering Drosophila (fruit fly) geneticist, Thomas Hunt Morgan, in the early 20th century. Lewis Wolpert refined the morphogen concept in the 1960s with the French flag model, which described how a morphogen could subdivide a tissue into domains of different target gene expression (corresponding to the colours of the French flag). This model was championed by the leading Drosophila biologist, Peter Lawrence. Christiane Nüsslein-Volhard was the first to identify a morphogen, Bicoid, one of the transcription factors present in a gradient in the Drosophila syncytial embryo. She was awarded the 1995 Nobel Prize in Physiology or Medicine for her work explaining the morphogenic embryology of the common fruit fly. Groups led by Gary Struhl and Stephen Cohen then demonstrated that a secreted signalling protein, decapentaplegic (the Drosophila homologue of transforming growth factor beta), acted as a morphogen during the later stages of Drosophila development.
Mechanism
During early development, morphogen gradients result in the differentiation of specific cell types in a distinct spatial order. The morphogen provides spatial information by forming a concentration gradient that subdivides a field of cells by inducing or maintaining the expression of different target genes at distinct concentration thresholds. Thus, cells far from the source of the morphogen will receive low levels of morphogen and express only low-threshold target genes. In contrast, cells close to the source of morphogen will receive high levels of morphogen and will express both low- and high-threshold target genes. Distinct cell types emerge as a consequence of the different combination of target gene expression. In this way, the field of cells is subdivided into different types according to their position relative to the source of the morphogen. This model is assumed to be a general mechanism by which cell type diversity can be generated in embryonic development in animals.
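The threshold logic described above can be made concrete with a small sketch in the spirit of Wolpert's French flag model (illustrative only: the exponential gradient shape, decay length, threshold values, and colour-coded fates are assumptions made for the example, not measured biology):

// Toy model: a morphogen gradient from a source at x = 0 subdivides a row of cells.
// All numbers are illustrative, not biological measurements.
public class FrenchFlag {
    static final double HIGH_THRESHOLD = 0.6; // needed to switch on high-threshold target genes
    static final double LOW_THRESHOLD = 0.2;  // needed to switch on low-threshold target genes
    static final double DECAY_LENGTH = 3.0;   // gradient decay length, in cell diameters

    // Assumed exponential concentration profile away from the source.
    static double concentration(double x) {
        return Math.exp(-x / DECAY_LENGTH);
    }

    // Cell fate from local concentration: both gene sets on, only low-threshold genes on, or neither.
    static String fate(double c) {
        if (c >= HIGH_THRESHOLD) return "blue";
        if (c >= LOW_THRESHOLD) return "white";
        return "red";
    }

    public static void main(String[] args) {
        for (int x = 0; x < 10; x++) {
            double c = concentration(x);
            System.out.printf("cell %d: concentration %.2f -> %s%n", x, c, fate(c));
        }
    }
}

Running the sketch prints three contiguous domains (blue, white, red), mirroring how a single graded signal can be read out as discrete territories of target gene expression.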
Some of the earliest and best-studied morphogens are transcription factors that diffuse within early Drosophila melanogaster (fruit fly) embryos. However, most morphogens are secreted proteins that signal between cells.
Genes and signals
A morphogen spreads from a localized source and forms a concentration gradient across a developing tissue. In developmental biology, 'morphogen' is rigorously used to mean a signalling molecule that acts directly on cells (not through serial induction) to produce specific cellular responses that depend on morphogen concentration. This definition concerns the mechanism, not any specific chemical formula, so simple compounds such as retinoic acid (the active metabolite of retinol or vitamin A) may also act as morphogens. The model is not universally accepted due to specific issues with setting up a gradient in the tissue outlined in the French flag model and subsequent work showing that the morphogen gradient of the Drosophila embryo is more complex than the simple gradient model would indicate.
Examples
Proposed mammalian morphogens include retinoic acid, sonic hedgehog (SHH), transforming growth factor beta (TGF-β)/bone morphogenetic protein (BMP), and Wnt/beta-catenin. Morphogens in Drosophila include decapentaplegic and hedgehog.
During development, retinoic acid, a metabolite of vitamin A, is used to stimulate the growth of the posterior end of the organism. Retinoic acid binds to retinoic acid receptors, which act as transcription factors to regulate the expression of Hox genes. Exposure of embryos to exogenous retinoids, especially in the first trimester, results in birth defects.
TGF-β family members are involved in dorsoventral patterning and the formation of some organs. Binding of TGF-β to type II TGF-β receptors recruits type I receptors, causing the latter to be transphosphorylated. The type I receptors activate Smad proteins, which in turn act as transcription factors regulating gene transcription.
Sonic hedgehog (SHH) is a morphogen essential to early patterning in the developing embryo. SHH binds to the Patched receptor, which in the absence of SHH inhibits the Smoothened receptor. Activated Smoothened in turn causes Gli1, Gli2, and Gli3 to be translocated into the nucleus, where they activate target genes such as PTCH1 and Engrailed.
Fruit fly
Drosophila melanogaster has an unusual developmental system, in which the first thirteen cell divisions of the embryo occur within a syncytium prior to cellularization. Essentially the embryo remains a single cell with over 8000 nuclei evenly spaced near the membrane until the fourteenth cell division, when independent membranes furrow between the nuclei, separating them into independent cells. As a result, in fly embryos transcription factors such as Bicoid or Hunchback can act as morphogens because they can freely diffuse between nuclei to produce smooth gradients of concentration without relying on specialized intercellular signalling mechanisms. Although there is some evidence that homeobox transcription factors similar to these can pass directly through cell membranes, this mechanism is not believed to contribute greatly to morphogenesis in cellularized systems.
In most developmental systems, such as human embryos or later Drosophila development, syncytia occur only rarely (such as in skeletal muscle), and morphogens are generally secreted signalling proteins. These proteins bind to the extracellular domains of transmembrane receptor proteins, which use an elaborate process of signal transduction to communicate the level of morphogen to the nucleus. The nuclear targets of signal transduction pathways are usually transcription factors, whose activity is regulated in a manner that reflects the level of morphogen received at the cell surface. Thus, secreted morphogens act to generate gradients of transcription factor activity just like those that are generated in the syncytial Drosophila embryo.
Discrete target genes respond to different thresholds of morphogen activity. The expression of target genes is controlled by segments of DNA called 'enhancers' to which transcription factors bind directly. Once bound, the transcription factor then stimulates or inhibits the transcription of the gene and thus controls the level of expression of the gene product (usually a protein). 'Low-threshold' target genes require only low levels of morphogen activity to be regulated and feature enhancers that contain many high-affinity binding sites for the transcription factor. 'High-threshold' target genes have relatively fewer binding sites or low-affinity binding sites that require much greater levels of transcription factor activity to be regulated.
The general mechanism by which the morphogen model works can explain the subdivision of tissues into patterns of distinct cell types, assuming it is possible to create and maintain a gradient. However, the morphogen model is often invoked for additional activities, such as controlling the growth of the tissue or orienting the polarity of cells within it (for example, the hairs on a forearm point in one direction), which cannot be explained by the model.
Eponyms
The organizing role that morphogens play during animal development was acknowledged in the 2014 naming of a new beetle genus, Morphogenia. The type species, Morphogenia struhli, was named in honour of Gary Struhl, the US developmental biologist who was instrumental in demonstrating that the decapentaplegic and wingless genes encode proteins that function as morphogens during Drosophila development.
References
Further reading
Morphogens | Morphogen | Biology | 1,777 |
24,355,173 | https://en.wikipedia.org/wiki/C20H32O5 | The molecular formula C20H32O5 (molar mass: 352.465 g/mol) may refer to:
Levuglandin D2
Levuglandin E2
Lipoxin
Prostacyclin
Prostaglandin D2
Prostaglandin E2, an abortifacient
Prostaglandin H2
Thromboxane A2 | C20H32O5 | Chemistry | 96 |
36,968,797 | https://en.wikipedia.org/wiki/Theta%20Crateris | Theta Crateris (θ Crateris) is a solitary star in the southern constellation of Crater. It is a photometric-standard star that is faintly visible to the naked eye with an apparent visual magnitude of 4.70. With an annual parallax shift of 11.63 mas as seen from Earth, it is located around 280 light years from the Sun. At that distance, the visual magnitude of the star is diminished by an extinction factor of 0.07 because of interstellar dust.
This is a B-type main sequence star with a stellar classification of B9.5 Vn, where the 'n' suffix indicates "nebulous" absorption lines due to rapid rotation. It is spinning with a projected rotational velocity of 212 km/s, giving the star an oblate shape with an equatorial bulge that is an estimated 7% larger than the polar radius. The star has 2.79 times the mass of the Sun and around 3.1 times the Sun's radius. With an age of about 117 million years, it is radiating 107 times the solar luminosity from its outer atmosphere at an effective temperature of 11,524 K.
References
External links
B-type main-sequence stars
Crater (constellation)
Crateris, Theta
Crateris, 21
100889
056633
4468
Durchmusterung objects | Theta Crateris | Astronomy | 274 |
24,017,146 | https://en.wikipedia.org/wiki/C11H16N2 | The molecular formula C11H16N2 (molar mass: 176.263 g/mol) may refer to:
Benzylpiperazine (BZP)
ortho-Methylphenylpiperazine (oMPP)
Molecular formulas | C11H16N2 | Physics,Chemistry | 69 |
3,092,530 | https://en.wikipedia.org/wiki/Mother%20Box | Mother Boxes are fictional devices in Jack Kirby's Fourth World setting in the DC Universe.
The Mother Boxes appeared in the feature films Justice League and Zack Snyder's Justice League of the DC Extended Universe.
History
Created by Apokoliptian scientist Himon using the mysterious Element X, they are generally thought to be sentient, miniaturized, portable supercomputers, although their true nature and origins are unknown. They possess various powers, including teleportation, energy manipulation, and healing. Despite their name, Mother Boxes are not always box-shaped.
Additionally, the New Gods of Apokolips use equivalents of Mother Boxes called Father Boxes.
Interpretation
In a 2008 article, John Hodgman observed: "Mister Miracle, a warrior of Apokolips who flees to Earth to become a 'super escape artist', keeps a 'Mother Box' up his sleeve — a small, living computer that can enable its user to do almost anything, so long as it is sufficiently loved. In Kirby's world, all machines are totems: weapons and strange vehicles fuse technology and magic, and the Mother Box in particular uncannily anticipates the gadget fetishism that infects our lives today. The Bluetooth headset may well be a Kirby creation". Similarly, Mike Cecchini of Den of Geek described the Mother Box as "an alien smartphone that can do anything from heal the injured to teleport you across time and space", and Christian Holub in Entertainment Weekly called it "basically a smartphone, as designed by gods". Mother Boxes have also been interpreted as a symbol of the "ideal mother" and an example of the role of motherhood in Jack Kirby's Fourth World stories.
In other media
Television
Mother Boxes appear in series set in the DC Animated Universe (DCAU).
Mother Boxes and Father Boxes appear in Young Justice.
Mother Boxes appear in Justice League Action.
A Mother Box appears in DC Super Hero Girls: Super Hero High.
A Mother Box appears in the Harley Quinn episode "Inner (Para) Demons".
Film
DC Extended Universe
In Batman v Superman: Dawn of Justice, a Mother Box appears briefly in footage that Batman obtained from Lex Luthor. The Box is the final component that transforms Victor Stone into Cyborg, thus saving his life in the process. Additionally, Steppenwolf and his Mother Boxes appear in a post-credits scene in the Ultimate Edition of the film.
In Justice League, Steppenwolf is in search of three Mother Boxes hidden away on Earth. Two are located in Themyscira and Atlantis, while the third is the one that had been seen in Batman v Superman and was used to activate Cyborg. Previously, Steppenwolf had used the Boxes in his original invasion of Earth, intending to use them to terraform the planet before being driven off by the combined force of the Olympian Gods, Atlanteans, Amazons, humans, and Yalan Gur of the Green Lantern Corps. After the war, the boxes were left on Earth, and the Amazons, Atlanteans, and humans each took custody of one of them. When all three boxes awaken after years of dormancy, Steppenwolf returns seeking to use them to finish what he had started. Eventually, after the Justice League defeat Steppenwolf, the first two boxes are each returned to their respective custodies, while Silas Stone begins researching the third box with his son to explore the extent of its powers.
Zack Snyder's Justice League depicts the Mother Boxes generally the same as in the theatrical version. After a failed invasion of Earth by Darkseid thousands of years ago, the Mother Boxes are separated and hidden away as in the theatrical release. The Amazonian Mother Box "awakens" upon Superman's death at the end of Batman v Superman, and alerts Steppenwolf to its location. He escapes with it after a short battle with the Amazonians and proceeds to search for the other two by capturing and interrogating Atlanteans and S.T.A.R. Labs scientists. Steppenwolf seizes the Atlantean Mother Box after a fight with Aquaman and Mera. The protagonists resurrect Superman with the third Mother Box, and Steppenwolf is able to claim it after an amnesiac Superman attacks the other superheroes. The superheroes locate Steppenwolf's fortress in Russia thanks to Silas Stone's self-sacrifice which allows them to detect the third Mother Box's location. They launch an attack on the fortress so Cyborg can interface with the Boxes and prevent the Unity. After they fail and Earth is destroyed, the Flash travels back in time to enable Cyborg to successfully deactivate the Boxes, preventing the Unity and defeating Steppenwolf, who is subsequently killed through the combined efforts of Aquaman, Superman, and Wonder Woman. In the aftermath, DeSaad informs Darkseid that the Mother Boxes are now destroyed, forcing Darkseid to conquer Earth using "the old ways", through military conquest.
In the Blu-ray release of Wonder Woman, the epilogue Etta's Mission is included, detailing events that transpired after the film's story. Etta Candy's titular mission involves her, Diana Prince, and Steve Trevor retrieving one of the three Mother Boxes.
Animation
Two Mother Boxes appear in Superman/Batman: Apocalypse.
Numerous Mother Boxes appear in Justice League: War, being used to transport Parademons to Earth. When the Mother Boxes were activated, one of them was in Victor Stone's possession and badly wounded him, leading to his transformation into Cyborg. His newfound cybernetics gave him an intimate link to machinery that allowed him to communicate with Mother Boxes. Ultimately, he uses several Boom Tubes to repel the Apokoliptian invasion forces.
In Reign of the Supermen, Lex Luthor uses the Mother Box to free the Justice League, who were imprisoned in another dimension, and help Steel and Superboy defeat the drones.
Video games
A Mother Box is central in the plot of Justice League Heroes as it is coveted by Brainiac and used as a way to transform Earth into a "New Apokolips" by Darkseid.
In Injustice 2, Mother Boxes serve as the game's loot box rewards system, offering differing rewards depending on the rarity. Additionally, Cyborg utilizes them in gameplay to create drones that can target the opponent from multiple directions.
In Lego DC Super-Villains, a Mother Box is stolen from Wayne Tech and owned by Harley Quinn, who names it "Boxy". It is also revealed that the Mother Box contains a fragment of the Anti-Life Equation, which is then absorbed by the Rookie.
References
Fictional computers
Fictional elements introduced in 1971
Fourth World (comics) | Mother Box | Technology | 1,407 |
1,904,373 | https://en.wikipedia.org/wiki/Transformation%20%28function%29 | In mathematics, a transformation, transform, or self-map is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. .
Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations.
Partial transformations
While it is common to use the term transformation for any function of a set into itself (especially in terms like "transformation semigroup" and similar), there exists an alternative form of terminological convention in which the term "transformation" is reserved only for bijections. When such a narrow notion of transformation is generalized to partial functions, then a partial transformation is a function f: A → B, where both A and B are subsets of some set X.
Algebraic structures
The set of all transformations on a given base set, together with function composition, forms a regular semigroup.
Combinatorics
For a finite set of cardinality n, there are n^n transformations and (n + 1)^n partial transformations, since each of the n elements can be mapped to any of the n elements, and a partial transformation additionally allows an element to be left undefined. For example, a two-element set admits 2^2 = 4 transformations and 3^2 = 9 partial transformations.
See also
Coordinate transformation
Data transformation (statistics)
Geometric transformation
Infinitesimal transformation
Linear transformation
List of transforms
Rigid transformation
Transformation geometry
Transformation semigroup
Transformation group
Transformation matrix
References
External links
Functions and mappings | Transformation (function) | Mathematics | 262 |
63,236,535 | https://en.wikipedia.org/wiki/Accelerator%20neutrino | An accelerator neutrino is a human-generated neutrino or antineutrino obtained using particle accelerators, in which beam of protons is accelerated and collided with a fixed target, producing mesons (mainly pions) which then decay into neutrinos. Depending on the energy of the accelerated protons and whether mesons decay in flight or at rest it is possible to generate neutrinos of a different flavour, energy and angular distribution. Accelerator neutrinos are used to study neutrino interactions and neutrino oscillations taking advantage of high intensity of neutrino beams, as well as a possibility to control and understand their type and kinematic properties to a much greater extent than for neutrinos from other sources.
Muon neutrino beam production
The process of the muon neutrino or muon antineutrino beam production consists of the following steps:
Acceleration of a primary proton beam in a particle accelerator.
Proton beam collision with a fixed target. In such a collision secondary particles, mainly pions and kaons, are produced.
Focusing, by a set of magnetic horns, the secondary particles with a selected charge: positive to produce the muon neutrino beam, negative to produce the muon anti-neutrino beam.
Decay of the secondary particles in flight in a long (of the order of hundreds of meters) decay tunnel. Charged pions decay more than 99.98% of the time into a muon and the corresponding neutrino, conserving electric charge and lepton number:
π+ → μ+ + νμ, π− → μ− + ν̄μ
It is usually intended to have a pure beam, containing only one type of neutrino: either νμ or ν̄μ. Thus, the length of the decay tunnel is optimised to maximise the number of pion decays and simultaneously minimise the number of muon decays, in which undesirable types of neutrinos are produced:
μ+ → e+ + νe + ν̄μ, μ− → e− + ν̄e + νμ
In most kaon decays the appropriate type of neutrino (muon neutrinos for positive kaons and muon antineutrinos for negative kaons) is produced:
K+ → μ+ + νμ, K− → μ− + ν̄μ (63.56% of decays),
K+ → π0 + μ+ + νμ, K− → π0 + μ− + ν̄μ (3.35% of decays),
however, decays into electron (anti)neutrinos also make up a significant fraction:
K+ → π0 + e+ + νe, K− → π0 + e− + ν̄e (5.07% of decays).
Absorption of the remaining hadrons and charged leptons in a beam dump (usually a block of graphite) and in the ground. Meanwhile, the neutrinos travel on unimpeded, close to the direction of their parent particles.
Neutrino beam kinematic properties
Neutrinos do not have an electric charge, so they cannot be focused or accelerated using electric and magnetic fields, and thus it is not possible to create a parallel, mono-energetic beam of neutrinos, as is done for charged-particle beams in accelerators. To some extent, it is possible to control the direction and energy of the neutrinos by properly selecting the energy of the primary proton beam and focusing the secondary pions and kaons, because the neutrinos inherit part of their parents' kinetic energy and move in a direction close to that of the parent particles.
Off-axis beam
A method that further narrows the energy distribution of the produced neutrinos is the use of a so-called off-axis beam. The accelerator neutrino beam is a wide beam without clear boundaries, because the neutrinos in it do not move in parallel but have a certain angular distribution. The farther from the axis (centre) of the beam, the smaller the number of neutrinos, but the distribution of energy also changes: the energy spectrum becomes narrower and its maximum shifts towards lower energies. The off-axis angle, and thus the neutrino energy spectrum, can be optimised to maximise the neutrino oscillation probability or to select the energy range in which the desired type of neutrino interaction is dominant.
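The narrowing follows from two-body decay kinematics. As a sketch (standard textbook kinematics, not taken from the article itself), for a pion of energy E_π decaying as π → μν, the neutrino energy observed at a small angle θ from the pion flight direction is approximately

E_\nu \approx \frac{\left(1 - m_\mu^2/m_\pi^2\right) E_\pi}{1 + \gamma^2 \theta^2}, \qquad \gamma = \frac{E_\pi}{m_\pi},

with 1 − m_μ²/m_π² ≈ 0.43. At a fixed non-zero off-axis angle, E_ν depends only weakly on E_π, so pions of different energies contribute neutrinos of similar energy, which is what narrows the off-axis spectrum.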
The first experiment in which an off-axis neutrino beam was used was the T2K experiment.
Monitored and tagged neutrino beams
A high level of control over neutrinos at the source can be achieved by monitoring the production of charged leptons (positrons, muons) in the decay tunnel of the neutrino beam. Facilities that employ this method are called monitored neutrino beams. If the lepton rate is sufficiently small, modern particle detectors can time-tag the charged lepton produced in the decay tunnel and associate this lepton with the neutrino observed in the neutrino detector. This idea, which dates back to the 1960s, has been developed in the framework of the tagged neutrino beam concept, but it has not yet been demonstrated. Monitored neutrino beams produce neutrinos in a narrow energy range and can therefore employ the off-axis technique to predict the neutrino energy by measuring the interaction vertex, that is, the distance of the neutrino interaction from the nominal beam axis. An energy resolution in the 10-20% range was demonstrated in 2021 by the ENUBET Collaboration.
Neutrino beams in physics experiments
Below is the list of muon (anti)neutrino beams used in past or current physics experiments:
CERN Neutrinos to Gran Sasso (CNGS) beam produced by Super Proton Synchrotron at CERN used in OPERA and ICARUS experiments.
Booster Neutrino Beam (BNB) produced by the Booster synchrotron at Fermilab used in SciBooNE, MiniBooNE and MicroBooNE experiments.
Neutrinos at the Main Injector (NuMI) beam produced by the Main Injector synchrotron at Fermilab used in MINOS, MINERνA and NOνA experiments.
K2K neutrino beam produced by a 12 GeV proton synchrotron at KEK in Tsukuba used in K2K experiment.
T2K neutrino beam produced by the Main Ring synchrotron at J-PARC in Tokai used in T2K experiment.
Notes
Further reading
External links
Accelerator neutrinos - Fermilab
Accelerator physics
Neutrinos | Accelerator neutrino | Physics | 1,322 |
5,995,723 | https://en.wikipedia.org/wiki/Jurkat%20cells | Jurkat cells are an immortalized line of human T lymphocyte cells that are used to study acute T cell leukemia, T cell signaling, and the expression of various chemokine receptors susceptible to viral entry, particularly HIV. Jurkat cells can, upon stimulation by phytohaemagglutinin (PHA) or other stimulants such as phorbol 12-myristate 13-acetate (PMA or simply phorbol), express interleukin 2, and are used in research involving the susceptibility of cancers to drugs and radiation. However in the general case chronic phytohaemagglutinin kills Jurkat cells, though Jurkat clones can be devised which resist PHA-induced killing. The object of the system is that Jurkat cells can react to a signal and their expression can be measured. Jurkat cells with elements missing or knocked out can then provide a basis for examining the importance of that element on the expression of interleukin 2.
History
The Jurkat cell line (originally called JM) was established in the mid-1970s from the peripheral blood of a 14-year-old boy with T cell leukemia. Different derivatives of the Jurkat cell line that have been mutated to lack certain genes can now be obtained from cell culture banks.
Examples of derivatives
The JCaM1.6 cell line is deficient in Lck activity due to the deletion of part of the LCK gene, which removes exon 7 from the LCK transcript.
J.RT3-T3.5 cells have a mutation in the T cell receptor beta chain locus precluding expression of this chain. This affects the cells in several ways; they do not express surface CD3 or produce the T cell receptor alpha/beta heterodimer. Since they are deficient in the TCR complex, these cells are a useful tool for transfection studies using T cell receptor alpha and beta chain genes and are widely used in labs in which T cell receptor gene transfer technologies are studied.
The I 9.2 and I 2.1 cell lines. The I 2.1 cell line is functionally defective for FADD and the I 9.2 cell line is functionally defective for caspase-8, both defective molecules being essential to apoptosis or necroptosis of cells.
The D1.1 cell line does not express the CD4 molecule, an important co-receptor in the activation pathway of helper T cells.
The J.gamma1 subline contains no detectable phospholipase C-gamma1 (PLC-γ1) protein and therefore has profound defects in T cell receptor (TCR) calcium mobilization and activation of nuclear factor of activated T cells (NFAT, an important transcription factor in T cells).
J-Lat contains integrated but transcriptionally latent HIV proviruses, in which GFP replaces nef coding sequence, and a frameshift mutation in env.
E6-1 cells express large amounts of interleukin 2 after stimulation with phorbol esters and either lectins or monoclonal antibodies against the T3 antigen (both types of stimulants are needed to induce interleukin 2 expression).
Cell line contamination
Jurkat E6-1 cells have been found to produce a xenotropic murine leukemia virus (X-MLV) (referred to as XMRV) that could potentially affect experimental outcomes. There is no evidence that this virus can infect humans. This infection may also change the virulence and tropism of the virus by way of phenotypic mixing and/or recombination.
References
External links
Cellosaurus entry for Jurkat
Human cell lines
Cellular senescence | Jurkat cells | Biology | 788 |
1,324,681 | https://en.wikipedia.org/wiki/Operating%20margin | In business, operating margin—also known as operating income margin, operating profit margin, EBIT margin and return on sales (ROS)—is the ratio of operating income ("operating profit" in the UK) to net sales, usually expressed in percent.
Net profit measures the profitability of ventures after accounting for all costs.
Return on sales (ROS) is net profit as a percentage of sales revenue. ROS is an indicator of profitability and is often used to compare the profitability of companies and industries of differing sizes. Significantly, ROS does not account for the capital (investment) used to generate the profit. In a survey of nearly 200 senior marketing managers, 69 percent responded that they found the "return on sales" metric very useful.
Unlike earnings before interest, taxes, depreciation, and amortization (EBITDA) margin, operating margin takes into account depreciation and amortization expenses.
Purpose
These financial metrics measure levels and rates of profitability. Probably the most common way to determine the success of a company is to look at its net profits. Companies are collections of projects and markets; individual areas can be judged on how successful they are at adding to corporate net profit. Not all projects are of equal size, however, and one way to adjust for size is to divide the profit by sales revenue. The resulting ratio is return on sales (ROS), the percentage of sales revenue that gets 'returned' to the company as net profit after all the related costs of the activity are deducted.
Construction
Net profit measures the fundamental profitability of the business. It is the revenues of the activity less the costs of the activity. The main complication is in more complex businesses when overhead needs to be allocated across divisions of the company. Almost by definition, overheads are costs that cannot be directly tied to any specific product or division. The classic example would be the cost of headquarters staff.
Net profit: To calculate net profit for a unit (such as a company or division), subtract all costs, including a fair share of total corporate overheads, from the gross revenues.
Return on sales (ROS): Net profit as a percentage of sales revenue.
Earnings Before Interest, Taxes, Depreciation, and Amortization (EBITDA) is a very popular measure of financial performance. It is used to assess the 'operating' profit of the business. It is a rough way of calculating how much cash the business is generating and is even sometimes called the 'operating cash flow'. It can be useful because it removes factors that change the view of performance depending upon the accounting and financing policies of the business. Supporters argue it reduces management's ability to change the profits they report by their choice of accounting rules and the way they generate financial backing for the company. This metric excludes from consideration expenses related to decisions such as how to finance the business (debt or equity) and over what period they depreciate fixed assets. EBITDA is typically closer to actual cash flow than is NOPAT. ... EBITDA can be calculated by adding back the costs of interest, depreciation, and amortization charges and any taxes incurred.
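To make the relationships between these measures concrete, the following minimal sketch computes operating margin, return on sales, and EBITDA from invented income-statement figures (illustrative only, not data for any real company):

// Illustrative income-statement figures (invented for the example).
public class Margins {
    public static void main(String[] args) {
        double revenue = 1_000_000;        // net sales
        double operatingIncome = 150_000;  // revenue minus operating costs
        double interest = 20_000;
        double taxes = 30_000;
        double depreciationAndAmortization = 40_000;
        double netProfit = operatingIncome - interest - taxes; // 100,000

        // Operating margin: operating income as a share of net sales.
        double operatingMargin = operatingIncome / revenue;    // 0.15
        // Return on sales: net profit as a percentage of sales revenue.
        double returnOnSales = netProfit / revenue;            // 0.10
        // EBITDA: add interest, taxes, depreciation and amortization back to net profit.
        double ebitda = netProfit + interest + taxes + depreciationAndAmortization; // 190,000

        System.out.printf("Operating margin: %.1f%%%n", 100 * operatingMargin);
        System.out.printf("Return on sales: %.1f%%%n", 100 * returnOnSales);
        System.out.printf("EBITDA: %,.0f%n", ebitda);
    }
}

Note that in this simple case EBITDA (190,000) also equals operating income plus depreciation and amortization, consistent with the description above.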
Example: The Coca-Cola Company
Operating margin is a measurement of what proportion of a company's revenue is left over, before taxes and other indirect costs (such as rent, bonus, interest, etc.), after paying for variable costs of production such as wages and raw materials. A good operating margin is needed for a company to be able to pay for its fixed costs, such as interest on debt. A higher operating margin means that the company has less financial risk.
Operating margin can be considered total revenue from product sales less all costs before adjustment for taxes, dividends to shareholders, and interest on debt.
See also
Efficiency ratio
Incremental operating margin
Profit margin
References
Farris, Paul W.; Neil T. Bendle; Phillip E. Pfeifer; David J. Reibstein (2010). Marketing Metrics: The Definitive Guide to Measuring Marketing Performance.''
Financial ratios
Business economics
Pricing | Operating margin | Mathematics | 865 |
27,008,909 | https://en.wikipedia.org/wiki/IkeGPS | ikeGPS Limited, sometimes stylized as IKE, is an American-based New Zealand company that provides services for measuring, modeling, and managing power and telecommunications assets.
History
ikeGPS Limited's main business activity is selling systems aimed at assessing and deploying communications and electric utility networks. IKE is headquartered in Broomfield, Colorado, with offices in Wellington, New Zealand.
Products
ikeGPS Limited provides products that record geodata for multiple targets along with corresponding photographs. This is intended to be done from a distance, which can be of assistance if the user is dealing with a hard-to-reach or dangerous target.
The Spike product, which is no longer available, used a phone camera, a laser-based system and mobile app software to capture location, height, width, and distance of any object with 1% accuracy.
Applications
ikeGPS Limited's software and hardware are used by communications companies, such as AT&T and over 400 North American utilities for pole loading analysis, make-ready engineering, and associated networks. Its mobile products are deployed by transportation departments, local governments, by intelligence and defense groups and other organizations for emergency management and enterprise asset management.
See also
Geotagging
Utility Pole Inspections
Asset Management
Remote Data Collection
Joint Use Pole Audit
References
Electronics companies of New Zealand
Data collection | IkeGPS | Technology | 264 |
22,899,208 | https://en.wikipedia.org/wiki/Vaadin | Vaadin () is an open-source web application development platform for Java. Vaadin includes a set of Web Components, a Java web framework, and a set of tools that enable developers to implement modern web graphical user interfaces (GUI) using the Java programming language only (instead of HTML and JavaScript), TypeScript only, or a combination of both.
History
Development first started as an adapter on top of the Millstone 3 open-source web framework, released in 2002. It introduced an Ajax-based client communication and rendering engine. During 2006 this concept was developed separately as a commercial product. As a consequence, a large part of Vaadin's server-side API is still compatible with Millstone's Swing-like APIs.
In early 2007 the product name was changed to IT Mill Toolkit and version 4 was released. It used a proprietary JavaScript Ajax implementation for the client-side rendering, which made it rather complicated to implement new widgets. By the end of 2007 the proprietary client-side implementation was abandoned and GWT was integrated on top of the server-side components. At the same time, the product license was changed to the open-source Apache License 2.0. The first production-ready release of IT Mill Toolkit 5 was made on March 4, 2009, after a beta period of more than a year.
On September 11, 2008, it was publicly announced that Michael Widenius, the main author of the original version of MySQL, had invested in IT Mill, the Finnish developer of Vaadin. The size of the investment is undisclosed.
On May 20, 2009, IT Mill Toolkit changed its name to Vaadin Framework. The name originates from the Finnish word for a doe (more precisely, a female reindeer); it can also be translated from Finnish as "I insist". In addition to the name change, a pre-release of version 6 was launched along with a community website. Later, IT Mill Ltd, the company behind the open-source Vaadin Framework, changed its name to Vaadin Ltd.
On March 30, 2010, Vaadin Directory was opened. It added a channel for distributing add-on components to the core Vaadin Framework, both for free or commercially. On launch date, there were 95 add-ons already available for download.
Vaadin Flow (Java API)
Vaadin Flow (formerly Vaadin Framework) is a Java web framework for building web applications and websites. Vaadin Flow's programming model allows developers to use Java as the programming language for implementing User Interfaces (UIs) without having to directly use HTML or JavaScript. Vaadin Flow features a server-side architecture, which means that most of the UI logic runs securely on the server, reducing the exposure to attackers. On the client side, Vaadin Flow is built on top of Web Component standards. The client/server communication is automatically handled through WebSocket or HTTP with light JSON messages that update both the UI in the browser and the UI state on the server.
Vaadin Flow's Java API includes classes such as TextField, Button, ComboBox, Grid, and many others that can be configured, styled, and added into layout objects, instances of classes such as VerticalLayout, HorizontalLayout, SplitLayout, and others. Behaviour is implemented by adding listeners to events such as clicks and input value changes. Views are created as custom Java classes that extend another UI component (custom or provided by the framework). These view classes are annotated with @Route to expose them to the browser under a specific URL. The following example illustrates these concepts (the imports are shown to make the snippet self-contained):

import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.html.Paragraph;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.textfield.TextField;
import com.vaadin.flow.router.Route;

@Route("hello-world") // exposes the view through http://localhost:8080/hello-world
public class MainView extends VerticalLayout { // extends an existing UI component
public MainView() {
// creates a text field
TextField textField = new TextField("Enter your name");
// creates a button
Button button = new Button("Send");
// adds behaviour to the button using the click event
button.addClickListener(event ->
add(new Paragraph("Hello, " + textField.getValue()))
);
// adds the UI components to the view (VerticalLayout)
add(textField, button);
}
}
Hilla (TypeScript API)
Hilla (formerly Vaadin Fusion) is a web framework that integrates Spring Boot Java backends with reactive front ends implemented in TypeScript. This combination offers a fully type-safe development platform, combining server-side business logic in Java with type safety on the client side through the TypeScript programming language. Views are implemented using Lit, a lightweight library for creating Web Components. The following is an example of a basic view implemented with Hilla:

import { html, LitElement } from 'lit';
import { customElement } from 'lit/decorators.js';
// (imports that register the vaadin-text-field and vaadin-button components and
// provide the showNotification helper are omitted here)

@customElement('hello-world-view')
export class HelloWorldView extends LitElement {
render() {
return html`
<div>
<vaadin-text-field label="Your name"></vaadin-text-field>
<vaadin-button @click="${this.sayHello}">Say hello</vaadin-button>
</div>
`;
}
sayHello() {
showNotification('Hello!');
}
}
Vaadin's UI components
Vaadin includes a set of User Interface (UI) components implemented as Web Components. These components include a server-side Java API (Vaadin Flow) but can also be used directly in HTML documents. Vaadin's UI components work with mouse and touch events, can be customized with CSS, are compatible with WAI-ARIA, include keyboard and screen reader support, and support right-to-left languages.
Certifications
Vaadin offers two certification tracks to prove that a developer is proficient with Vaadin Flow:
Certified Vaadin 14 Developer
Certified Vaadin 14 Professional
To pass the certification, a developer should go through the documentation, follow the training videos, and take an online test.
Previous (now unavailable) certifications included:
Vaadin Online Exam for Vaadin 7 Certified Developer
Vaadin Online Exam for Vaadin 8 Certified Developer
See also
List of rich web application frameworks
References
Further reading
Duarte, A. (2021) Practical Vaadin: Developing Web Applications in Java. Apress.
Duarte, A. (2018) Data-Centric Applications with Vaadin 8. Packt Publishing.
Frankel, N. (2013) Learning Vaadin 7, Second Edition. Packt Publishing.
Duarte, A. (2013) Vaadin 7 UI Design by Example: Beginner's Guide. Packt Publishing.
Holan, J., & Kvasnovsky, O. (2013) Vaadin 7 Cookbook. Packt Publishing.
Taylor, C. (2012) Vaadin Recipes. Packt Publishing.
Frankel, N. (2011) Learning Vaadin. Packt Publishing.
Grönroos, M. (2010) Book of Vaadin. Vaadin Ltd.
External links
Vaadin on GitHub
Java (programming language) libraries
Web frameworks
Java enterprise platform
Software developed in Finland | Vaadin | Technology | 1,560 |
67,543,732 | https://en.wikipedia.org/wiki/Francisco%20Ernesto%20Baralle | Francisco Ernesto (Tito) Baralle (born 26 October 1943, in Buenos Aires) is an Argentinian geneticist best known for his innovations in molecular biology and in particular the discovery of how genes are processed and mechanisms in mRNA splicing.
Biography
Francisco Ernesto (a.k.a. Tito) Baralle was born in Buenos Aires, Argentina on 26 October 1943. After completing his Ph.D. studies at the Department of Organic Chemistry, he transferred to the Instituto de Investigaciones Bioquimicas Fundacion Campomar, directed by Prof. Luis F. Leloir, now the Leloir Institute. In 1974, he moved to the MRC Laboratory of Molecular Biology, Cambridge University, UK, where he worked in the Division directed by Dr. Frederick Sanger. From 1980 to 1990, he was University Lecturer of Pathology at Oxford University and a Fellow of Magdalen College. In 1993, he was awarded the Platinum Konex Award for Science and Technology (Argentina) as the best scientist of the decade in Genetics and Cytology.
In September 1990, he was appointed Director of the Trieste component of the International Centre for Genetic Engineering and Biotechnology (ICGEB), an autonomous, intergovernmental organisation originally established under UNIDO. From 2004 to 2014 he was the Director-General of the institute, overseeing laboratories spanning 63 countries on four continents and championing collaboration, scientific education, and the dissemination of science and biotechnology worldwide.
During his 10 year tenure as Director-General, and as well as expanding his medical and scientific research, he was responsible for the establishment of a Biotechnology Development Group serving as a training hub for researchers of developing countries, transferring biopharmaceutical know-how locally.
He was a strong supporter of the internationalization of science: he took the two-component (Italy and India) International Centre for Genetic Engineering and Biotechnology and expanded it to four component institutions, establishing and opening new centres in Africa and Argentina, and giving opportunities and access to young scientists from the developing world.
Scientific activity
In 1977, as a staff scientist at the Laboratory of Molecular Biology, Cambridge, Tito published the sequence of the messenger RNA coding for beta-globin, the first complete primary structure of a eukaryotic mRNA. In 1979, his research group isolated the gene for epsilon-globin (HBE1), a component of human embryonic hemoglobin. He was one of the first to describe the pre-mRNA alternative splicing process in the 1980s.
His studies on how genes are processed described the first sequences within exons that control splicing, exonic splicing enhancers (ESEs), and he has since made critical contributions to understanding the molecular mechanisms involved in this important cellular process in health and disease. He first identified the protein TDP-43, which is now known to play a central role in certain neurodegenerative disorders (frontotemporal lobar degeneration, amyotrophic lateral sclerosis, and Alzheimer disease). Tito Baralle is a leader and innovator in molecular biology, in particular in the discovery of how genes are processed and of mechanisms in mRNA splicing.
Honors and awards
2014 Doctor Honoris Causae of the Faculty of Medicine Universidad de la Republica, Montevideo, Uruguay
2010 University of Nova Gorica, Slovenia Golden Plate Award for his work on international scientific collaboration. Full Professor of Molecular Biology
2010 Premio Raices, granted very selectively by the Minister of Science and Technology of Argentina for promotion of scientific education in Argentina
2010 Fellow of the Academy of Sciences for the Developing World-TWAS for the advancement of science in the developing world
2001 Fellow National Academy of Sciences of Argentina
1999 Visiting Professor University of Trieste
1993 Premio Konex de Platino, Konex Foundation Platinum prize in Science and technology, awarded to best scientist of the decade in Genetic and Cytology.
1993 Konex Foundation Merit Diploma in Science and Technology
1993 Honorary Professor of Biochemistry at the Faculty of Sciences, University of Buenos Aires
1980 Member of the European Molecular Biology Organization (EMBO)
Scientific publications
Tito has over 200 scientific publications, some of his most cited are:
"The Structure and evolution of the human b-globin gene family"
“TDP-43 Mutations in Familial and Sporadic Amyotrophic Lateral Sclerosis”
"Primary structure of human fibronectin: Differential splicing may generate at least 10 polypeptides from a single gene"
“Genomic variants in exons and introns: identifying the splicing spoilers”
“Characterisation and functional implications of the RNA binding properties of nuclear factor TDP-43, a novel splicing regulator of CFTR Exon 9”
References
Living people
1943 births
21st-century Argentine biologists
Geneticists
Molecular biologists
Fellows of Magdalen College, Oxford | Francisco Ernesto Baralle | Chemistry | 983 |
30,872,739 | https://en.wikipedia.org/wiki/New%20media%20art | New media art includes artworks designed and produced by means of electronic media technologies. It comprises virtual art, computer graphics, computer animation, digital art, interactive art, sound art, Internet art, video games, robotics, 3D printing, immersive installation and cyborg art. The term defines itself by the thereby created artwork, which differentiates itself from that deriving from conventional visual arts such as architecture, painting or sculpture.
New media art has origins in the worlds of science, art, and performance. Common themes in new media art include databases, political and social activism, Afrofuturism, feminism, and identity; a ubiquitous theme throughout is the incorporation of new technology into the work. The emphasis on medium is a defining feature of much contemporary art; many art schools and major universities now offer majors in "New Genres" or "New Media", and a growing number of graduate programs have emerged internationally.
New media art may involve degrees of interaction between artwork and observer or between the artist and the public, as is the case in performance art. Several theorists and curators have noted that such forms of interaction do not distinguish new media art but rather serve as a common ground that has parallels in other strands of contemporary art practice. Such insights emphasize the forms of cultural practice that arise concurrently with emerging technological platforms, and question the focus on technological media per se. New Media art involves complex curation and preservation practices that make collecting, installing, and exhibiting the works harder than most other mediums. Many cultural centers and museums have been established to cater to the advanced needs of new media art.
History
The origins of new media art can be traced to the moving image inventions of the 19th century such as the phenakistiscope (1833), the praxinoscope (1877) and Eadweard Muybridge's zoopraxiscope (1879). From the 1900s through the 1960s, various forms of kinetic and light art, from Thomas Wilfred's 'Lumia' (1919) and 'Clavilux' light organs to Jean Tinguely's self-destructing sculpture Homage to New York (1960) can be seen as progenitors of new media art.
Steve Dixon in his book Digital Performance: New Technologies in Theatre, Dance and Performance Art argues that the early twentieth century avant-garde art movement Futurism was the birthplace of the merging of technology and performance art. Some early examples of performance artists who experimented with then state-of-the-art lighting, film, and projection include dancers Loïe Fuller and Valentine de Saint-Point. Cartoonist Winsor McCay performed in sync with an animated Gertie the Dinosaur on tour in 1914. By the 1920s many Cabaret acts began incorporating film projection into performances.
Robert Rauschenberg's piece Broadcast (1959), composed of three interactive re-tunable radios and a painting, is considered one of the first examples of interactive art. German artist Wolf Vostell experimented with television sets in his installation TV De-collages (1958). Vostell's work influenced Nam June Paik, who created sculptural installations featuring hundreds of television sets that displayed distorted and abstract footage.
Beginning in Chicago during the 1970s, there was a surge of artists experimenting with video art and combining recent computer technology with their traditional mediums, including sculpture, photography, and graphic design. Many of the artists involved were graduate students at the School of the Art Institute of Chicago, including Kate Horsfield and Lyn Blumenthal, who co-founded the Video Data Bank in 1976. Another artist involved was Donna Cox, who collaborated with mathematician George Francis and computer scientist Ray Idaszak on the project Venus in Time, which depicted mathematical data as 3D digital sculptures named for their similarities to paleolithic Venus statues. In 1982 artist Ellen Sandor and her team, called (art)n Laboratory, created the medium called PHSCologram, which stands for photography, holography, sculpture, and computer graphics. Her visualization of the AIDS virus was depicted on the cover of IEEE Computer Graphics and Applications in November 1988. At the University of Illinois in 1989, members of the Electronic Visualization Laboratory Carolina Cruz-Neira, Thomas DeFanti, and Daniel J. Sandin collaborated to create what is known as the CAVE (Cave Automatic Virtual Environment), an early immersive virtual reality system using rear projection.
In 1983, Roy Ascott introduced the concept of "distributed authorship" in his worldwide telematic project La Plissure du Texte for Frank Popper's "Electra" at the Musée d'Art Moderne de la Ville de Paris. The development of computer graphics at the end of the 1980s and real time technologies in the 1990s combined with the spreading of the Web and the Internet favored the emergence of new and various forms of interactive art by Ken Feingold, Lynn Hershman Leeson, David Rokeby, Ken Rinaldo, Perry Hoberman, Tamas Waliczky; telematic art by Roy Ascott, Paul Sermon, Michael Bielický; Internet art by Vuk Ćosić, Jodi; virtual and immersive art by Jeffrey Shaw, Maurice Benayoun, Monika Fleischmann, and large scale urban installation by Rafael Lozano-Hemmer. In Geneva, the Centre pour l'Image Contemporaine or CIC coproduced with Centre Georges Pompidou from Paris and the Museum Ludwig in Cologne the first internet video archive of new media art.
Simultaneously advances in biotechnology have also allowed artists like Eduardo Kac to begin exploring DNA and genetics as a new art medium.
Influences on new media art have been the theories developed around interaction, hypertext, databases, and networks. Important thinkers in this regard have been Vannevar Bush and Theodor Nelson, whereas comparable ideas can be found in the literary works of Jorge Luis Borges, Italo Calvino, and Julio Cortázar.
Themes
In the book New Media Art, Mark Tribe and Reena Jana named several themes that contemporary new media art addresses, including computer art, collaboration, identity, appropriation, open sourcing, telepresence, surveillance, corporate parody, as well as intervention and hacktivism. In the book Postdigitale, Maurizio Bolognini suggested that new media artists have one common denominator, which is a self-referential relationship with the new technologies, the result of finding oneself inside an epoch-making transformation determined by technological development.
New media art does not appear as a set of homogeneous practices, but as a complex field converging around three main elements: 1) the art system, 2) scientific and industrial research, and 3) political-cultural media activism. There are significant differences between scientist-artists, activist-artists and technological artists closer to the art system, who not only have different training and technocultures, but have different artistic production. This should be taken into account in examining the several themes addressed by new media art.
Non-linearity can be seen as an important topic in new media art for artists developing interactive, generative, collaborative, and immersive artworks, like Jeffrey Shaw or Maurice Benayoun, who explored the term as an approach to looking at varying forms of digital projects where the content relies on the user's experience. This is a key concept because people had been conditioned to view everything in a linear, clear-cut fashion; now, art steps out of that form and allows people to build their own experiences with the piece. Non-linearity describes works that escape the conventional linear narrative of novels, theater plays, and movies. Non-linear art usually requires audience participation, or at least that the "visitor" be taken into consideration by the representation, altering the displayed content. The participatory aspect of new media art, which for some artists has become integral, emerged from Allan Kaprow's Happenings and became, with the Internet, a significant component of contemporary art.
The inter-connectivity and interactivity of the internet, as well as the fight between corporate interests, governmental interests, and public interests that gave birth to the web today, inspire a lot of current new media art.
Databases
One of the key themes in new media art is to create visual views of databases. Pioneers in this area include Lisa Strausfeld, Martin Wattenberg and Alberto Frigo. From 2004 to 2014 George Legrady's piece "Making Visible the Invisible" displayed the normally unseen library metadata of items recently checked out at the Seattle Public Library on six LCD monitors behind the circulation desk. Database aesthetics holds at least two attractions to new media artists: formally, as a new variation on non-linear narratives; and politically as a means to subvert what is fast becoming a form of control and authority.
Political and social activism
Many new media art projects also work with themes like politics and social consciousness, allowing for social activism through the interactive nature of the media. New media art includes "explorations of code and user interface; interrogations of archives, databases, and networks; production via automated scraping, filtering, cloning, and recombinatory techniques; applications of user-generated content (UGC) layers; crowdsourcing ideas on social-media platforms; narrowcasting digital selves on "free" websites that claim copyright; and provocative performances that implicate audiences as participants".
Afrofuturism
Afrofuturism is an interdisciplinary genre that explores the African diaspora experience, predominantly in the United States, by deconstructing the past and imagining the future through the themes of technology, science fiction, and fantasy. Musician Sun Ra, believed to be one of the founders of Afrofuturism, thought a blend of technology and music could help humanity overcome the ills of society. His band, The Sun Ra Arkestra, combined traditional jazz with sound and performance art and were among the first musicians to perform with a synthesizer. The twenty-first century has seen a resurgence of Afrofuturist aesthetics and themes, with artists and collectives like Jessi Jumanji and Black Quantum Futurism, and art educational centers like Black Space in Durham, North Carolina.
Feminism and the female experience
Japanese artist Mariko Mori's multimedia installation piece Wave UFO (1999–2003) sought to examine the science and perceptions behind the study of consciousness and neuroscience, exploring the ways that these fields undertake research in a materially reductionist manner. Mori's work emphasized the need for these fields to become more holistic and to incorporate insights and understandings of the world from philosophy and the humanities. Swiss artist Pipilotti Rist's 2008 immersive video installation Pour Your Body Out explores the dichotomy of beauty and the grotesque in the natural world and their relation to the female experience. The large-scale 360-degree installation featured breast-shaped projectors and circular pink pillows that invited viewers to relax, immerse themselves in the vibrant colors and psychedelic music, and partake in meditation and yoga. American filmmaker and artist Lynn Hershman Leeson explores in her films the themes of identity, technology, and the erasure of women's roles and contributions to technology. Her 1999 film Conceiving Ada depicts a computer scientist and new media artist named Emmy as she attempts, and succeeds at, creating a way to communicate through cyberspace, via a form of artificial intelligence, with Ada Lovelace, the Englishwoman who created the first computer program in the 1840s.
Identity
With its roots in outsider art, New Media has been an ideal medium for an artist to explore the topics of identity and representation. In Canada, Indigenous multidisciplinary artists like Cheryl L'Hirondelle and Kent Monkman have incorporated themes about gender, identity, activism, and colonization in their work. Monkman, a Cree artist, performs and appears as their alter ego Miss Chief Eagle Testickle, in film, photography, painting, installation, and performance art. Monkman describes Miss Chief as a representation of a two-spirit or non-binary persona that does not fall under the traditional description of drag.
Future of new media art
The emergence of 3D printing has introduced a new bridge to new media art, joining the virtual and the physical worlds. The rise of this technology has allowed artists to blend the computational base of new media art with the traditional physical form of sculpture. A pioneer in this field was artist Jonty Hurwitz, who created the first known anamorphic sculpture using this technique.
Longevity
As the technologies used to deliver works of new media art, such as film, tapes, web browsers, software and operating systems, become obsolete, new media art faces serious challenges in preserving artwork beyond the time of its contemporary production. Currently, research projects into new media art preservation are underway to improve the preservation and documentation of the fragile media arts heritage (see DOCAM – Documentation and Conservation of the Media Arts Heritage).
Methods of preservation exist, including the translation of a work from an obsolete medium into a related new medium, the digital archiving of media (see the Rhizome ArtBase, which holds over 2000 works, and the Internet Archive), and the use of emulators to preserve work dependent on obsolete software or operating system environments.
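Beyond migration and emulation, day-to-day digital archiving rests on fixity checking: recording a cryptographic checksum of each file so that later copies can be verified bit-for-bit against the original. A minimal sketch in Python (illustrative only; the file name is hypothetical, and this is not a procedure prescribed by the organizations named above):

```python
import hashlib
from pathlib import Path

def fixity_record(path: Path) -> dict:
    """Record a SHA-256 checksum and byte size for an archived file,
    so future copies can be verified bit-for-bit against the original."""
    data = path.read_bytes()
    return {
        "file": path.name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "bytes": len(data),
    }

# Example (hypothetical file name):
# fixity_record(Path("artwork_master_1997.mov"))
```

Re-running the function on a migrated copy and comparing digests is enough to detect silent corruption introduced by a format or storage transition.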
Around the mid-1990s, the issue of storing works in digital form became a concern. Digital art such as moving images, multimedia, interactive programs, and computer-generated art has different properties from physical artwork such as oil paintings and sculptures. Unlike analog technologies, a digital file can be copied onto a new medium without any deterioration of content. One of the problems with preserving digital art is that formats continuously change over time. Past examples of transitions include those from 8-inch floppy disks to 5.25-inch floppies, 3.5-inch diskettes to CD-ROMs, and DVDs to flash drives. On the horizon is the obsolescence of flash drives and portable hard drives, as data is increasingly held in online cloud storage.
Museums and galleries thrive on being able to accommodate the presentation and preservation of physical artwork. New media art challenges the art world's established methods of documentation, collection, and preservation. As technology continues to advance, the nature and structure of art organizations and institutions will remain under pressure. The traditional roles of curator and artist are continually changing, and a shift to new collaborative models of production and presentation is needed.
Preservation
See also: Conservation and restoration of new media art
New media art encompasses various mediums, all of which require their own preservation approaches. Due to the vast technical aspects involved, no established digital preservation guidelines fully encompass the spectrum of new media art. New media art falls under the category of "complex digital object" in the Digital Curation Centre's digital curation lifecycle model, which calls for specialized or wholly unique preservation techniques. Preservation of complex digital objects places an emphasis on the inherent connections among the components of a piece.
Education
In New Media programs, students are able to get acquainted with the newest forms of creation and communication. New Media students learn to identify what is or isn't "new" about certain technologies. Science and the market will always present new tools and platforms for artists and designers. Students learn how to sort through new emerging technological platforms and place them in a larger context of sensation, communication, production, and consumption.
When obtaining a bachelor's degree in New Media, students primarily work through the practice of building experiences that utilize new and old technologies and narrative. Through the construction of projects in various media, they acquire technical skills, practice vocabularies of critique and analysis, and gain familiarity with historical and contemporary precedents.
In the United States, many Bachelor's and Master's level programs exist with concentrations on Media Art, New Media, Media Design, Digital Media and Interactive Arts.
Theorists and historians
Notable art theorists and historians working in this field include:
Roy Ascott
Maurice Benayoun
Christine Buci-Glucksmann
Jack Burnham
Mario Costa
Edmond Couchot
Fred Forest
Oliver Grau
Margot Lovejoy
Lev Manovich
Robert C. Morgan
Dominique Moulon
Christiane Paul
Catherine Perret
Frank Popper
Edward A. Shanken
Types
The term New Media Art is generally applied to disciplines such as:
Artistic computer game modification
ASCII art
Bio Art
Cyberformance
Computer art
Critical making
Digital art
Demoscene
Digital poetry
Electronic art
Experimental musical instrument building
Evolutionary art
Fax art
Generative art
Glitch art
Hypertext
Information art
Interactive art
Kinetic art
Light art
Motion graphics
Net art
Performance art
Radio art
Robotic art
Software art
Sound art
Systems art
Telematic art
Video art
Video games
Virtual art
Artists
Cultural centres
Australian Network for Art and Technology
Center for Art and Media Karlsruhe
Centre pour l'Image Contemporaine
Daniel Langlois Foundation
Eyebeam Art and Technology Center
Foundation for Art and Creative Technology
Gray Area Foundation for the Arts
Harvestworks
InterAccess
Los Angeles Center for Digital Art (LACDA)
Netherlands Media Art Institute
NTT InterCommunication Center
Rhizome (organization)
RIXC
School for Poetic Computation (SFPC)
School of the Art Institute of Chicago
Squeaky Wheel: Film and Media Arts Center
V2 Institute for the Unstable Media
WORM
See also
ART/MEDIA
Artmedia
Aspect magazine
Culture jamming
Digital media
Digital puppetry
Electronic Language International Festival
Expanded Cinema
Experiments in Art and Technology
Interactive film
Interactive media
Intermedia
LA Freewaves
Net.art
New media art festivals
New media artist
New media art journals
New media art preservation
Remix culture
VJing
References
Further reading
Jorge Luis Borges (1941). "The Garden of Forking Paths." Editorial Sur.
Graham, Philip Mitchell, New Epoch Art, InterACTA: Journal of the Art Teachers Association of Victoria, Published by ACTA, Parkville, Victoria, No 4, 1990, cited in APAIS. This database is available on the Informit Online Internet Service or on CD-ROM, or on Australian Public Affairs – Full Text
Lopes, Dominic McIver (2009). A Philosophy of Computer Art. London: Routledge
Robert C. Morgan (1992). Commentaries on the New Media Arts. Pasadena, CA: Umbrella Associates
Janet Murray (2003). "Inventing the Medium", The New Media Reader. MIT Press.
Frank Popper (2007). From Technological to Virtual Art. MIT Press/Leonardo Books
Frank Popper (1997). Art of the Electronic Age. Thames & Hudson
Rainer Usselmann (2002). "About Interface: Actualisation and Totality". University of Southampton
Youngblood, Gene (1970). Expanded Cinema. New York: E.P. Dutton & Company.
New Media Faculty (2011). "New Media", University of Illinois at Urbana-Champaign
Mass media technology
Visual arts genres
Digital art | New media art | Technology | 3,804 |
17,787,631 | https://en.wikipedia.org/wiki/Bi-isotropic%20material | In physics, engineering and materials science, bi-isotropic materials have the special optical property that they can rotate the polarization of light in either refraction or transmission. This does not mean all materials with twist effect fall in the bi-isotropic class. The twist effect of the class of bi-isotropic materials is caused by the chirality and non-reciprocity of the structure of the media, in which the electric and magnetic field of an electromagnetic wave (or simply, light) interact in an unusual way.
Definition
For most materials, the electric field E and electric displacement field D (as well as the magnetic field B and inductive magnetic field H) are parallel to one another. These simple mediums are called isotropic, and the relationships between the fields can be expressed using constants. For more complex materials, such as crystals and many metamaterials, these fields are not necessarily parallel: crystals typically have D fields which are not aligned with the E fields, while the B and H fields remain related by a constant. Materials where either pair of fields is not parallel are called anisotropic.
In bi-isotropic media, the electric and magnetic fields are coupled. The constitutive relations are

D = εE + ξH
B = ζE + μH
D, E, B, H, ε and μ correspond to the usual electromagnetic quantities. ξ and ζ are the coupling constants, which are intrinsic constants of each medium.
This can be generalized to the case where ε, μ, ξ and ζ are tensors (i.e. they depend on the direction within the material), in which case the medium is referred to as bi-anisotropic.
Coupling constant
ξ and ζ can be further related to the Tellegen (non-reciprocity) parameter χ and the chirality parameter κ:

ξ = (χ + iκ)·√(μ₀ε₀)
ζ = (χ − iκ)·√(μ₀ε₀)

Substituting these expressions into the constitutive relations gives

D = εE + (χ + iκ)·√(μ₀ε₀)·H
B = (χ − iκ)·√(μ₀ε₀)·E + μH
Classification
Bi-isotropic media are conventionally classified according to which of the two parameters vanish:
χ = 0, κ = 0: simple isotropic medium
χ = 0, κ ≠ 0: Pasteur medium (reciprocal, chiral)
χ ≠ 0, κ = 0: Tellegen medium (non-reciprocal, non-chiral)
χ ≠ 0, κ ≠ 0: general bi-isotropic medium
Examples
Pasteur media can be made by mixing metal helices of one handedness into a resin. Care must be exercised to secure isotropy: the helices must be randomly oriented so that there is no special direction.
The magnetoelectric effect can be understood from the helix as it is exposed to the electromagnetic field. The helix geometry can be considered as an inductor. For such a structure the magnetic component of an EM wave induces a current on the wire and further influences the electric component of the same EM wave.
From the constitutive relations, for Pasteur media, χ = 0, so that

D = εE + iκ·√(μ₀ε₀)·H
B = −iκ·√(μ₀ε₀)·E + μH

Hence, the D field's response to the H field carries a factor of i, i.e. it is delayed by a 90° phase shift.
Tellegen media are the converse of Pasteur media, coupling in the opposite sense: the electric component causes the magnetic component to change. Such media are not described by the concept of handedness in the same straightforward way. Electric dipoles bonded to magnets belong to this kind of media. When the dipoles align themselves with the electric field component of the EM wave, the magnets also respond, as they are bound together. The change in direction of the magnets therefore changes the magnetic component of the EM wave, and so on.
From the constitutive relations, for Tellegen media, κ = 0, so that

D = εE + χ·√(μ₀ε₀)·H
B = χ·√(μ₀ε₀)·E + μH

This implies that the coupled part of the B field responds in phase with the exciting field (there is no factor of i), in contrast to Pasteur media.
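As a concreteness check, the two special cases can be evaluated numerically. The following Python sketch is not from the source; it simply assumes the phasor conventions and the ξ, ζ parameterization given above, with arbitrary illustrative values of χ and κ. It shows that the Pasteur coupling term is purely imaginary (a 90° phase shift) while the Tellegen coupling term is real (in phase):

```python
import numpy as np

EPS0 = 8.854e-12    # vacuum permittivity, F/m
MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def constitutive(E, H, eps_r, mu_r, chi, kappa):
    """Evaluate D and B for a bi-isotropic medium using complex
    (phasor) field amplitudes:
        D = eps*E + xi*H,   B = zeta*E + mu*H,
    with xi = (chi + i*kappa)*sqrt(mu0*eps0)
    and zeta = (chi - i*kappa)*sqrt(mu0*eps0)."""
    root = np.sqrt(MU0 * EPS0)
    xi = (chi + 1j * kappa) * root
    zeta = (chi - 1j * kappa) * root
    D = eps_r * EPS0 * E + xi * H
    B = zeta * E + mu_r * MU0 * H
    return D, B

# Pasteur medium (chi = 0): the coupling term is purely imaginary.
print(constitutive(E=1.0, H=1.0, eps_r=2.0, mu_r=1.0, chi=0.0, kappa=0.1))
# Tellegen medium (kappa = 0): the coupling term is real (in phase).
print(constitutive(E=1.0, H=1.0, eps_r=2.0, mu_r=1.0, chi=0.1, kappa=0.0))
```

With these values the two printed results differ only in whether the coupling contribution carries a factor of i, which is exactly the phase distinction drawn above.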
See also
Anisotropy
Chirality (electromagnetism)
Metamaterial
Reciprocity (electromagnetism)
Maxwell's equations § Constitutive relations
References
Orientation (geometry)
Materials science | Bi-isotropic material | Physics,Materials_science,Mathematics,Engineering | 742 |
3,133,272 | https://en.wikipedia.org/wiki/Retinite | Retinite is a resin, particularly from beds of brown coal, that is near amber in appearance but contains little or no succinic acid. It may conveniently serve as a generic name, since no two independent occurrences prove to be alike, and the indefinite multiplication of names, no one of them properly specific, is not to be desired.
Retinite resins contain no succinic acid, and their oxygen content ranges from 6% to 15%.
References
Resins | Retinite | Physics | 94 |
7,620,646 | https://en.wikipedia.org/wiki/Ell%20%28architecture%29 | In architecture, an ell is a wing of a building perpendicular (at a right angle) to the length of the main portion (main range).
It takes its name from the shape of the letter L. Ells are often additions to a building. Unless the building has sub-wings or a non-rectangular floor plan, such a wing makes it L-shaped or T-shaped "in plan" (its shape seen from above or below); if the wing is neither central nor at one end of the building, the T-shape will be offset. Where a building is aligned closely to cardinal compass points, such a wing may be more informatively described by its related side of the building (such as "south wing of the building").
Connected farms and large rural homes
In connected farm architecture and homes that were the economic hubs of large grounds including in Mediterranean and northern European traditions, one or more ells (wings) will usually be extended to attach the main house or range to another building, such as a barn or stables, or a tower or chapel or defensive range in the case of a castle or palace. In formal and early modern settings it may take the form of a well-sunlit long gallery; or it may be a plant-growing section or open-sided walkway, if outdoors, a colonnade, plant-covered walkway (pergola) or the indoor analog, a gallery conservatory.
See also
Hyphen (architecture)
References
External links
This Old House Carlisle project (connected farm)
Architectural elements | Ell (architecture) | Technology,Engineering | 312 |
5,277,630 | https://en.wikipedia.org/wiki/Rhizopogon%20villosulus | Rhizopogon villosulus is an ectomycorrhizal fungus used as a soil inoculant in agriculture and horticulture. It was first described scientifically by mycologist Sanford Myron Zeller in 1941.
References
Rhizopogonaceae
Fungi described in 1941
Fungus species
Taxa named by Sanford Myron Zeller | Rhizopogon villosulus | Biology | 75 |
59,385 | https://en.wikipedia.org/wiki/Amanita%20muscaria | Amanita muscaria, commonly known as the fly agaric or fly amanita, is a basidiomycete of the genus Amanita. It is a large white-gilled, white-spotted, and usually red mushroom.
Despite its easily distinguishable features, A. muscaria is a fungus with several known variations, or subspecies. These subspecies are slightly different, some having yellow or white caps, but are all usually called fly agarics, most often recognizable by their notable white spots. Recent DNA fungi research, however, has shown that some mushrooms called "fly agaric" are in fact unique species, such as A. persicina (the peach-colored fly agaric).
Native throughout the temperate and boreal regions of the Northern Hemisphere, A. muscaria has been unintentionally introduced to many countries in the Southern Hemisphere, generally as a symbiont with pine and birch plantations, and is now a true cosmopolitan species. It associates with various deciduous and coniferous trees.
Although poisonous, death due to poisoning from A. muscaria ingestion is quite rare. Parboiling twice with water weakens its toxicity and breaks down the mushroom's psychoactive substances; it is eaten in parts of Europe, Asia, and North America. All A. muscaria varieties, but in particular A. muscaria var. muscaria, are noted for their hallucinogenic properties, with the main psychoactive constituents being muscimol and its neurotoxic precursor ibotenic acid. A local variety of the mushroom was used as an intoxicant and entheogen by the indigenous peoples of Siberia.
Arguably the most iconic toadstool species, the fly agaric is one of the most recognizable fungi in the world, and is widely encountered in popular culture, including in video games—for example, the frequent use of a recognizable A. muscaria in the Mario franchise (e.g. its Super Mushroom power-up)—and television—for example, the houses in The Smurfs franchise. There have been cases of children admitted to hospitals after consuming this poisonous mushroom; the children may have been attracted to it because of its pop-culture associations.
Taxonomy
The name of the mushroom in many European languages is thought to derive from its use as an insecticide when sprinkled in milk. This practice has been recorded from Germanic- and Slavic-speaking parts of Europe, as well as the Vosges region and pockets elsewhere in France, and Romania. Albertus Magnus was the first to record it in his work De vegetabilibus some time before 1256, commenting "vocatur fungus muscarum, eo quod in lacte pulverizatus interficit muscas" ("it is called the fly mushroom because it is powdered in milk to kill flies").
The 16th-century Flemish botanist Carolus Clusius traced the practice of sprinkling it into milk to Frankfurt in Germany, while Carl Linnaeus, the "father of taxonomy", reported it from Småland in southern Sweden, where he had lived as a child. He described it in volume two of his Species Plantarum in 1753, giving it the name Agaricus muscarius, the specific epithet deriving from Latin musca meaning "fly". It gained its current name in 1783, when placed in the genus Amanita by Jean-Baptiste Lamarck, a name sanctioned in 1821 by the "father of mycology", Swedish naturalist Elias Magnus Fries. The starting date for all the mycota had been set by general agreement as January 1, 1821, the date of Fries's work, and so the full name was then Amanita muscaria (L.:Fr.) Hook. The 1987 edition of the International Code of Botanical Nomenclature changed the rules on the starting date and primary work for names of fungi, and names can now be considered valid as far back as May 1, 1753, the date of publication of Linnaeus's work. Hence, Linnaeus and Lamarck are now taken as the namers of Amanita muscaria (L.) Lam.
The English mycologist John Ramsbottom reported that Amanita muscaria was used for getting rid of bugs in England and Sweden, and bug agaric was an old alternative name for the species. French mycologist Pierre Bulliard reported having tried without success to replicate its fly-killing properties in his 1784 work, and proposed a new binomial name, Agaricus pseudo-aurantiacus, because of this. One compound isolated from the fungus is 1,3-diolein (1,3-di(cis-9-octadecenoyl)glycerol), which attracts insects.
It has been hypothesised that the flies intentionally seek out the fly agaric for its intoxicating properties.
An alternative derivation proposes that the term fly- refers not to insects as such but rather to the delirium resulting from consumption of the fungus. This is based on the medieval belief that flies could enter a person's head and cause mental illness. Several regional names appear to be linked with this connotation, meaning the "mad" or "fool's" version of the highly regarded edible mushroom Amanita caesarea: hence "mad oriol" in Catalan, mujolo folo around Toulouse, and similar names recorded in the Aveyron department in Southern France and in Trentino in Italy. A local dialect name in Fribourg in Switzerland is tsapi de diablhou, which translates as "Devil's hat".
Classification
Amanita muscaria is the type species of the genus. By extension, it is also the type species of Amanita subgenus Amanita, as well as section Amanita within this subgenus. Amanita subgenus Amanita includes all Amanita with inamyloid spores. Amanita section Amanita includes the species with patchy universal veil remnants, including a volva that is reduced to a series of concentric rings, and the veil remnants on the cap to a series of patches or warts. Most species in this group also have a bulbous base. Amanita section Amanita consists of A. muscaria and its close relatives, including A. pantherina (the panther cap), A. gemmata, A. farinosa, and A. xanthocephala. Modern fungal taxonomists have classified Amanita muscaria and its allies this way based on gross morphology and spore inamyloidy. Two recent molecular phylogenetic studies have confirmed this classification as natural.
Description
A large, conspicuous mushroom, Amanita muscaria is generally common and numerous where it grows, and is often found in groups with basidiocarps in all stages of development.
Fly agaric fruiting bodies emerge from the soil looking like white eggs. After emerging from the ground, the cap is covered with numerous small white to yellow pyramid-shaped warts. These are remnants of the universal veil, a membrane that encloses the entire mushroom when it is still very young. Dissecting the mushroom at this stage reveals a characteristic yellowish layer of skin under the veil, which helps identification. As the fungus grows, the red colour appears through the broken veil and the warts become less prominent; they do not change in size, but are reduced relative to the expanding skin area. The cap changes from globose to hemispherical, and finally to plate-like and flat in mature specimens. Fully grown, the bright red cap is usually around 8–20 cm in diameter, although larger specimens have been found. The red colour may fade after rain and in older mushrooms.
The free gills are white, as is the spore print. The oval spores measure 9–13 by 6.5–9 μm; they do not turn blue with the application of iodine. The stipe is white, 5–20 cm high by 1–2 cm wide, and has the slightly brittle, fibrous texture typical of many large mushrooms. At the base is a bulb that bears universal veil remnants in the form of two to four distinct rings or ruffs. Between the basal universal veil remnants and gills are remnants of the partial veil (which covers the gills during development) in the form of a white ring. It can be quite wide and flaccid with age. There is generally no associated smell other than a mild earthiness.
Although very distinctive in appearance, the fly agaric has been mistaken for other yellow to red mushroom species in the Americas, such as Armillaria cf. mellea and the edible A. basii—a Mexican species similar to A. caesarea of Europe. Poison control centres in the U.S. and Canada have become aware that amarillo (Spanish for 'yellow') is a common name for the A. caesarea-like species in Mexico. A. caesarea is distinguished by its entirely orange to red cap, which lacks the numerous white warty spots of the fly agaric (though these sometimes wash away during heavy rain). Furthermore, the stem, gills and ring of A. caesarea are bright yellow, not white. The volva is a distinct white bag, not broken into scales. In Australia, the introduced fly agaric may be confused with the native vermilion grisette (Amanita xanthocephala), which grows in association with eucalypts. The latter species generally lacks the white warts of A. muscaria and bears no ring. Additionally, immature button forms resemble puffballs.
Controversy
Amanita muscaria varies considerably in its morphology, and many authorities recognize several subspecies or varieties within the species. In The Agaricales in Modern Taxonomy, German mycologist Rolf Singer listed three subspecies, though without description: A. muscaria ssp. muscaria, A. muscaria ssp. americana, and A. muscaria ssp. flavivolvata.
However, a 2006 molecular phylogenetic study of different regional populations of A. muscaria by mycologist József Geml and colleagues found three distinct clades within this species representing, roughly, Eurasian, Eurasian "subalpine", and North American populations. Specimens belonging to all three clades have been found in Alaska; this has led to the hypothesis that this was the centre of diversification for this species. The study also looked at four named varieties of the species: var. alba, var. flavivolvata, var. formosa (including var. guessowii), and var. regalis from both areas. All four varieties were found within both the Eurasian and North American clades, evidence that these morphological forms are polymorphisms rather than distinct subspecies or varieties. Further molecular study by Geml and colleagues published in 2008 show that these three genetic groups, plus a fourth associated with oak–hickory–pine forest in the southeastern United States and two more on Santa Cruz Island in California, are delineated from each other enough genetically to be considered separate species. Thus A. muscaria as it stands currently is, evidently, a species complex. The complex also includes at least three other closely related taxa that are currently regarded as species: A. breckonii is a buff-capped mushroom associated with conifers from the Pacific Northwest, and the brown-capped A. gioiosa and A. heterochroma from the Mediterranean Basin and from Sardinia respectively. Both of these last two are found with Eucalyptus and Cistus trees, and it is unclear whether they are native or introduced from Australia.
Amanitaceae.org lists four varieties, but says that they will be segregated into their own taxa "in the near future".
Distribution and habitat
A. muscaria is a cosmopolitan mushroom, native to conifer and deciduous woodlands throughout the temperate and boreal regions of the Northern Hemisphere, including higher elevations of warmer latitudes in regions such as Hindu Kush, the Mediterranean and also Central America. A recent molecular study proposes that it had an ancestral origin in the Siberian–Beringian region in the Tertiary period, before radiating outwards across Asia, Europe and North America. The season for fruiting varies in different climates: fruiting occurs in summer and autumn across most of North America, but later in autumn and early winter on the Pacific coast. This species is often found in similar locations to Boletus edulis, and may appear in fairy rings. Conveyed with pine seedlings, it has been widely transported into the southern hemisphere, including Australia, New Zealand, South Africa and South America, where it can be found in the Brazilian states of Paraná, São Paulo, Minas Gerais, Rio Grande do Sul.
Ectomycorrhizal, A. muscaria forms symbiotic relationships with many trees, including pine, oak, spruce, fir, birch, and cedar. Commonly seen under introduced trees, A. muscaria is the fungal equivalent of a weed in New Zealand, Tasmania and Victoria, forming new associations with southern beech (Nothofagus). The species is also invading a rainforest in Australia, where it may be displacing the native species. It appears to be spreading northwards, with recent reports placing it near Port Macquarie on the New South Wales north coast. It was recorded under silver birch (Betula pendula) in Manjimup, Western Australia in 2010. Although it has apparently not spread to eucalypts in Australia, it has been recorded associating with them in Portugal. Commonly found throughout the Great Southern region of Western Australia, it is regularly found growing on Pinus radiata.
Toxicity
A. muscaria poisoning has occurred in young children and in people who ingested the mushrooms for a hallucinogenic experience, or who confused it with an edible species.
A. muscaria contains several biologically active agents, at least one of which, muscimol, is known to be psychoactive. Ibotenic acid, a neurotoxin, serves as a prodrug to muscimol, with a small amount likely converting to muscimol after ingestion. An active dose in adults is approximately 6 mg muscimol or 30 to 60 mg ibotenic acid; this is typically about the amount found in one cap of Amanita muscaria. The amount and ratio of chemical compounds per mushroom varies widely from region to region and season to season, which can further confuse the issue. Spring and summer mushrooms have been reported to contain up to 10 times more ibotenic acid and muscimol than autumn fruitings.
Deaths from A. muscaria have been reported in historical journal articles and newspaper reports, but with modern medical treatment, fatal poisoning from ingesting this mushroom is extremely rare. Many books list A. muscaria as deadly, but according to David Arora, this is an error that implies the mushroom is far more toxic than it is. Furthermore, The North American Mycological Association has stated that there were "no reliably documented cases of death from toxins in these mushrooms in the past 100 years".
The active constituents of this species are water-soluble, and boiling and then discarding the cooking water at least partly detoxifies A. muscaria. Drying may increase potency, as the process facilitates the conversion of ibotenic acid to the more potent muscimol. According to some sources, once detoxified, the mushroom becomes edible. Patrick Harding describes the Sami custom of processing the fly agaric through reindeer.
Pharmacology
Muscarine, discovered in 1869, was long thought to be the active hallucinogenic agent in A. muscaria. Muscarine binds with muscarinic acetylcholine receptors leading to the excitation of neurons bearing these receptors. The levels of muscarine in Amanita muscaria are minute when compared with other poisonous fungi such as Inosperma erubescens, the small white Clitocybe species C. dealbata and C. rivulosa. The level of muscarine in A. muscaria is too low to play a role in the symptoms of poisoning.
The major toxins involved in A. muscaria poisoning are muscimol (3-hydroxy-5-aminomethyl-1-isoxazole, an unsaturated cyclic hydroxamic acid) and the related amino acid ibotenic acid. Muscimol is the product of the decarboxylation (usually by drying) of ibotenic acid. Muscimol and ibotenic acid were discovered in the mid-20th century. Researchers in England, Japan, and Switzerland showed that the effects produced were due mainly to ibotenic acid and muscimol, not muscarine. These toxins are not distributed uniformly in the mushroom. Most are detected in the cap of the fruit, a moderate amount in the base, with the smallest amount in the stalk. Quite rapidly, between 20 and 90 minutes after ingestion, a substantial fraction of ibotenic acid is excreted unmetabolised in the urine of the consumer. Almost no muscimol is excreted when pure ibotenic acid is eaten, but muscimol is detectable in the urine after eating A. muscaria, which contains both ibotenic acid and muscimol.
Ibotenic acid and muscimol are structurally related to each other and to two major neurotransmitters of the central nervous system: glutamic acid and GABA respectively. Ibotenic acid and muscimol act like these neurotransmitters, muscimol being a potent GABAA agonist, while ibotenic acid is an agonist of NMDA glutamate receptors and certain metabotropic glutamate receptors which are involved in the control of neuronal activity. It is these interactions which are thought to cause the psychoactive effects found in intoxication.
Muscazone is another compound that has more recently been isolated from European specimens of the fly agaric. It is a product of the breakdown of ibotenic acid by ultraviolet radiation. Muscazone is of minor pharmacological activity compared with the other agents. Amanita muscaria and related species are known as effective bioaccumulators of vanadium; some species concentrate vanadium to levels of up to 400 times those typically found in plants. Vanadium is present in fruit-bodies as an organometallic compound called amavadine. The biological importance of the accumulation process is unknown.
Symptoms
Fly agarics are best known for the unpredictability of their effects. Depending on habitat and the amount ingested per body weight, effects can range from mild nausea and twitching to drowsiness, cholinergic crisis-like effects (low blood pressure, sweating and salivation), auditory and visual distortions, mood changes, euphoria, relaxation, ataxia, and loss of equilibrium (as with tetanus).
In cases of serious poisoning the mushroom causes delirium, somewhat similar in effect to anticholinergic poisoning (such as that caused by Datura stramonium), characterised by bouts of marked agitation with confusion, hallucinations, and irritability followed by periods of central nervous system depression. Seizures and coma may also occur in severe poisonings. Symptoms typically appear after around 30 to 90 minutes and peak within three hours, but certain effects can last for several days. In the majority of cases recovery is complete within 12 to 24 hours. The effect is highly variable between individuals, with similar doses potentially causing quite different reactions. Some people suffering intoxication have exhibited headaches up to ten hours afterwards. Retrograde amnesia and somnolence can result following recovery.
Treatment
Medical attention should be sought in cases of suspected poisoning. If the delay between ingestion and treatment is less than four hours, activated charcoal is given. Gastric lavage can be considered if the patient presents within one hour of ingestion. Inducing vomiting with syrup of ipecac is no longer recommended in any poisoning situation.
There is no antidote, and supportive care is the mainstay of further treatment for intoxication. Though the mushroom is sometimes referred to as a deliriant, and although muscarine was first isolated from A. muscaria and is its namesake, muscimol does not have action, either as an agonist or antagonist, at the muscarinic acetylcholine receptor site, and therefore atropine or physostigmine is not recommended as an antidote. If a patient is delirious or agitated, this can usually be treated by reassurance and, if necessary, physical restraints. A benzodiazepine such as diazepam or lorazepam can be used to control combativeness, agitation, muscular overactivity, and seizures. Only small doses should be used, as they may worsen the respiratory depressant effects of muscimol. Recurrent vomiting is rare, but if present may lead to fluid and electrolyte imbalances; intravenous rehydration or electrolyte replacement may be required. Serious cases may develop loss of consciousness or coma, and may need intubation and artificial ventilation. Hemodialysis can remove the toxins, although this intervention is generally considered unnecessary. With modern medical treatment the prognosis is typically good following supportive treatment.
Uses
Psychoactive
The wide range of psychoactive effects have been variously described as depressant, sedative-hypnotic, psychedelic, dissociative, or deliriant; paradoxical effects such as stimulation may occur however. Perceptual phenomena such as synesthesia, macropsia, and micropsia may occur; the latter two effects may occur either simultaneously or alternatingly, as part of Alice in Wonderland syndrome, collectively known as dysmetropsia, along with related distortions pelopsia and teleopsia. Some users report lucid dreaming under the influence of its hypnotic effects. Unlike Psilocybe cubensis, A. muscaria cannot be commercially cultivated, due to its mycorrhizal relationship with the roots of pine trees. However, following the outlawing of psilocybin mushrooms in the United Kingdom in 2006, the sale of the still legal A. muscaria began increasing.
Marija Gimbutas reported to R. Gordon Wasson that in remote areas of Lithuania, A. muscaria has been consumed at wedding feasts, in which mushrooms were mixed with vodka. She also reported that the Lithuanians used to export A. muscaria to the Sami in the Far North for use in shamanic rituals. The Lithuanian festivities are the only report that Wasson received of ingestion of fly agaric for religious use in Eastern Europe.
Siberia
A. muscaria was widely used as an entheogen by many of the indigenous peoples of Siberia. Its use was known among almost all of the Uralic-speaking peoples of western Siberia and the Paleosiberian-speaking peoples of the Russian Far East. There are only isolated reports of A. muscaria use among the Tungusic and Turkic peoples of central Siberia and it is believed that on the whole entheogenic use of A. muscaria was not practised by these peoples. In western Siberia, the use of A. muscaria was restricted to shamans, who used it as an alternative method of achieving a trance state. (Normally, Siberian shamans achieve trance by prolonged drumming and dancing.) In eastern Siberia, A. muscaria was used by both shamans and laypeople alike, and was used recreationally as well as religiously. In eastern Siberia, the shaman would take the mushrooms, and others would drink his urine. This urine, still containing psychoactive elements, may be more potent than the A. muscaria mushrooms with fewer negative effects such as sweating and twitching, suggesting that the initial user may act as a screening filter for other components in the mushroom.
The Koryak of eastern Siberia have a story about the fly agaric (wapaq) which enabled Big Raven to carry a whale to its home. In the story, the deity Vahiyinin ("Existence") spat onto earth, his spittle became the wapaq, and his saliva became the warts. After experiencing the power of the wapaq, Raven was so exhilarated that he told it to grow forever on earth so his children, the people, could learn from it. Among the Koryaks, one report said that the poor would consume the urine of the wealthy, who could afford to buy the mushrooms. It was reported that the local reindeer would often follow an individual intoxicated by the muscimol mushroom, and if said individual were to urinate in snow the reindeer would become similarly intoxicated, and the Koryak people would use the drunken state of the reindeer to more easily rope and hunt them.
Recent rise in popularity
As a result of a lack of regulation, the use of Amanita muscaria as a popular legal alternative to hallucinogens has grown exponentially in recent years. In 2024, Google searches for Amanita muscaria rose nearly 200% from the previous year, a trend that an article published in the American Journal of Preventive Medicine correlated with the sudden commercialization of Amanita muscaria products on the internet.
While Amanita mushrooms are unscheduled in the United States, the sale of Amanita products exists in a legal gray area as they are listed as a poison by the FDA and are not approved to be used in dietary supplements, with some drawing comparisons to the controversial legal status of hemp-derived cannabinoids.
A recent outbreak of poisonings and at least one death associated with products containing Amanita muscaria extracts has sparked debates regarding the regulatory status of Amanita mushrooms and their psychoactive constituents. These products often use misleading advertising, such as erroneous comparisons to Psilocybin mushrooms or simply not disclosing the inclusion of Amanita mushrooms on the packaging.
Other reports and theories
The Finnish historian T. I. Itkonen mentions that A. muscaria was once used among the Sámi peoples. Sorcerers in Inari would consume fly agarics with seven spots. In 1979, Said Gholam Mochtar and Hartmut Geerken published an article in which they claimed to have discovered a tradition of medicinal and recreational use of this mushroom among a Parachi-speaking group in Afghanistan. There are also unconfirmed reports of religious use of A. muscaria among two Subarctic Native American tribes. Ojibwa ethnobotanist Keewaydinoquay Peschel reported its use among her people, where it was known by a name meaning "red-top mushroom". This information was enthusiastically received by Wasson, although evidence from other sources was lacking. There is also one account of a Euro-American who claims to have been initiated into traditional Tlicho use of Amanita muscaria.
Mycophilosopher Martijn Benders has proposed a novel evolutionary theory involving Amanita muscaria. In his book Amanita Muscaria – the Book of the Empress, Benders argues that a precursor of ibotenic acid, a compound found in the mushroom, was present in ancient seaweed and played a significant role in the evolution of life. According to this hypothesis, the compound influenced the twitching movements of early aquatic organisms, leading to the development of behaviors such as jumping onto land—a crucial step in the evolution of terrestrial species.
The flying reindeer of Santa Claus, who is called Joulupukki in Finland, could symbolize the use of A. muscaria by Sámi shamans. However, Sámi scholars and the Sámi peoples themselves refute any connection between Santa Claus and Sámi history or culture. "The story of Santa emerging from a Sámi shamanic tradition has a critical number of flaws," asserts Tim Frandy, assistant professor of Nordic Studies at the University of British Columbia and a member of the Sámi descendent community in North America. "The theory has been widely criticized by Sámi people as a stereotypical and problematic romanticized misreading of actual Sámi culture."
Vikings
The notion that Vikings used A. muscaria to produce their berserker rages was first suggested by the Swedish professor Samuel Ödmann in 1784. Ödmann based his theories on reports about the use of fly agaric among Siberian shamans. The notion has become widespread since the 19th century, but no contemporary sources mention this use or anything similar in their description of berserkers. Muscimol is generally a mild relaxant, but it can create a range of different reactions within a group of people. It is possible that it could make a person angry, or cause them to be "very jolly or sad, jump about, dance, sing or give way to great fright". Comparative analysis of symptoms has, however, since shown Hyoscyamus niger to be a better fit to the state that characterises the berserker rage.
Soma
In 1968, R. Gordon Wasson proposed that A. muscaria was the soma talked about in the Rigveda of India, a claim which received widespread publicity and popular support at the time. He noted that descriptions of Soma omitted any description of roots, stems or seeds, which suggested a mushroom, and used the adjective hári "dazzling" or "flaming" which the author interprets as meaning red. One line described men urinating Soma; this recalled the practice of recycling urine in Siberia. Soma is mentioned as coming "from the mountains", which Wasson interpreted as the mushroom having been brought in with the Aryan migrants from the north. Indian scholars Santosh Kumar Dash and Sachinanda Padhy pointed out that both eating of mushrooms and drinking of urine were proscribed, using as a source the Manusmṛti.
In 1971, Vedic scholar John Brough from Cambridge University rejected Wasson's theory and noted that the language was too vague to determine a description of Soma. In his 1976 survey, Hallucinogens and Culture, anthropologist Peter T. Furst evaluated the evidence for and against the identification of the fly agaric mushroom as the Vedic Soma, concluding cautiously in its favour. Kevin Feeney and Trent Austin compared the references in the Vedas with the filtering mechanisms in the preparation of Amanita muscaria and published findings supporting the proposal that fly-agaric mushrooms could be a likely candidate for the sacrament. Other proposed candidates include Psilocybe cubensis, Peganum harmala, and Ephedra.
Christianity
Philologist, archaeologist, and Dead Sea Scrolls scholar John Marco Allegro postulated that early Christian theology was derived from a fertility cult revolving around the entheogenic consumption of A. muscaria in his 1970 book The Sacred Mushroom and the Cross. This theory has found little support by scholars outside the field of ethnomycology. The book was widely criticized by academics and theologians, including Sir Godfrey Driver, emeritus Professor of Semitic Philology at Oxford University and Henry Chadwick, the Dean of Christ Church, Oxford. Christian author John C. King wrote a detailed rebuttal of Allegro's theory in the 1970 book A Christian View of the Mushroom Myth; he notes that neither fly agarics nor their host trees are found in the Middle East, even though cedars and pines are found there, and highlights the tenuous nature of the links between biblical and Sumerian names coined by Allegro. He concludes that if the theory were true, the use of the mushroom must have been "the best kept secret in the world" as it was so well concealed for two thousand years.
Fly trap
Amanita muscaria is traditionally used for catching flies, possibly due to its content of ibotenic acid and muscimol, which led to its common name "fly agaric". Recently, an analysis of nine different methods for preparing A. muscaria for catching flies in Slovenia has shown that the release of ibotenic acid and muscimol did not depend on the solvent (milk or water) and that thermal and mechanical processing led to faster extraction of ibotenic acid and muscimol.
Culinary
The toxins in A. muscaria are water-soluble: parboiling A. muscaria fruit bodies can detoxify them and render them edible, although consumption of the mushroom as a food has never been widespread. The consumption of detoxified A. muscaria has been practiced in some parts of Europe (notably by Russian settlers in Siberia) since at least the 19th century, and likely earlier. The German physician and naturalist Georg Heinrich von Langsdorff wrote the earliest published account on how to detoxify this mushroom in 1823. In the late 19th century, the French physician Félix Archimède Pouchet was a populariser and advocate of A. muscaria consumption, comparing it to manioc, an important food source in tropical South America that must also be detoxified before consumption.
Use of this mushroom as a food source also seems to have existed in North America. A classic account of this use of A. muscaria by an African-American mushroom seller in Washington, D.C., in the late 19th century is given by American botanist Frederick Vernon Coville. In this case, the mushroom, after parboiling and soaking in vinegar, is made into a mushroom sauce for steak. It is also consumed as a food in parts of Japan. The most well-known current use as an edible mushroom is in Nagano Prefecture, Japan. There, it is primarily salted and pickled.
A 2008 paper by food historian William Rubel and mycologist David Arora gives a history of consumption of A. muscaria as a food and describes detoxification methods. They advocate that Amanita muscaria be described in field guides as an edible mushroom, though accompanied by a description on how to detoxify it. The authors state that the widespread descriptions in field guides of this mushroom as poisonous is a reflection of cultural bias, as several other popular edible species, notably morels, are also toxic unless properly cooked.
In culture
The red-and-white spotted toadstool is a common image in many aspects of popular culture. Garden ornaments and children's picture books depicting gnomes and fairies, such as the Smurfs, often show fly agarics used as seats, or homes. Fly agarics have been featured in paintings since the Renaissance, albeit in a subtle manner. For instance, in Hieronymus Bosch's painting, The Garden of Earthly Delights, the mushroom can be seen on the left-hand panel of the work. In the Victorian era they became more visible, becoming the main topic of some fairy paintings. Two of the most famous uses of the mushroom are in the Mario franchise (specifically two of the Super Mushroom power-up items and the platforms in several stages which are based on a fly agaric), and the dancing mushroom sequence in the 1940 Disney film Fantasia.
An account of the journeys of Philip von Strahlenberg to Siberia and his descriptions of the use of the mukhomor there was published in English in 1736. The drinking of urine of those who had consumed the mushroom was commented on by Anglo-Irish writer Oliver Goldsmith in his widely read 1762 novel, Citizen of the World. The mushroom had been identified as the fly agaric by this time. Other authors recorded the distortions of the size of perceived objects while intoxicated by the fungus, including naturalist Mordecai Cubitt Cooke in his books The Seven Sisters of Sleep and A Plain and Easy Account of British Fungi. This observation is thought to have formed the basis of the effects of eating the mushroom in the 1865 popular story Alice's Adventures in Wonderland. A hallucinogenic "scarlet toadstool" from Lappland is featured as a plot element in Charles Kingsley's 1866 novel Hereward the Wake based on the medieval figure of the same name. Thomas Pynchon's 1973 novel Gravity's Rainbow describes the fungus as a "relative of the poisonous Destroying angel" and presents a detailed description of a character preparing a cookie bake mixture from harvested Amanita muscaria. Fly agaric shamanism—in the context of a surviving Dionysian cult in the Peak District—is also explored in the 2003 novel Thursbitch by Alan Garner.
See also
List of Amanita species
Legal status of psychoactive Amanita mushrooms
References
Works cited
External links
Webpages on Amanita species by Tulloss and Yang Zhuliang
Amanita on erowid.org
Amanita muscaria, Amanita pantherina and others (Group PIM G026) by IPCS INCHEM
Amanita
Entheogens
Fungi of Asia
Fungi of Europe
Fungi of North America
Oneirogens
Poisonous fungi
Psychoactive fungi
Fungi described in 1753
Soma (drink)
Taxa named by Carl Linnaeus
Fungi of the United Kingdom
Fungus species | Amanita muscaria | Biology,Environmental_science | 7,731 |
30,557,715 | https://en.wikipedia.org/wiki/Bulletin%20of%20the%20Astronomical%20Society%20of%20India | The Bulletin of the Astronomical Society of India was the official quarterly peer-reviewed scientific journal of the Astronomical Society of India, established in 1973 and published until the end of 2014. It covered all areas of astrophysics and astronomy. Its final editor-in-chief was D. J. Saikia (National Centre for Radio Astrophysics).
Editors
The following persons have been editor-in-chief of the journal:
D. J. Saikia (2010–2014)
G. C. Anupama (2004–2010)
H. C. Bhatt (2001–2004)
Vinod Krishnan (1995–2001)
K. D. Abhyankar (1992–1995)
S. K. Trehan (1981–1992)
M. S. Vardya (1974–1981)
References
External links
Academic journals established in 1973
Astronomy journals
English-language journals
Quarterly journals
Academic journals published by learned and professional societies of India | Bulletin of the Astronomical Society of India | Astronomy | 188 |
4,959,339 | https://en.wikipedia.org/wiki/Anti-flash%20white | Anti-flash white is a white colour commonly seen on British, Soviet, and U.S. nuclear bombers. The purpose of the colour is to reflect some of the thermal radiation from a nuclear explosion, protecting the aircraft and its occupants.
China
Some variants of the Xian H-6 had the underside of the fuselage painted anti-flash white.
Soviet Union/Russia
Some nuclear bombers had the underside of the fuselage painted anti-flash white with the upper surfaces painted light silver-gray. This was true for the specially fitted, single Soviet Tu-95V bomber that test-deployed the most powerful bomb of any kind – the 50-plus-megaton Tsar Bomba on 30 October 1961 – as it had the anti-flash white on all its undersurfaces and sides.
The Tupolev Tu-160 of the 1980s was the first series-built Soviet/Russian bomber aircraft to be painted anti-flash white all over, leading to its Beliy Lebed ("White Swan") Russian nickname.
United Kingdom
Anti-flash white was used on the Royal Air Force V bomber force and the Royal Navy Blackburn Buccaneer when used in the nuclear strike role. Nuclear bombers were given salmon-pink and baby-blue roundels and fin flashes rather than the traditional dark red, white and blue – though not at first, only once the problem was considered.
Anti-flash white was applied to several prototype aircraft, including the British Aircraft Corporation TSR-2. Paint used on the Avro Vulcan was manufactured by Cellon, and that on the Handley Page Victor by Titanine Ltd.
United States
Many Strategic Air Command nuclear bombers carried anti-flash white without insignia on the underside of the fuselage with light silver-gray or natural metal (later light camouflage) on the upper surfaces.
United States Navy E-6 Mercury aircraft remained painted in anti-flash white as of October 2023.
Other aircraft
In addition to these military aircraft, Concorde was painted white to reduce the additional heating effect on the aluminium skin caused by the sun whilst the aircraft was flying at high altitudes, the skin temperature already being raised substantially at Mach 2 by aerodynamic heating.
Aircraft with at least part of the fuselage painted anti-flash white on nuclear delivery variants:
CF-105 Arrow prototypes
Xian H-6
Myasishchev M-4
Tupolev Tu-16
Tupolev Tu-95
Tupolev Tu-22M
Tupolev Tu-160
V bombers
Avro Vulcan
Handley Page Victor
Vickers Valiant
Blackburn Buccaneer
English Electric Canberra (experimental)
BAC TSR-2 prototype
Saunders-Roe SR.53 interceptor prototype
Convair B-36
Boeing B-47 Stratojet
Boeing B-52 Stratofortress
North American A-5 Vigilante
North American XB-70 Valkyrie prototype
Rockwell B-1 Lancer prototype
See also
Royal Air Force roundels
List of colors
The House in the Middle – film that demonstrates the thermal flash protective effects of the related white wash paint
References
Explosion protection
Shades of white | Anti-flash white | Chemistry,Engineering | 613 |
8,046,943 | https://en.wikipedia.org/wiki/Microtransaction | Microtransactions (mtx) refers to a business model where users can purchase in-game virtual goods with micropayments. Microtransactions are often used in free-to-play games to provide a revenue source for the developers. While microtransactions are a staple of the mobile app market, they are also seen on PC software such as Valve's Steam digital distribution platform, as well as console gaming.
Free-to-play games that include a microtransaction model are sometimes referred to as "freemium". Another term, "pay-to-win", is sometimes used pejoratively to refer to games where purchasing items in-game can give a player an advantage over other players, particularly if the items cannot be obtained through free means. The objective with a free-to-play microtransaction model is to involve more players in the game by providing desirable items or features that players can purchase if they lack the skill or available time to earn these through regular gameplay. The developer's presumed marketing strategy is that, in the long term, the revenue from a microtransaction system will outweigh the revenue from a one-time-purchase game.
Loot boxes are another form of microtransactions. Through purchasing a loot box, the player acquires a seemingly random assortment of items. Loot boxes result in high revenues because instead of a one-time purchase for the desired item, users may have to buy multiple boxes. This method has also been called a form of underage gambling. Items and features available by microtransaction can range from cosmetic (such as decorative character attire) to functional (such as weapons and items). Some games allow players to purchase items that can be acquired through normal means, but some games include items that can only be obtained through microtransaction. Some developers ensure that only cosmetic items are accessible this way to keep gameplay fair and stable.
The reasons why people, especially children, continue to pay for microtransactions are embedded in human psychology. There has been considerable discussion over microtransactions and their effects on children, as well as regulation and legislation efforts. Microtransactions are most commonly provided through a custom store interface placed inside the app for which the items are being sold. Apple and Google both provide frameworks for initiating and processing transactions, and both take 30 percent of all revenue generated by microtransactions sold through in-app purchases in their respective app stores.
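As a worked illustration of the revenue split just described (a sketch only; the 30 percent figure is the nominal headline rate, and actual rates vary by store program and have changed over time):

```python
# Hypothetical sketch of the nominal 30% platform fee described above.
def developer_share(price_usd: float, platform_cut: float = 0.30) -> float:
    """Return the developer's share of one in-app purchase after the store fee."""
    return price_usd * (1.0 - platform_cut)

print(developer_share(0.99))   # 0.693 -> about $0.69 reaches the developer
print(developer_share(4.99))   # 3.493 -> about $3.49
```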
History
Initially, microtransactions in games took the form of exchanging real-life money for the virtual currency used in the game that the user was playing. The arcade game Double Dragon 3: The Rosetta Stone (1990) was infamous for its use of microtransactions to purchase items in the game. It had shops where players would insert coins into arcade machines to purchase upgrades, power-ups, health, weapons, special moves, and player characters. The microtransaction revenue model gained popularity in South Korea with the success of Nexon's online free-to-play games, starting with QuizQuiz (1999), followed by games such as MapleStory (2003), Mabinogi (2004), and Dungeon Fighter Online (2004).
Notable examples of games that used this model in the early 2000s include the social networking site Habbo Hotel (2001), developed by the Finnish company Sulake, and Linden Lab's 2003 virtual world game Second Life. Both free games allow users to customize the clothing and style of their characters; buy and collect furniture; and purchase special, "flashy" animations to show off to others using some type of virtual currency. Habbo Hotel uses three different kinds of currency: Credits (or coins), Duckets (which are earned through accomplishing specific achievements during gameplay), and Diamonds. Diamonds are only obtained through buying Credits with real-life money. In Second Life, the Linden Dollar (L$) is the virtual currency used to power the game's internal economy. L$ can be bought with real money through a marketplace developed by Linden Lab themselves, LindeX. Second Life in particular has generated massive amounts of economic activity and profits for both Linden Lab and Second Life's users. In September 2005, $3,596,674 worth of transactions were processed on the platform. Both games are still active today.
The Elder Scrolls IV: Oblivion was released in March 2006 by Bethesda Softworks. From April 2006 onwards, Bethesda began releasing small, downloadable packages of content from their website and over the Xbox Live Marketplace, for the equivalent of between one and three US dollars. The first package, a set of horse armor (barding) for Oblivion's steeds, was released on April 3, 2006, costing 200 Marketplace points, equivalent to US$2.50 or £1.50; the corresponding PC release cost was US$1.99. Bethesda offered no rationale for the price discrepancy. These were not the first Oblivion-related Marketplace releases (the first was a series of dashboard themes and picture packs released prior to Oblivion's publication, in February 2006, for a nominal fee) nor were they entirely unexpected: Bethesda had previously announced their desire to support the Xbox release with downloadable Marketplace content, and other publishers had already begun to release similar packages for their games, at similar prices. A November 2005 release of a "Winter Warrior Pack" for Kameo: Elements of Power was also priced at 200 Marketplace points, and similar content additions had been scheduled for Project Gotham Racing 3 and Perfect Dark Zero. Indeed, Marketplace content additions formed a significant part of a March 2006 Microsoft announcement regarding the future of Xbox Live. "Downloadable in-game content is a main focus of Microsoft's strategy heading into the next-gen console war", stated one GameSpot reporter. "With more consoles on their way to retail, 80 games available by June, and new content and experiences coming to Xbox Live all the time, there has never been a better time to own an Xbox 360", announced Peter Moore. Nonetheless, although Xbox Live Arcade games, picture packs, dashboards and profile themes continued to be a Marketplace success for Microsoft, the aforementioned in-game content remained sparse. Pete Hines asserted,
"We were the first ones to do downloadable content like that – some people had done similar things, but no one had really done additions where you add new stuff to your existing game." There was no pressure from Microsoft to make the move.
The horse armor content sold relatively poorly, ranking ninth out of ten in DLC sales for Oblivion by 2009. Despite this, Oblivion horse armor became a model for many games that followed for implementing microtransactions in video games, and is considered the first primary example and often synonymous for microtransactions.
In June 2008 Electronic Arts introduced an online Store for The Sims 2. It allowed players to purchase points that can be spent on in-game items. The Store has also been a part of The Sims 3 since the game's release. In The Sims 4 Electronic Arts removed the ability to buy single items, instead downloadable content is provided exclusively via expansion packs.
In the late 2000s and early 2010s, games like Zynga's FarmVille (2009) on Facebook, Electronic Arts's The Simpsons: Tapped Out (2012) and Supercell's Clash of Clans (2012) pioneered a new approach to implementing microtransactions into games. In addition to using virtual currency to purchase items, tools, furniture, and animals, these mobile games made it so users can purchase currency and then use that currency to reduce or eliminate the wait times attached to certain actions, like planting and growing carrots or collecting taxes from the townspeople.
In March 2009 the Ultimate Team game mode was introduced in FIFA 09 in which gamers can buy "packs" containing items such as players, stadiums and contract extensions with currency earned by playing the game or real world money. EA followed this success by introducing the game mode to Madden NFL beginning with Madden NFL 10 in January 2010. In March 2014 EA marked the fifth anniversary of Ultimate Team and shared statistics showing the explosive growth in popularity of the game mode. By the late 2010s, Ultimate Team was generating billions of dollars every year.
From around 2017, another major transition occurred in how microtransactions are implemented in games. "Live-service" games, like Epic Games's Fortnite, with constantly changing and updating content, became more prevalent in the video game market. These types of games heavily employ the loot box microtransaction type. The September 2019 report by the UK House of Commons Digital, Culture, Media and Sport Committee defines loot boxes as "... items in video games that may be bought for real-world money, but which provide players with a randomised reward of uncertain value." The widespread usage of loot boxes by game developers and publishers has garnered a great amount of criticism from gamers in the past decade. Game-developing corporations, like Electronic Arts (EA) and Activision Blizzard, make billions of dollars through the purchase of their microtransactions. In FY2017, EA earned around $1.68 billion and Activision Blizzard over $4 billion from microtransactions.
The aforementioned Fortnite is an example of a microtransaction model in which all purchases are solely cosmetic: players can choose to purchase "skins" (cosmetic changes to the way characters, weapons, and vehicles look) to show off to other players. However, a player can experience all the content of the game and be on an even playing field without purchasing any microtransactions because no feature or gameplay-affecting piece of content is locked behind a payment. These games still occasionally face accusations of being "pay-to-win" as combat-focused video games, such as Apex Legends or Call of Duty: Warzone, offer skins that are inspired by real-world military equipment – often including camouflage – which technically can give players an advantage by obscuring them from human opponents.
Impact
Mobile web analytics company Flurry reported on July 7, 2011, that based on its research, the revenue from free-to-play games had overtaken revenue from premium games that earn revenue through traditional means in Apple's App Store, for the top 100 grossing games when comparing the results for the months of January and June 2011. It used data that it analyzed through 90,000 apps that installed the company's software in order to roughly determine the amount of revenue generated by other popular apps. They discovered that free games represented 39% of the total revenue from January, and that the number jumped to 65% by June, helped in part by the fact that over 75% of the 100 top grossing apps are games. This makes free-to-play the single most dominant business model in the mobile apps industry. They also learned that the number of people that spend money on in-game items in free-to-play games ranges from 0.5% to 6%, depending on a game's quality and mechanics. Even though this means that a large number of people will never spend money in a game, it also means that the people that do spend money could amount to a sizeable number because the game was given away for free.
A later study found that over 92% of revenue generated on Android and iOS in 2013 came from free-to-play games such as Candy Crush Saga.
Electronic Arts Corporate Vice-President Peter Moore speculated in June 2012 that within 5 to 10 years, all games would have transitioned to the microtransaction model. Tommy Palm of King (Candy Crush Saga) expressed in 2014 his belief that all games will eventually be free-to-play. According to ex-BioWare developer Manveer Heir in a 2017 interview, microtransactions have become a factor in what types of games are planned for production.
Free-to-play coupled with microtransactions may be used as a response to piracy. An example of this is the mobile game Dead Trigger switching to a free-to-play model due to a high rate of piracy. While microtransactions are considered more robust and more difficult to circumvent than digital rights management, in some cases they can be bypassed. In 2012, a server was created by a Russian developer which provided falsified authentication for iOS in-app purchases. This allowed users to obtain features requiring a microtransaction without paying.
Consumer organizations have criticized that some video games do not describe adequately that these purchases are made with real currency rather than virtual currency. Also, some platforms do not require passwords to finalize a microtransaction. This has resulted in consumers getting unexpectedly high bills, often referred to as a "bill shock".
Criticism and regulation
In the mid and late 2010s, people became increasingly aware of how microtransactions operate in games, and they have become much more critical of their usage. The commonly cited issues of microtransactions from gamers are:
Loot box rewards are determined by random chance and percentages, plus they can directly influence gameplay via the items they bestow.
They sometimes cost too much money for what they are worth. For example, a bundle of 50 loot boxes in Blizzard's first-person shooter game Overwatch costs $39.99.
They may facilitate gambling behaviours in people already suffering from gambling issues, and they can lead people to overspend money on the game, whether or not they can afford to.
Games with loot boxes, like FIFA, can become "pay-to-win" (in order to advance past certain points, or to become the best in the game, it is virtually required to pay real money to receive in-game currency to purchase items or to pay for bigger and better items directly).
Microtransactions in games that are not free-to-play means that gamers are paying more money after already paying to experience the full game.
Legislative efforts to regulate microtransactions
The implementation of microtransactions and the subsequent backlash from gamers and the gaming media have caused governments from all around the world to look into these games and their microtransaction mechanics. In April 2018, the Netherlands and Belgium banned the sale of microtransactions in games sold in their countries. The specific games Belgium looked at most closely were EA's Star Wars Battlefront II (2017) and FIFA 18, Blizzard's Overwatch, and Valve's Counter-Strike: Global Offensive. The Belgian government determined the microtransactions in these games to be "games of chance", and as such they are highly regulated under its gambling laws. The exception was Star Wars Battlefront II, which had removed its gameplay-altering microtransactions in March 2018 (while keeping cosmetic microtransactions) after players calculated that it would take over 48 hours of play to obtain Luke Skywalker and complained to EA about this extreme threshold for unlocking popular characters.
Games would have to remove their microtransactions in order to be sold in the country. Belgium's government said that companies that do not comply could face "a prison sentence of up to 5 years and a fine of up to 800,000 euros". While most game publishers agreed to modify their games' loot boxes in accordance with governmental laws, or otherwise as a result of negative reactions, others, such as Electronic Arts, contested that loot boxes do not constitute gambling. However, EA eventually complied with the Belgian government's declaration and made it so players in Belgium cannot purchase FIFA Points, the premium in-game currency (bought with real money) used in FIFA's "Ultimate Team" game mode. Professional FIFA players in Belgium were disappointed, because not being able to buy FIFA Points makes it harder for them to compete and succeed in the FIFA Global Series, the EA-sponsored e-sports competition for FIFA games – a sign of how "pay-to-win" they feel FIFA Ultimate Team is.
In the United States, there have been some calls to introduce legislation to regulate microtransactions in video games, whether on mobile, consoles, or PC, and numerous attempts have been made recently to pass such legislation. In November 2017, Hawaii representatives Chris Lee and Sean Quinlan, during a news conference, explained how loot boxes and microtransactions prey on children and said that they were working to introduce bills into their state's house and senate. A few months later, in February 2018, they successfully put four bills onto the floor of the Hawaii State Legislature. Two of those bills would make it so games containing loot boxes could not be sold to people under the age of 21, and the other two would force game publishers to put labels on the cases of their games that have loot boxes in them, as well as make them transparent about the item drop rates for the rewards in their games' loot boxes. However, all four bills failed to pass through the Hawaii State Legislature in March 2018. In May 2019, Republican Senator Josh Hawley of Missouri introduced a bill named "The Protecting Children from Abusive Games Act" to ban loot boxes and pay-to-win microtransactions in games played by minors, using similar conditions to those previously outlined in the Children's Online Privacy Protection Act. The bill received some bipartisan support in the form of two co-sponsors, Democrats Richard Blumenthal of Connecticut and Ed Markey of Massachusetts.
The United Kingdom has also been closely observing microtransactions, especially loot boxes, and their effects on children. A major report by the UK House of Commons Digital, Culture, Media and Sport Committee, released in September 2019, called for microtransactions and loot boxes to be banned for, or regulated with respect to, children, and for the games industry to take up more responsibility for protecting players from the harms of microtransactions that simulate gambling. Specifically, the committee's conclusion is that such microtransactions should be classified as gambling in the UK and therefore be subject to current gambling and age-restriction laws. In October 2019, the Children's Commissioner for England, which promotes and protects children's rights, released a report describing the experiences and thoughts of children aged 10–16 and the effects, positive and negative, of gaming on them. Within the report, some of the children directly stated to the interviewers that the microtransactions and loot boxes that they encounter and subsequently buy are just like gambling. The report concludes that showing the odds and percentages of certain microtransactions to players does not go far enough and does not actually solve the problem.
Instead, they suggest that certain new features to protect children should be implemented in all games featuring microtransactions, like showing the all-time spending on a child's in-game account and having limits on the amount someone can spend daily. Additionally, they push for game developers and publishers to stop pressuring children to spend money on microtransactions in their games in order to progress through the game and for Parliament to change their current gambling laws to declare loot boxes as gambling and subject to gambling laws.
Psychology and ethics
Alongside questioning the legality of the extensive use of microtransactions, some gamers have also questioned the morality and ethics of selling microtransactions, especially to children. Researchers have studied the natural psychology behind both the selling and purchase of microtransactions.
Psychology
According to a post made by Gabe Duverge on the Touro University Worldwide (TUW) website, impulse buying is a significant part of the psychology behind people buying microtransactions. Essentially, many games, especially in the realm of mobile games and the "free-to-play" market, force a decision from the player to keep playing or not via a limited time pop-up on the screen that tells them that if they pay a certain amount of money (usually about 99 cents or a dollar), they can keep playing where they left off. This is another type of microtransaction and it has become increasingly common in the mobile games sphere as of late. Another psychological aspect that is used to get players to continue playing and buying microtransactions is called loss aversion. When a player continues to lose over and over again, they begin to crave the dopamine-filled, positive feelings that they feel when they win. As such, they become more inclined to spend money for items that will help them achieve that elusive win. Then when they do win, the player attributes their win with the item that they just bought, making it more likely that they will spend money whenever the player gets on a losing streak, and so the cycle continues on.
Ethics of selling microtransactions to children
During the past two decades, gaming has emerged as one of the top options children go to as a means of alleviating boredom. In an August 2019 report conducted by Parent Zone in the UK, they studied and gathered data directly from children between the ages of 10–16 years old about their experiences with online gaming and the microtransactions that the games that they play hold, as well as ask about how the microtransactions in these games have affected them (and/or their parents) socially and financially. A growing number of parents of children aged 5 to 15 years old are now concerned that their children could be pressured to perform microtransactions online.
Statistics
According to the Parent Zone study, in the United Kingdom, 97% of males and 90% of females aged 10–16 years old play video games. About 93% of people 10–16 years old in the United Kingdom have played video games, with many playing games that utilize an Internet connection. Many online games targeting younger audiences may include microtransactions. The primary items bought by children in these games are largely cosmetic items, specifically "skins", which modify the appearance of the in-game player.
In the case of Fortnite, many of the outfits and other cosmetics are locked behind a "battle pass" that the player must pay for. A "battle pass" is a tiered system where the player buys the pass and must unlock the tiers on their own. By completing challenges and other missions, they earn in-game items like outfits, emotes (special animations used to taunt opponents, celebrate victories, dance, and show off), and other cosmetics. The pass costs about $9.50 (or 950 of Fortnite's in-game virtual currency, V-Bucks), but the player can also pay about $28 (or 2,800 V-Bucks) to unlock the battle pass and automatically complete the first 25 of its 100 tiers.
A majority of the children surveyed feel that these kinds of microtransaction systems in their online games are complete rip-offs. 76% of them also believe that these online games purposely try to squeeze as much money as possible from them when they play the game. About half of the children expressed that they need to spend money on the game in order for it to be fun to them; this is due to many of these games' features, which are modes that the children want to play and experience, being locked behind microtransaction paywalls. As such, there is a large gap between the gaming experiences that non-paying players have and the experiences that paying players have.
Some other statistics and thoughts regarding loot boxes specifically were also collected from the children. Out of the 60% of children that know about loot boxes, a majority (91%) stated that the online games they play contain loot boxes in them, 59% of them would rather pay for in-game content individually and directly instead of through a collective and randomized loot box, and 44% believe that if loot boxes were eliminated from their online games the games would actually be a lot better. Additionally, 40% of the children who played a game with loot boxes in them paid for one, too. Overall, the report stated that of the children who were generally unhappy with the games they paid for or were gifted, 18% felt that way because certain features had to be bought after paying for the game already, effectively making it so they had to pay more than the normal full price of the game in order to play the full game. For 35% of the unsatisfied children, the game was simply not worth paying for, and 18% also felt that in-game microtransactions were not worth paying for. Ultimately, children feel that spending money on microtransactions has become the norm and an expectation when playing modern video games.
Social
For many children, the excitement felt when opening loot boxes and packs and obtaining items through microtransactions is very addictive. Opening these random boxes without knowing what is inside is, to them, similar to opening a present. The excitement and suspense are amplified when the experience is shared with friends. In the UK Children's Commissioner's report, the children who played FIFA felt that opening player packs is a game within the game. To them, opening packs creates variety because they can play some football games in the Ultimate Team game mode and then open some packs when they get bored of playing normal football matches.
Children might want to fit in by paying for microtransactions and loot boxes and obtaining very rare items in front of their friends, creating a lot of hype and excitement among them. This makes paying for microtransactions a very positive experience for them. However, when children buy items in front of their friends, peer pressure often sets in. Friends pressuring the player to continue buying packs, hoping that they will be able to see them get a rare item, can cause the player to spend more than they can actually afford. The pressure to spend money on in-game content also stems from children seeing their friends have these special, rare items and wanting to have them themselves. Essentially, when everyone around them has an item, they will want it too in order to feel like a part of the group.
Peer pressure is not the only way children's social gaming experiences are altered. As noted in both the Parent Zone report and the Children's Commissioner's assessment, children who play Fortnite explained that classism, as in discrimination against people of different economic and social classes, exists among the players of the game. Some children fear that if they have the free 'default' skin in Fortnite, no one – neither friends nor random strangers – will want to play with them, as the default skin is seen as a symbol of a player being bad at the game. The default skin is also used to judge and insult the player whose in-game avatar wears it. Players wearing default skins are considered 'financially poor' and very 'uncool' by their peers and the game's community, so children spend money on microtransactions in order to avoid having that 'tag' or target on them.
The media that children consume outside of the game can also affect their social interactions and in-game spending. A popular mode of entertainment for children is watching YouTube videos of other people playing their favorite games. In the case of FIFA, children may watch a popular YouTuber constantly opening player packs in the Ultimate Team game mode; unlike the children, these content creators have the money to pay for the packs, YouTube being their major source of income.
Financial
The amount of money that children spend on microtransactions has continued to grow because the design of these online games, as well as other outside influences, has made spending money on in-game content an essential aspect of the game itself. In the UK, cases have surfaced of children unknowingly spending their parents' money, and their own, to get what they want or need to progress through a game. In one instance, the father of a family with four children, all under 10 years old, bought an £8 FIFA Ultimate Team player pack for them. In the span of three weeks, the children kept spending money on packs, eventually spending £550 ($709.91) altogether, completely emptying their parents' bank account, but never receiving the children's favorite player, Lionel Messi, one of the best players in the game.
The children apologized to their parents and explained that at the time they did not understand how much they were impacting the family's financial situation. There have been other situations where UK children spent £700 ($903.53), £1,000 ($1290.75), £2,000 ($2581.50), and even £3,160 ($4078.77) on microtransactions in various mobile games, usually as a result of being tricked by the game into paying for something in-game or simply not understanding that real money was being taken out of their, or their parents', bank accounts when they bought items in-game. Spending such large amounts of money on microtransactions has devastated some families financially, including some who had to pay a bill full of microtransaction payments with college savings and even money in life savings accounts.
In the Children's Commissioner's study, children reported spending more and more money with each passing year, while also feeling that, because the rewards are completely unknown, they may be wasting money. One of the children that played FIFA in the study said that they spend anywhere from £10 ($12.91) a day to upwards of £300 ($387.23) in one year, sometimes even buying multiple player packs at one time. Some children have also stated that they have seen friends, siblings, and acquaintances who have mental disorders spend all of their birthday money on in-game microtransactions, all while feeling like spending that money has not been a waste despite not receiving any valuable items.
Data
Microtransactions have become increasingly common in many types of video games. Smartphone, console, and PC games have all adopted microtransactions because of their high profitability. Many companies and games, especially smartphone games, have taken on a business model that offers the game for free and relies purely on the success of microtransactions to turn a profit.
Ethics
The collection of this data on consumers, although technically legal in the United States and other jurisdictions outside the EU, can be considered unethical. Companies can sell data about consumers – including their spending, bank information, and preferences – to understand the consumer better overall, making business models for gaming companies safer and more profitable. With microtransactions under a negative spotlight from the gaming community, there may be displeasure from those who are aware that their data is being shared to make microtransactions possible.
Revenue
Data from a variety of sources show that microtransactions can vastly increase a company's profits. Three free-to-play mobile games that made heavy use of the practice, Clash Royale, Clash of Clans, and Game of War, were all in the top five most profitable mobile games of 2016. Microtransactions are also used in larger-budget games, such as Grand Theft Auto V (2013), which generated more revenue through them than through retail sales by the end of 2017. This trend was consistent with many other popular games at the time, leading to the practice being widespread in the 2010s. In its 2021 fiscal year, Activision Blizzard earned $8.8 billion, with the majority of profit generated from microtransactions.
See also
Pay to play
References
Micropayment
Video game controversies
Video game terminology | Microtransaction | Technology | 6,598 |
577,296 | https://en.wikipedia.org/wiki/Mass%20concentration%20%28astronomy%29 | In astronomy, astrophysics and geophysics, a mass concentration (or mascon) is a region of a planet's or moon's crust that contains a large positive gravity anomaly. In general, the word "mascon" can be used as a noun to refer to an excess distribution of mass on or beneath the surface of an astronomical body (compared to some suitable average), such as is found around Hawaii on Earth. However, this term is most often used to describe a geologic structure that has a positive gravitational anomaly associated with a feature (e.g. depressed basin) that might otherwise have been expected to have a negative anomaly, such as the "mascon basins" on the Moon.
Lunar mascons
The Moon is the most gravitationally "lumpy" major body known in the Solar System. Its largest mascons can cause a plumb bob to hang about a third of a degree off vertical, pointing toward the mascon, and increase the force of gravity by one-half percent.
Typical examples of mascon basins on the Moon are the Imbrium, Serenitatis, Crisium and Orientale impact basins, all of which exhibit significant topographic depressions and positive gravitational anomalies. Examples of mascon basins on Mars are the Argyre, Isidis, and Utopia basins. Theoretical considerations imply that a topographic low in isostatic equilibrium would exhibit a slight negative gravitational anomaly. Thus, the positive gravitational anomalies associated with these impact basins indicate that some form of positive density anomaly must exist within the crust or upper mantle that is currently supported by the lithosphere. One possibility is that these anomalies are due to dense mare basaltic lavas, which might reach up to 6 kilometers in thickness for the Moon. While these lavas certainly contribute to the observed gravitational anomalies, uplift of the crust-mantle interface is also required to account for their magnitude. Indeed, some mascon basins on the Moon do not appear to be associated with any signs of volcanic activity. Theoretical considerations in either case indicate that all the lunar mascons are super-isostatic (that is, supported above their isostatic positions). The huge expanse of mare basaltic volcanism associated with Oceanus Procellarum does not possess a positive gravitational anomaly.
Origin of lunar mascons
Since their identification in 1968 by Doppler tracking of the five Lunar Orbiter spacecraft, the origin of the mascons beneath the surface of the Moon has been subject to much debate, but they are now regarded as being the result of the impact of asteroids during the Late Heavy Bombardment.
Effect of lunar mascons on satellite orbits
Lunar mascons alter the local gravity above and around them sufficiently that low and uncorrected lunar orbits of satellites around the Moon are unstable on a timescale of months or years. The small perturbations in the orbits accumulate and eventually distort the orbit enough for the satellite to impact the surface.
Because of its mascons, the Moon has only four "frozen orbit" inclination zones where a lunar satellite can stay in a low orbit indefinitely. Lunar subsatellites were released on two of the last three Apollo crewed lunar landing missions in 1971 and 1972; the subsatellite PFS-2 released from Apollo 16 was expected to stay in orbit for one and a half years, but lasted only 35 days before crashing into the lunar surface since it had to be deployed in a much lower orbit than initially planned. It was only in 2001 that the mascons were mapped and the frozen orbits were discovered.
The Luna 10 orbiter was the first artificial object to orbit the Moon, and it returned tracking data indicating that the lunar gravitational field caused larger-than-expected perturbations, presumably due to "roughness" of the field. The lunar mascons were discovered by Paul M. Muller and William L. Sjogren of the NASA Jet Propulsion Laboratory (JPL) in 1968, from a new analytic method applied to the highly precise navigation data from the uncrewed pre-Apollo Lunar Orbiter spacecraft. This work revealed a consistent 1:1 correlation between very large positive gravity anomalies and depressed circular basins on the Moon. This fact places key limits on models attempting to follow the history of the Moon's geological development and explain the current lunar internal structures.
At that time, one of NASA's highest priority "tiger team" projects was to explain why the Lunar Orbiter spacecraft being used to test the accuracy of Project Apollo navigation were experiencing errors in predicted position of ten times the mission specification (2 kilometers instead of 200 meters). This meant that the predicted landing areas were 100 times as large as those being carefully defined for reasons of safety. Lunar orbital effects principally resulting from the strong gravitational perturbations of the mascons were ultimately revealed as the cause. William Wollenhaupt and Emil Schiesser of the NASA Manned Spacecraft Center in Houston then worked out the "fix" that was first applied to Apollo 12 and permitted its landing within 163 m (535 ft) of the target, the previously landed Surveyor 3 spacecraft.
Mapping
In May 2013 a NASA study was published with results from the twin GRAIL probes, that mapped mass concentrations on Earth's Moon.
China's Chang'e 5-T1 mission also mapped the Moon's mascons.
Earth's mascons
Mascons on Earth are often measured by satellite gravimetry, for example with the GRACE satellites.
Mascons are often reported in terms of a derived physical quantity called "equivalent water thickness", "equivalent water height", or "water equivalent height", obtained by dividing the surface mass density redistribution by the density of water.
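In symbols (a standard formulation given here as a supplement; the notation is not from the text above), with Δσ the surface mass density anomaly in kg/m²:

```latex
\mathrm{EWH} = \frac{\Delta\sigma}{\rho_w},
\qquad \rho_w \approx 1000~\mathrm{kg\,m^{-3}}
```

So, for example, a surface mass anomaly of 50 kg/m² corresponds to an equivalent water height of 0.05 m, i.e. 5 cm.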
Mercurian mascons
Mascons exist on Mercury. They were mapped by the MESSENGER spacecraft, which orbited the planet from 2011 to 2015; two notable examples are at Caloris Planitia and Sobkou Planitia.
See also
References
Further reading
Gravimetry
Geophysics
Lunar science | Mass concentration (astronomy) | Physics | 1,234 |
22,947,099 | https://en.wikipedia.org/wiki/Service-oriented%20software%20engineering | Service-oriented Software Engineering (SOSE), also referred to as service engineering, is a software engineering methodology focused on the development of software systems by composition of reusable services (service-orientation) often provided by other service providers. Since it involves composition, it shares many characteristics of component-based software engineering, the composition of software systems from reusable components, but it adds the ability to dynamically locate necessary services at run-time. These services may be provided by others as web services, but the essential element is the dynamic nature of the connection between the service users and the service providers.
Service-oriented interaction pattern
There are three types of actors in a service-oriented interaction: service providers, service users and service registries. They participate in a dynamic collaboration which can vary from time to time. Service providers are software services that publish their capabilities and availability with service registries. Service users are software systems (which may be services themselves) that accomplish some task through the use of services provided by service providers. Service users use service registries to discover and locate the service providers they can use. This discovery and location occurs dynamically when the service user requests them from a service registry.
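The interaction pattern just described can be sketched in a few lines of code. The following Python example is illustrative only – the class and method names are invented for this sketch and are not part of any SOA standard – but it shows the essential dynamic binding: a provider publishes a capability with a registry, and a user discovers and invokes it at run-time rather than being wired to it at build time.

```python
# Minimal illustrative sketch of the provider/registry/user pattern described
# above. All names here are invented for illustration.
from typing import Callable, Dict

class ServiceRegistry:
    """Maps published capability names to provider endpoints."""
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[str], str]] = {}

    def publish(self, capability: str, endpoint: Callable[[str], str]) -> None:
        # A service provider registers its capability and availability.
        self._services[capability] = endpoint

    def discover(self, capability: str) -> Callable[[str], str]:
        # A service user locates a suitable provider dynamically at run-time.
        return self._services[capability]

registry = ServiceRegistry()
registry.publish("shout", lambda text: text.upper())   # stand-in provider logic

service = registry.discover("shout")                   # dynamic discovery
print(service("hello"))                                # -> HELLO
```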
See also
Service-oriented architecture (SOA)
Service-oriented analysis and design
Separation of concerns
Component-based software engineering
Web services
References
Further reading
Breivold, H.P. and Larsson, M., "Component-Based and Service-Oriented Software Engineering: Key Concepts and Principles", in 33rd EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), 2007.
Stojanović, Zoran, A Method for Component-Based and Service-Oriented Software Systems Engineering. Doctoral Dissertation, Delft University of Technology, The Netherlands.
External links
University of Notre Dame Service-oriented Software Engineering Group homepage
Lancaster University Component & Service-oriented Software Engineering project homepage
Software engineering | Service-oriented software engineering | Technology,Engineering | 382 |
5,495,025 | https://en.wikipedia.org/wiki/280%20%28number%29 | 280 (two hundred [and] eighty) is the natural number after 279 and before 281.
In mathematics
280 is the denominator of the eighth harmonic number and an octagonal number. It is the smallest octagonal number that is half of another octagonal number.
There are 280 plane trees with ten nodes.
As a consequence of this, 18 people around a round table can shake hands with each other in non-crossing ways, in 280 different ways (counting arrangements that differ only by a rotation as the same).
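The first two facts above are easy to verify directly; the following short Python check (a supplement to the article, using only the standard library) confirms that the eighth harmonic number is 761/280 and that both 280 and 560 = 2 × 280 are octagonal numbers O_n = n(3n − 2):

```python
from fractions import Fraction

# Eighth harmonic number: H_8 = 1 + 1/2 + ... + 1/8 = 761/280.
h8 = sum(Fraction(1, k) for k in range(1, 9))
assert h8 == Fraction(761, 280)

# Octagonal numbers O_n = n(3n - 2): 280 = O_10 and 560 = O_14 = 2 * 280.
octagonals = {n * (3 * n - 2) for n in range(1, 20)}
assert 280 in octagonals and 2 * 280 in octagonals
```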
References
Integers | 280 (number) | Mathematics | 100 |
10,632,889 | https://en.wikipedia.org/wiki/Worm%27s-eye%20view | A worm's-eye view is a description of the view of a scene from below that a worm might have if it could see. It is the opposite of a bird's-eye view.
It can give the impression that an object is tall and strong while the viewer is childlike or powerless.
A worm's-eye view commonly uses three-point perspective, with one vanishing point on top, one on the left, and one on the right.
See also
Bird's-eye view
Plan (drawing)
References
Methods of representation
Technical drawing
Cartography | Worm's-eye view | Engineering | 116 |
5,630,800 | https://en.wikipedia.org/wiki/Distributed%20amplifier | Distributed amplifiers are circuit designs that incorporate transmission line theory into traditional amplifier design to obtain a larger gain-bandwidth product than is realizable by conventional circuits.
History
The design of the distributed amplifiers was first formulated by William S. Percival in 1936. In that year Percival proposed a design by which the transconductances of individual vacuum tubes could be added linearly without lumping their element capacitances at the input and output, thus arriving at a circuit that achieved a gain-bandwidth product greater than that of an individual tube. Percival's design did not gain widespread awareness however, until a publication on the subject was authored by Ginzton, Hewlett, Jasberg, and Noe in 1948. It is to this later paper that the term distributed amplifier can actually be traced. Traditionally, DA design architectures were realized using vacuum tube technology.
Current technology
More recently, III-V semiconductor technologies, such as GaAs and InP, have been used. These have superior performance resulting from wider bandgaps, higher electron mobility, higher saturated electron velocity, higher breakdown voltages and higher-resistivity substrates. The latter contributes much to the availability of higher quality-factor (Q-factor or simply Q) integrated passive devices in the III-V semiconductor technologies.
To meet the marketplace demands on cost, size, and power consumption of monolithic microwave integrated circuits (MMICs), research continues in the development of mainstream digital bulk-CMOS processes for such purposes. The continuous scaling of feature sizes in current IC technologies has enabled microwave and mm-wave CMOS circuits to directly benefit from the resulting increased unity-gain frequencies of the scaled technology. This device scaling, along with the advanced process control available in today's technologies, has recently made it possible to reach a transition frequency (ft) of 170 GHz and a maximum oscillation frequency (fmax) of 240 GHz in a 90 nm CMOS process.
Theory of operation
The operation of the DA can perhaps be most easily understood when explained in terms of the traveling-wave tube amplifier (TWTA). The DA consists of a pair of transmission lines with characteristic impedances of Z0 independently connecting the inputs and outputs of several active devices. An RF signal is thus supplied to the section of transmission line connected to the input of the first device. As the input signal propagates down the input line, the individual devices respond to the forward traveling input step by inducing an amplified complementary forward traveling wave on the output line. This assumes the delays of the input and output lines are made equal through selection of propagation constants and lengths of the two lines and as such the output signals from each individual device sum in phase. Terminating resistors Zg and Zd are placed to minimize destructive reflections.
The transconductive gain of each device is gm and the output impedance seen by each transistor is half the characteristic impedance of the transmission line, so the overall voltage gain of the DA is:
Av = ½ n·gm·Z0, where n is the number of stages.
Neglecting losses, the gain demonstrates a linear dependence on the number of devices (stages). Unlike the multiplicative nature of a cascade of conventional amplifiers, the DA demonstrates an additive quality. It is this synergistic property of the DA architecture that makes it possible for it to provide gain at frequencies beyond that of the unity-gain frequency of the individual stages. In practice, the number of stages is limited by the diminishing input signal resulting from attenuation on the input line. Means of determining the optimal number of stages are discussed below. Bandwidth is typically limited by impedance mismatches brought about by frequency dependent device parasitics.
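As a worked illustration of the gain formula above (representative values chosen for this sketch, not taken from any particular design):

```python
import math

# Illustrative values: a 4-stage DA with 50 mS devices on 50-ohm lines.
n, gm, Z0 = 4, 50e-3, 50.0     # stages, transconductance [S], line impedance [ohm]
Av = 0.5 * n * gm * Z0         # Av = (1/2) * n * gm * Z0 = 5.0
print(20 * math.log10(Av))     # ~13.98 dB of voltage gain
```

Doubling the number of stages (losses aside) adds the same increment of gain again, rather than multiplying it as a cascade of conventional stages would.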
The DA architecture introduces delay in order to achieve its broadband gain characteristics. This delay is a desired feature in the design of another distributive system called the distributed oscillator.
Lumped elements
Delay lines are made of lumped elements of L and C. The parasitic L and the C from the transistors are used for this and usually some L is added to raise the line impedance. Because of the Miller effect in the common source amplifier the input and the output transmission line are coupled. For example, for voltage inverting and current amplifying the input and the output form a shielded balanced line. The current is increasing in the output transmission line with every subsequent transistor, and therefore less and less L is added to keep the voltage constant and more and more extra C is added to keep the velocity constant. This C can come from parasitics of a second stage. These delay lines do not have a flat dispersion near their cut off, so it is important to use the same L-C periodicity in the input and the output. If inserting transmission lines, input and output will disperse away from each other.
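For the lumped L-C lines described above, the standard constant-k relations (quoted here as a supplement, with L and C taken per section) connect the component values to the line properties:

```latex
Z_0 = \sqrt{\frac{L}{C}}, \qquad t_d = \sqrt{LC}, \qquad f_c = \frac{1}{\pi\sqrt{LC}}
```

This is consistent with the text's point: adding series L raises the line impedance Z0, while the per-section delay √(LC) must be kept equal on the input and output lines.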
For a distributed amplifier the input is fed in series into the amplifiers and taken out of them in parallel. To avoid losses at the input, no input signal may be allowed to leak through. This is avoided by using a balanced input and output, also known as a push–pull amplifier. All signals that leak through the parasitic capacitances then cancel. The output is combined in a delay line with decreasing impedance. For narrow-band operation other methods of phase-matching are possible, which avoid feeding the signal through multiple coils and capacitors. This may be useful for power amplifiers.
The single amplifiers can be of any class. There may be some synergy between distributed class-E/F amplifiers and some phase-matching methods. Only the fundamental frequency is used in the end, so this is the only frequency that travels through the delay-line version.
Because of the Miller effect a common-source transistor acts as a capacitor (non-inverting) at high frequencies and has an inverting transconductance at low frequencies. The channel of the transistor has three dimensions. One dimension, the width, is chosen depending on the current needed. The trouble is that, for a single transistor, parasitic capacitance and gain both scale linearly with the width. For the distributed amplifier the capacitance – that is, the width – of each transistor is chosen based on the highest frequency, and the width needed for the current is split across all the transistors.
Applications
Note that the termination resistors (Zg and Zd above) are usually not used in CMOS, but the losses due to them are small in typical applications. In solid-state power amplifiers, multiple discrete transistors are often used for power reasons anyway. If all transistors are driven in a synchronized fashion, a very high gate-drive power is needed. For frequencies at which small and efficient coils are available, distributed amplifiers are more efficient.
Voltage can be amplified by a common-gate transistor, which shows no Miller effect and no unity-gain frequency cutoff. Adding this yields the cascode configuration. The common-gate configuration is incompatible with CMOS; it adds a resistor, which means loss, and is more suited for broadband than for high-efficiency applications.
Radio
Acousto-optic modulator
Time-to-digital converter
See also
Gunn diode – a device without any parasitic C or L, very suitable for broadband applications
Regenerative circuit – a circuit using the parasitics of a single transistor for a high-frequency narrow-band amplifier
Armstrong oscillator – a circuit using the parasitics of a single transistor for a high-frequency narrow-band oscillator
References
External links
Microwaves101.com – Distributed amplifiers
Electronic amplifiers
Distributed element circuits | Distributed amplifier | Technology,Engineering | 1,539 |
413,430 | https://en.wikipedia.org/wiki/Jan%20Oort | Jan Hendrik Oort ( or ; 28 April 1900 – 5 November 1992) was a Dutch astronomer who made significant contributions to the understanding of the Milky Way and who was a pioneer in the field of radio astronomy. The New York Times called him "one of the century's foremost explorers of the universe"; the European Space Agency website describes him as "one of the greatest astronomers of the 20th century" and states that he "revolutionised astronomy through his ground-breaking discoveries." In 1955, Oort's name appeared in Life magazine's list of the 100 most famous living people. He has been described as "putting the Netherlands in the forefront of postwar astronomy".
Oort determined that the Milky Way rotates and overturned the idea that the Sun was at its center. He also postulated the existence of the mysterious invisible dark matter in 1932, which is believed to make up roughly 84.5% of the total mass in the Universe and whose gravitational pull causes "the clustering of stars into galaxies and galaxies into connecting strings of galaxies". He discovered the galactic halo, a group of stars orbiting the Milky Way but outside the main disk. Additionally Oort is responsible for a number of important insights about comets, including the realization that their orbits "implied there was a lot more solar system than the region occupied by the planets."
The Oort cloud, the Oort constants, Oort Limit, an impact crater on Pluto (Oort), and the asteroid 1691 Oort were all named after him.
Early life and education
Oort was born in Franeker, a small town in the Dutch province of Friesland, on April 28, 1900. He was the second son of Abraham Hermanus Oort, a physician, who died on May 12, 1941, and Ruth Hannah Faber, who was the daughter of Jan Faber and Henrietta Sophia Susanna Schaaii, and who died on November 20, 1957. Both of his parents came from families of clergymen; his paternal grandfather was a Protestant clergyman with liberal ideas who "was one of the founders of the more liberal Church in Holland" and who "was one of the three people who made a new translation of the Bible into Dutch." The reference is to Henricus Oort (1836–1927), who was the grandson of a famous Rotterdam preacher and, through his mother, Dina Maria Blom, the grandson of theologian Abraham Hermanus Blom, a "pioneer of modern biblical research". Several of Oort's uncles were pastors, as was his maternal grandfather. "My mother kept up her interests in that, at least in the early years of her marriage", he recalled. "But my father was less interested in Church matters."
In 1903 Oort's parents moved to Oegstgeest, near Leiden, where his father took charge of the Endegeest Psychiatric Clinic. Oort's father "was a medical director in a sanitorium for nervous illnesses. We lived in the director's house of the sanitorium, in a small forest which was very nice for the children, of course, to grow up in." Oort's younger brother, John, became a professor of plant diseases at the University of Wageningen. In addition to John, Oort had two younger sisters and an elder brother who died of diabetes when he was a student.
Oort attended primary school in Oegstgeest and secondary school in Leiden, and in 1917 went to Groningen University to study physics. He later said that he had become interested in science and astronomy during his high-school years, and conjectured that his interest was stimulated by reading Jules Verne. His one hesitation about studying pure science was the concern that it "might alienate one a bit from people in general", as a result of which "one might not develop the human factor sufficiently." But he overcame this concern and ended up discovering that his later academic positions, which involved considerable administrative responsibilities, afforded a good deal of opportunity for social contact.
Oort chose Groningen partly because a well known astronomer, Jacobus Cornelius Kapteyn, was teaching there, although Oort was unsure whether he wanted to specialize in physics or astronomy. After studying with Kapteyn, Oort decided on astronomy. "It was the personality of Professor Kapteyn which decided me entirely", he later recalled. "He was quite an inspiring teacher and especially his elementary astronomy lectures were fascinating." Oort began working on research with Kapteyn early in his third year. According to Oort one professor at Groningen who had considerable influence on his education was physicist Frits Zernike.
After taking his final exam in 1921, Oort was appointed assistant at Groningen, but in September 1922, he went to the United States to do graduate work at Yale and to serve as an assistant to Frank Schlesinger of the Yale Observatory.
Career
At Yale, Oort was responsible for making observations with the Observatory's zenith telescope. "I worked on the problem of latitude variation", he later recalled, "which is quite far away from the subjects I had so far been studying." He later considered his experience at Yale useful as he became interested in "problems of fundamental astronomy that [he] felt was capitalized on later, and which certainly influenced [his] future lectures in Leiden." Personally, he "felt somewhat lonesome in Yale", but also said that "some of my very best friends were made in these years in New Haven."
Early discoveries
In 1924, Oort returned to the Netherlands to work at Leiden University, where he served as a research assistant, becoming Conservator in 1926, Lecturer in 1930, and Professor Extraordinary in 1935. In 1926, he received his doctorate from Groningen with a thesis on the properties of high-velocity stars. The next year, Swedish astronomer Bertil Lindblad proposed that the rate of rotation of stars in the outer part of the galaxy decreased with distance from the galactic core, and Oort, who later said that he believed it was his colleague Willem de Sitter who had first drawn his attention to Lindblad's work, realized that Lindblad was correct and that the truth of his proposition could be demonstrated observationally. Oort provided two formulae that described galactic rotation; the two constants that figured in these formulae are now known as "Oort's constants". Oort "argued that just as the outer planets appear to us to be overtaken and passed by the less distant ones in the solar system, so too with the stars if the Galaxy really rotated", according to the Oxford Dictionary of Scientists. He "was finally able to calculate, on the basis of the various stellar motions, that the Sun was some 30,000 light-years from the center of the Galaxy and took about 225 million years to complete its orbit. He also showed that stars lying in the outer regions of the galactic disk rotated more slowly than those nearer the center. The Galaxy does not therefore rotate as a uniform whole but exhibits what is known as 'differential rotation'."
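The two constants are today written in a standard form (the notation below is the modern convention, stated here as a supplement rather than quoted from Oort's 1927 papers), where V(R) is the circular rotation speed at galactocentric radius R, evaluated at the Sun's radius R0:

```latex
A = \frac{1}{2}\left(\frac{V_0}{R_0} - \left.\frac{\mathrm{d}V}{\mathrm{d}R}\right|_{R_0}\right),
\qquad
B = -\frac{1}{2}\left(\frac{V_0}{R_0} + \left.\frac{\mathrm{d}V}{\mathrm{d}R}\right|_{R_0}\right)
```

A measures the local shear of the galactic rotation and B its local vorticity; for rigid-body rotation A would vanish, so a nonzero measured A is direct evidence of the differential rotation described above.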
These early discoveries by Oort about the Milky Way overthrew the Kapteyn system, named after his mentor, which had envisioned a galaxy that was symmetrical around the Sun. As Oort later noted, "Kapteyn and his co-workers had not realized that the absorption in the galactic plane was as bad as it turned out to be." Until Oort began his work, he later recalled, "the Leiden Observatory had been concentrating entirely on positional astronomy, meridian circle work and some proper motion work. But no astrophysics or anything that looked like that. No structure of the galaxy, no dynamics of the galaxy. There was no one else in Leiden who was interested in these problems in which I was principally interested, so the first years I worked more or less by myself in these projects. De Sitter was interested, but his main line of research was celestial mechanics; at that time the expanding universe had moved away from his direct interest." As the European Space Agency states, Oort "sh[ook] the scientific world by demonstrating that the Milky Way rotates like a giant 'Catherine Wheel'." He showed that all the stars in the galaxy were "travelling independently through space, with those nearer the center rotating much faster than those further away."
This breakthrough made Oort famous in the world of astronomy. In the early 1930s he received job offers from Harvard and Columbia University, but chose to stay at Leiden, although he did spend half of 1932 at the Perkins Observatory, in Delaware, Ohio.
In 1934, Oort became assistant to the director of Leiden Observatory; the next year he became General Secretary of the International Astronomical Union (IAU), a post he held until 1948; in 1937 he was elected to the Royal Academy. In 1939, he spent half a year in the U.S., and became interested in the Crab Nebula, concluding in a paper, written with American astronomer Nicholas Mayall, that it was the result of a supernova explosion.
Nazi invasion of Netherlands
In 1940, Nazi Germany invaded the Netherlands. Soon after, the occupying regime dismissed all Jewish professors from Leiden University and other universities. "Among the professors who were dismissed", Oort later recalled, "was a very famous … professor of law by the name of Meyers. On the day when he got the letter from the authorities that he could no longer teach his classes, the dean of the faculty of law went into his class … and delivered a speech in which he started by saying, 'I won't talk about his dismissal and I shall leave the people who did this, below us, but will concentrate on the greatness of the man dismissed by our aggressors.'"
This speech (26 November 1940) made such an impression on all his students that on leaving the auditorium they defiantly sang the anthem of the Netherlands and went on strike. Oort was present for the lecture and was greatly impressed. This occasion formed the beginning of the active resistance in Holland. The speech by Rudolph Cleveringa, the dean of the faculty of Law and former graduate student of professor Meijers, was widely circulated during the rest of the war by the resistance groups. Oort was in a little group of professors in Leiden who came together regularly and discussed the problems the university faced in view of the German occupation. Most of the members of this group were put in hostage camps soon after the speech by Cleveringa. Oort refused to collaborate with the occupiers, "and so we went down to live in the country for the rest of the war." Resigning from the Royal Academy, from his professorial post at Leiden, and from his position at the Observatory, Oort took his family to Hulshorst, a quiet village in the province of Gelderland, where they sat out the war. In Hulshorst, he began writing a book on stellar dynamics.
Oort's radio astronomy
Before the war was over, he initiated, in collaboration with a Utrecht University student, Hendrik van de Hulst, a project that eventually succeeded, in 1951, in detecting the 21-centimeter spectral line emitted by interstellar hydrogen at radio frequencies. Oort and his colleagues also made the first investigation of the central region of the Galaxy, and discovered that "the 21-centimeter radio emission passed un-absorbed through the gas clouds that had hidden the center from optical observation. They found a huge concentration of mass there, later identified as mainly stars, and also discovered that much of the gas in the region was moving rapidly outward away from the center." In June 1945, after the end of the war, Oort returned to Leiden, took over as director of the Observatory, and became Full Professor of Astronomy. During this immediate postwar period, he led the Dutch group that built radio telescopes at Radio Kootwijk, Dwingeloo, and Westerbork and used the 21-centimeter line to map the Milky Way, including the large-scale spiral structure, the Galactic Center, and gas cloud motions. Oort was helped in this project by the Dutch telecommunications company, PTT, which, he later explained, "had under their care all the radar equipment that was left behind by the Germans on the coast of Holland. This radar equipment consisted in part of reflecting telescopes of 7 1/2 meter aperture.... Our radio astronomy was really started with the aid of one of these instruments… it was in Kootwijk that the first map of the Galaxy was made." For a brief period, before the completion of the Jodrell Bank telescope, the Dwingeloo instrument was the largest of its kind on Earth.
It has been written that "Oort was probably the first astronomer to realize the importance" of radio astronomy. "In the days before radio telescopes," one source notes, "Oort was one of the few scientists to realise the potential significance of using radio waves to search the heavens. His theoretical research suggested that vast clouds of hydrogen lingered in the spiral arms of the Galaxy. These molecular clouds, he predicted, were the birthplaces of stars." These predictions were confirmed by measurements made at the new radio observatories at Dwingeloo and Westerbork. Oort later said that "it was Grote Reber's work which first impressed me and convinced me of the unique importance of radio observations for surveying the galaxy." Just before the war, Reber had published a study of galactic radio emissions. Oort later commented, "The work of Grote Reber made it quite clear [radio astronomy] would be a very important tool for investigating the Galaxy, just because it could investigate the whole disc of the galactic system unimpeded by absorption." Oort's work in radio astronomy is credited by colleagues with putting the Netherlands in the forefront of postwar astronomy. Oort also investigated the source of the light from the Crab Nebula, finding that it was polarized, and probably produced by synchrotron radiation, confirming a hypothesis by Iosif Shklovsky.
Comet studies
Oort went on to study comets, about which he formulated a number of revolutionary hypotheses. He hypothesized that the Solar System is surrounded by a massive cloud consisting of billions of comets, many of them "long-period" comets that originate in a cloud far beyond the orbits of Neptune and Pluto. This cloud is now known as the Oort Cloud. He also realized that these external comets, from beyond Pluto, can "become trapped into tighter orbits by Jupiter, and become periodic comets, like Halley's comet." According to one source, "Oort was one of the few people to have seen Comet Halley on two separate apparitions. At the age of 10, he was with his father on the shore at Noordwijk, Netherlands, when he first saw the comet. In 1986, 76 years later, he went up in a plane and was able to see the famous comet once more."
In 1951 Oort and his wife spent several months in Princeton and Pasadena, an interlude that led to a paper by Oort and Lyman Spitzer on the acceleration of interstellar clouds by O-type stars. He went on to study high-velocity clouds. Oort served as director of the Leiden Observatory until 1970. After his retirement, he wrote comprehensive articles on the galactic center and on superclusters and published several papers on the quasar absorption lines, supporting Yakov Zel'dovich's pancake model of the universe. He also continued researching the Milky Way and other galaxies and their distribution until shortly before his death at 92.
One of Oort's strengths, according to one source, was his ability to "translate abstruse mathematical papers into physical terms," as exemplified by his translation of the difficult mathematical terms of Lindblad's theory of differential galactic rotation into a physical model. Similarly, he "derived the existence of the comet cloud on the outskirts of the Solar System from the observations, using the mathematics needed in dynamics, but then deduced the origin of this cloud using general physical arguments and a minimum of mathematics."
Personal life
In 1927, Oort married Johanna Maria (Mieke) Graadt van Roggen (1906–1993). They had met at a university celebration at Utrecht, where Oort's brother was studying biology at the time. Oort and his wife had two sons, Coenraad (Coen) and Abraham, and a daughter, Marijke. Abraham became a professor of climatology at Princeton University.
According to the website of Leiden University, Oort was very interested in and knowledgeable about art. "[W]hen visiting another country he would always try to take some time off to visit the local museums and exhibitions…and in the fifties served for some years as chairman of the pictorial arts committee of the Leiden Academical Arts Centre, which had among other things the task of organizing expositions".
"Colleagues remembered him as a tall, lean and courtly man with a genial manner," reported his New York Times obituary.
Writings
An incomplete list:
Oort, J.H., "Some Peculiarities in the Motion of Stars of High Velocity," Bull. Astron. Inst. Neth. 1, 133–37 (1922).
Oort, J.H., "The Stars of High Velocity," (Thesis, Groningen University) Publ. Kapteyn Astr. Lab, Groningen, 40, 1–75 (1926).
Oort, Jan H., "Asymmetry in the Distribution of Stellar Velocities," Observatory 49, 302–04 (1926).
Oort, J.H., "Non-Light-Emitting Matter in the Stellar System," public lecture of 1926, reprinted in The Legacy of J. C. Kapteyn, ed. by P. C. van der Kruit and K. van Berkel (Kluwer, Dordrecht, 2000) [abstract].
Oort, J.H., "Observational Evidence Confirming Lindblad's Hypothesis of a Rotation of the Galactic System," Bull. Astron. Inst. Neth. 3, 275–82 (1927).
Oort, J.H., "Investigations Concerning the Rotational Motion of the Galactic System together with New Determinations of Secular Parallaxes, Precession and Motion of the Equinox (Errata: 4, 94)," Bull. Astron. Inst. Neth. 4, 79–89 (1927).
Oort, J.H., "Dynamics of the Galactic System in the Vicinity of the Sun," Bull. Astron. Inst. Neth. 4, 269–84 (1928).
Oort, J.H., "Some Problems Concerning the Distribution of Luminosities and Peculiar Velocities of Extragalactic Nebulae," Bull. Astron. Inst. Neth. 6, 155–59 (1931).
Oort, J.H., "The Force Exerted by the Stellar System in the Direction Perpendicular to the Galactic Plane and Some Related Problems," Bull. Astron. Inst. Neth. 6, 249–87 (1932).
Oort, J.H., "A Redetermination of the Constant of Precession, the Motion of the Equinox and the Rotation of the Galaxy from Faint Stars Observed at the McCormick Observatory," 4, 94)," Bull. Astron. Inst. Neth. 8, 149–55 (1937).
Oort, J.H., "Absorption and Density Distribution in the Galactic System," Bull. Astron. Inst. Neth. 8, 233–64 (1938).
Oort, J.H., "Stellar Motions," MNRAS 99, 369–84 (1939).
Oort, J.H. "Some Problems Concerning the Structure and Dynamics of the Galactic System and the Elliptical Nebulae NGC 3115 and 4494," Ap.J. 91, 273–306 (1940).
Mayall, N.U. & J.H. Oort, "Further Data Bearing on the Identification of the Crab Nebula with the Supernova of 1054 A.D. Part II: The Astronomical Aspects," PASP 54, 95–104 (1942).
Oort, J. H., & H.C. van de Hulst, "Gas and Smoke in Interstellar Space," Bull. Astr. Inst. Neth. 10, 187–204 (1946).
Oort, J.H., "Some Phenomena Connected with Interstellar Matter (1946 George Darwin Lecture)," MNRAS 106, 159–79 (1946) [George Darwin]. Lecture.
Oort, J.H., "The Structure of the Cloud of Comets Surrounding the Solar System and a Hypothesis Concerning its Origin," Bull. Astron. Inst. Neth. 11, 91–110 (1950).
Oort, J.H., "Origin and Development of Comets (1951 Halley Lecture)," Observatory 71, 129–44 (1951) Halley Lecture.
Oort, J.H. & M. Schmidt, "Differences between New and Old Comets," Bull. Astron. Inst. Neth. 11, 259–70 (1951).
Westerhout, G. & J.H. Oort, "A Comparison of the Intensity Distribution of Radio-frequency Radiation with a Model of the Galactic System," Bull. Astron. Inst. Neth. 11, 323–33 (1951).
Morgan, H.R. & J.H. Oort, "A New Determination of the Precession and the Constants of Galactic Rotation," Bull. Astron. Inst. Neth. 11, 379–84 (1951).
Oort, J.H. "Problems of Galactic Structure," Ap.J. 116, 233–250 (1952) [Henry Norris Russell Lecture, 1951].
Oort, J. H., "Outline of a Theory on the Origin and Acceleration of Interstellar Clouds and O Associations," Bull. Astr. Inst. Neth. 12, 177–86 (1954).
van de Hulst, H.C., C.A. Muller, & J.H. Oort, "The spiral structure of the outer part of the Galactic System derived from the hydrogen emission at 21 cm wavelength," Bull. Astr. Inst. Neth. 12, 117–49 (1954).
van Houten, C.J., J.H. Oort, & W.A. Hiltner, "Photoelectric Measurements of Extragalactic Nebulae," Ap.J. 120, 439–53 (1954).
Oort, Jan H. & Lyman Spitzer, Jr., "Acceleration of Interstellar Clouds by O-Type Stars," Ap.J. 121, 6–23 (1955).
Oort, J.H., "Measures of the 21-cm Line Emitted by Interstellar Hydrogen," Vistas in Astronomy. 1, 607–16 (1955).
Oort, J.H., "A New Southern Hemisphere Observatory," Sky & Telescope 15, 163 (1956).
Oort, J. H. & Th. Walraven, "Polarization and Composition of the Crab Nebula," Bull. Astr. Inst. Neth. 12, 285–308 (1956).
Oort, J.H., "Die Spiralstruktur des Milchstraßensystems," Mitt. Astr. Ges. 7, 83–87 (1956).
Oort, J.H., F.J. Kerr, & G. Westerhout, "The Galactic System as a Spiral Nebula," MNRAS 118, 379–89 (1958).
Oort, J.H., "Summary – From the Astronomical Point of View," in Ricerche Astronomiche, Vol. 5, Specola Vaticana, Proceedings of a Conference at Vatican Observatory, Castel Gandolfo, May 20–28, 1957, ed. by D.J.K. O'Connell (North Holland, Amsterdam & Interscience, NY, 1958), 507–29.
Oort, Jan H., "Radio-frequency Studies of Galactic Structure," Handbuch der Physik vol. 53, 100–28 (1959).
Oort, J.H., "A Summary and Assessment of Current 21-cm Results Concerning Spiral and Disk Structures in Our Galaxy," in Paris Symposium on Radio Astronomy, IAU Symposium no. 9 and URSI Symposium no. 1, held 30 July – 6 August 1958, ed. by R.N. Bracewell (Stanford University Press, Stanford, CA, 1959), 409–15.
Rougoor, G. W. & J.H. Oort, "Neutral Hydrogen in the Central Part of the Galactic System," in Paris Symposium on Radio Astronomy, IAU Symposium no. 9 and URSI Symposium no. 1, held 30 July – 6 August 1958, ed. by R.N. Bracewell (Stanford University Press, Stanford, CA, 1959), pp. 416–22.
Oort, J. H. & G. van Herk, "Structure and dynamics of Messier 3," Bull. Astr. Inst. Neth. 14, 299–321 (1960).
Oort, J. H., "Note on the Determination of Kz and on the Mass Density Near the Sun," Bull. Astr. Inst. Neth. 15, 45–53 (1960).
Rougoor, G.W. & J.H. Oort, "Distribution and Motion of Interstellar Hydrogen in the Galactic System with Particular Reference to the Region within 3 Kiloparsecs of the Center," Proc. Natl. Acad. Sci. 46, 1–13 (1960).
Oort, J.H. & G.W. Rougoor, "The Position of the Galactic Centre," MNRAS 121, 171–73 (1960).
Oort, J.H., "The Galaxy," IAU Symposium 20, 1–9 (1964).
Oort, J.H. "Stellar Dynamics," in A. Blaauw & M. Schmidt, eds., Galactic Structure (Univ. of Chicago Press, Chicago, 1965), pp. 455–512.
Oort, J. H., "Possible Interpretations of the High-Velocity Clouds," Bull. Astr. Inst. Neth. 18, 421–38 (1966).
Oort, J. H., "Infall of Gas from Intergalactic Space," Nature 224, 1158–63 (1969).
Oort, J.H., "The Formation of Galaxies and the Origin of the High-Velocity Hydrogen.," Astronomy & Astrophysics 7, 381–404 (1970).
Oort, J.H., "The Density of the Universe," Astronomy & Astrophysics 7, 405 (1970).
Oort, J.H., "Galaxies and the Universe," Science 170, 1363–70 (1970).
van der Kruit, P.C., J.H. Oort, & D.S. Mathewson, "The Radio Emission of NGC 4258 and the Possible Origin of Spiral Structure," Astronomy & Astrophysics 21, 169–84 (1972).
Oort, J.H., "The Development of our Insight into the Structure of the Galaxy between 1920 and 1940," Ann. NY Acad. Sci. 198, 255–66 (1972).
Oort, Jan H. "On the Problem of the Origin of Spiral Structure," Mitteilungen der AG 32, 15–31 (1973) [Karl Schwarzschild Lecture, 1972].
Oort, J.H. & L. Plaut, "The Distance to the Galactic Centre Derived from RR Lyrae Variables, the Distribution of these Variables in the Galaxy's Inner Region and Halo, and A Rediscussion of the Galactic Rotation Constants," Astronomy & Astrophysics 41, 71–86 (1975).
Strom, R. G., G.K. Miley, & J. Oort, "Giant Radio Galaxies," Sci. Amer. 233, 26 (1975).
Pels, G., J.H. Oort, & H.A. Pels-Kluyver, "New Members of the Hyades Cluster and a Discussion of its Structure," Astronomy & Astrophysics 43, 423–41 (1975).
Rubin, Vera C., W. Kent Ford, Jr., Charles J. Peterson, & J.H. Oort, "New Observations of the NGC 1275 Phenomenon," Ap.J. 211, 693–96 (1977).
Oort, J.H., "The Galactic Center," Annual Review of Astronomy & Astrophysics 15, 295–362 (1977).
Oort, J.H., "Superclusters and Lyman α Absorption Lines in Quasars," Astronomy & Astrophysics 94, 359–64 (1981).
Oort, J.H., H. Arp, & H. de Ruiter, "Evidence for the Location of Quasars in Superclusters," Astronomy & Astrophysics 95, 7–13 (1981).
Oort, J.H., "Superclusters," Annual Review of Astronomy & Astrophysics 21, 373–428 (1983).
Oort, J.H., "Structure of the Universe," in Early Evolution of the Universe and its Present Structure; Proceedings of the Symposium, Kolymbari, Greece, August 30 – September 2, 1982, (Reidel, Dordrecht & Boston, 1983), 1–6.
Oort, Jan H. "The Origin and Dissolution of Comets (1986 Halley Lecture)" Observatory 106, 186–93 (1986).
Oort, Jan H. "Origin of Structure in the Universe," Publ. Astron. Soc. Jpn. 40, 1–14 (1988).
Oort, J.H., "Questions Concerning the Large-scale Structure of the Universe," in Problems in Theoretical Physics and Astrophysics: Collection of Articles in Celebration of the 70th Birthday of V. L. Ginzburg (Izdatel'stvo Nauka, Moscow, 1989), pp. 325–37.
Oort, J.H., "Orbital Distribution of Comets," in W.F. Huebner, ed., Physics and Chemistry of Comets (Springer-Verlag, 1990), pp. 235–44 (1990).
Oort, J.H., "Exploring the Nuclei of Galaxies," Mercury 21, 57 (1992).
Oort, J.H., "Non-Light-Emitting Matter in the Stellar System," public lecture of 1926, reprinted in The Legacy of J. C. Kapteyn, ed. by P. C. van der Kruit and K. van Berkel (Kluwer, Dordrecht, 2000) [abstract].
A few of Oort's discoveries
In 1924, Oort discovered the galactic halo, a group of stars orbiting the Milky Way but outside the main disk.
In 1927, he calculated that the center of the Milky Way was 5,900 parsecs (19,200 light years) from the Earth in the direction of the constellation Sagittarius.
In 1932, by measuring the motions of stars in the Milky Way, he was the first to find evidence for dark matter: he showed that the mass of the galactic plane must be greater than the mass of the material that can be seen.
He showed that the Milky Way had a mass 100 billion times that of the Sun.
In 1950, he suggested that comets came from a common region of the Solar System (now called the Oort cloud).
He found that the light from the Crab Nebula was polarized, and produced by synchrotron emission.
Honours
Awards
Bruce Medal of the Astronomical Society of the Pacific in 1942
Gold Medal of the Royal Astronomical Society in 1946
Janssen Medal from the French Academy of Sciences in 1946
Prix Jules Janssen, the highest award of the Société astronomique de France, the French astronomical society (1947)
Henry Norris Russell Lectureship of the American Astronomical Society in 1951
Gouden Ganzenveer in 1960
Vetlesen Prize in 1966
National Radio Astronomy Observatory, Jansky Prize, 1967
Karl Schwarzschild Medal of the Astronomische Gesellschaft in 1972
Association pour le Développement International de l'Observatoire de Nice, ADION medal, 1978
Balzan Prize for Astrophysics in 1984
Inamori Foundation, Kyoto Prize, 1987
Named after him
1691 Oort (asteroid)
Oort cloud (Öpik–Oort cloud)
Oort limit
Oort constants
Oort building, the current building of the Leiden Observatory
Memberships
Member of the Royal Netherlands Academy of Arts and Sciences (1937–1943, 1945–)
Member of the American Academy of Arts and Sciences (1946–)
Member of the United States National Academy of Sciences (1953–)
Member of the American Philosophical Society (1957–)
Upon his death, Nobel Prize winning astrophysicist Subrahmanyan Chandrasekhar remarked, "The great oak of Astronomy has been felled, and we are lost without its shadow."
References
Notes
Biographical materials
Blaauw, Adriaan, Biographical Encyclopedia of Astronomers (Springer, NY, 2007), pp. 853–55.
Chapman, David M.F., "Reflections: Jan Hendrik Oort – Swirling Galaxies and Clouds of Comets," JRASC 94, 53–54 (2000).
ESA Space Science, "Comet Pioneer: Jan Hendrik Oort," 27 February 2004.
Katgert-Merkelijn, J., University of Leiden, "Jan Oort, Astronomer".
Katgert-Merkelijn, J.K.: The letters and papers of Jan Hendrik Oort, as archived in the University Library, Leiden. Dordrecht, Kluwer Academic Publishers, 1997.
Oort, J.H., "Some Notes on My Life as an Astronomer," Annual Review of Astronomy & Astrophysics 19, 1 (1981).
van de Hulst, H.C., Biographical Memoirs of the Royal Society of London 40, 320–26 (1994).
van der Kruit, Pieter C.: Jan Hendrik Oort. Master of the Galactic System. Springer Nature, 2019.
van Woerden, Hugo, Willem N. Brouw, and Henk C. van de Hulst, eds., "Oort and the Universe: A Sketch of Oort's Research and Person" (D. Reidel, Dordrecht, 1980).
Obituaries
Blaauw, Adriaan, Zenit jaarg, 196–210 (1993).
Blaauw, Adriaan & Maarten Schmidt, PASP 105, 681 (1993).
Blaauw, Adriaan, "Oort im Memoriam," in Leo Blitz & Peter Teuben, eds., 169th IAU Symposium: Unsolved Problems of the
Milky Way, (Kluwer Acad. Publishers, 1996), pp. xv–xvi.
Pecker, J.-C., "La Vie et l'Oeuvre de Jan Hendrik Oort," Comptes Rendus de l'Acadèmie des Sciences: La Vie des Science 10, 5, 535–40 (1993).
van de Hulst, H.C., QJRAS 35, 237–42 (1994).
van den Bergh, Sidney, "An Astronomical Life: J.H. Oort (1900–1992)," JRASC 87, 73–76 (1993).
Woltjer, L., J. Astrophys. Astron. 14, 3–5 (1993).
Woltjer, Lodewijk, Physics Today 46, 11, 104–05 (1993).
Literature
External links
Oral history interview transcript with Jan Oort on 10 November 1977, American Institute of Physics, Niels Bohr Library & Archives
Jan Oort, astronomer (Leiden University Library, April–May 2000)—Online exhibition
1900 births
1992 deaths
Academic staff of Leiden University
Foreign associates of the National Academy of Sciences
Foreign members of the Royal Society
Foreign members of the Russian Academy of Sciences
Foreign members of the USSR Academy of Sciences
Kyoto laureates in Basic Sciences
Members of the American Philosophical Society
Members of the Royal Netherlands Academy of Arts and Sciences
People from Franekeradeel
Presidents of the International Astronomical Union
Recipients of the Gold Medal of the Royal Astronomical Society
University of Groningen alumni
Vetlesen Prize winners | Jan Oort | Astronomy | 7,748 |
38,241,742 | https://en.wikipedia.org/wiki/Cloud-based%20integration | Cloud-based integration is a form of systems integration business delivered as a cloud computing service that addresses data, process, service-oriented architecture (SOA) and application integration.
Description
Integration platform as a service (iPaaS) is a suite of cloud services enabling customers to develop, execute and govern integration flows between disparate applications. Under the cloud-based iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware. The iPaaS model allows businesses to achieve integration without a large investment in skills or licensed middleware software. iPaaS used to be regarded primarily as an integration tool for cloud-based software applications, used mainly by small to mid-sized businesses. Over time, a hybrid type of iPaaS that connects cloud services to on-premises systems (Hybrid-IT iPaaS) has become increasingly popular. Additionally, large enterprises are exploring new ways of integrating iPaaS into their existing IT infrastructures.
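As an illustration of what an "integration flow" is in practice, the Python sketch below moves one record between two hypothetical applications with a validate-transform-deliver pattern, the kind of flow an iPaaS lets customers configure without installing middleware. The application names, schemas and field mappings are invented for illustration and do not describe any particular iPaaS product.

```python
# Hypothetical integration flow: sync a new CRM contact into a
# billing system. On an iPaaS this would typically be configured
# rather than hand-coded; the systems, schemas and field names
# below are invented for illustration.

def transform(crm_contact: dict) -> dict:
    """Map the CRM's contact schema onto the billing system's schema."""
    return {
        "customer_name": f"{crm_contact['first_name']} {crm_contact['last_name']}",
        "email": crm_contact["email"].lower(),
        "currency": crm_contact.get("country_currency", "USD"),
    }

def run_flow(crm_contact: dict, deliver) -> None:
    """One flow run: validate, transform, deliver."""
    if "email" not in crm_contact:
        raise ValueError("contact has no email; flow aborted")
    deliver(transform(crm_contact))

# A stand-in delivery endpoint; a real flow would call the target API.
run_flow(
    {"first_name": "Ada", "last_name": "Lovelace", "email": "Ada@Example.com"},
    deliver=lambda record: print("POST /billing/customers", record),
)
```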
Cloud integration was created to break down data silos, improve connectivity and optimize business processes. Its popularity has grown alongside the increasing adoption of Software as a Service solutions.
Prior to the emergence of cloud computing in the early 2000s, integration could be categorized as either internal or business to business (B2B). Internal integration requirements were serviced through an on-premises middleware platform and typically utilized a service bus to manage exchange of data between systems. B2B integration was serviced through EDI gateways or value-added network (VAN). The advent of SaaS applications created a new kind of demand which was met through cloud-based integration. Since their emergence, many such services have also developed the capability to integrate legacy or on-premises applications, as well as function as EDI gateways.
The following essential features were proposed by one marketing company:
Deployed on a multi-tenant, elastic cloud infrastructure
Subscription model pricing (operating expense, not capital expenditure)
No software development (required connectors should already be available)
Users do not perform deployment or manage the platform itself
Presence of integration management and monitoring features
The emergence of this sector led to new cloud-based business process management tools that do not need to build integration layers, since those are now a separate service.
Drivers of growth include the need to integrate mobile app capabilities with proliferating API publishing resources and the growth in demand for the Internet of things functionalities as more 'things' connect to the Internet.
See also
Platform as a service
as a service
References
Cloud computing
System integration | Cloud-based integration | Engineering | 513 |
70,502,475 | https://en.wikipedia.org/wiki/Desert%20kite | Desert kites are dry stone wall structures, first discovered from the air during the 1920s, found mainly in Southwest Asia (the Middle East) but also in North Africa, Central Asia and Arabia. There are over 6,000 known desert kites, with sizes ranging from less than a hundred metres to several kilometres. They typically have a kite shape formed by two convergent "antennae" that run towards an enclosure, all formed by walls of dry stone less than one metre high, but variations exist.
Little is known about their ages, but the few dated examples appear to span the entire Holocene. The majority view on their purpose is that they were used as traps for hunting game animals such as gazelles, which were driven into the kites and hunted there.
Appearance
Desert kites are stone structures with a convergent shape, composed of linear piles of stones. The structures have lengths ranging from less than a hundred metres to several kilometres and heights of less than one metre, even accounting for erosion. There are often gaps in the lines, which were presumably either purposeful (left by the builders) or the consequence of lines being formed by alignments of cairns rather than a continuous row. There are a number of different shapes that are referred to as "desert kites", but one common feature of all such structures is a pair of lines forming two walls ("antennae") that converge into an enclosure ("head") with attached cells. Different regions have different prevalent kite types. Sometimes the existence of these cells is considered essential for a desert kite to be considered as such.
Research published in 2022 has shown that pits several metres deep often lie at the margins of enclosures, which have been interpreted as traps and killing pits. The kites enclose surface areas with a median of , but much larger and much smaller sizes are also known.
They are typically found in areas with elevated but flat topography or topographically complex terrain, but are rare or absent from sloping terrain, mountainous regions, or within endorheic basins, although they occur at the margins of mountains. Often, the terrain within the kite is much more open than the outside terrain, lacking vegetation and rocks. In general, the visibility of the kites from their inside is poor, which appears to be a purposeful feature of their construction; for example, the ends and entrances of the kites often coincide with slope breaks (places where the slope changes). Within a given region, the kites tend to have a preferred orientation. They are absent from humid climates and from certain hyperarid areas, and their use may have been influenced by Holocene climate changes.
Their often enormous size and conspicuousness in arid or semiarid terrain renders them visible in aerial images, while their construction in rough terrain makes them almost invisible on the ground. Sometimes, natural features like cliffs are used in conjunction with the artificial walls to form a kite. Clearing vegetation around the lines or using rocks with a different colour from the background has been documented in volcanic terrain. In Arabia, cairns and linear stone alignments have been found associated with kites.
Dating
Dating kites is difficult; various dating methods like radiocarbon dating and optically stimulated luminescence (OSL) have yielded ages ranging from the early to the late Holocene, and there are sporadic reports of their use in travel records. The early Holocene kites are the most complex man-made structures of that time. Some kites have been overprinted by later archaeological structures, destroyed, eroded or submerged, or built out over time to form more complex shapes. In some places, structures like cairns, tombs or square walls occur alongside kites.
Occurrence
Kites are known from the Middle East and Central Asia, with examples known mainly from Uzbekistan, Kazakhstan, Armenia, Turkey, Iraq, Syria, Lebanon, Israel, Palestine, Jordan, Saudi Arabia, Yemen, Egypt and Libya.
Kites have also been found in Mongolia and South Africa. Over 6,000 kites are known in Asia and the Middle East, and in some parts of Syria there are as many as 1 kite every , to the point that they are partially overlapping or form complicated structures. Similar large enclosures that were presumably used as traps have been found in Europe, where they were dated to Mesolithic and Neolithic age; North America, where structures known as drive lines have been used into the 19th century AD; South America; and Japan.
Function
Both archaeological studies and ethnographic accounts from the 19th and 20th century indicate that desert kites in the Middle East and North Africa were used as traps for wild game. A minority viewpoint is that they were used for livestock management. The disagreement stems mainly from a lack of factual evidence to support either hypothesis, from disputes on the interpretation of evidence, and from the extinction of traditions involving desert kites. There is almost no evidence of what happened to animals after they were trapped or which animals were targeted, but ethnographic analyses indicate that kites were used to hunt ungulates like gazelles, which live in groups and form defensive formations when threatened. The usage of traps in catching animals in the steppe is mentioned in the Epic of Gilgamesh. The construction of kites would have required coordinated work from multiple people and are thus indicative of social organization, even if the trapping of animals is a comparatively simple hunting technique. The use of kites in trapping animals is depicted in Israeli, Mongolian and Sinai petroglyphs; these drawings may not always be contemporaneous to the actual usage of the kites. Petroglyphs relating to kites have been found on kites.
Studies show that even low walls or linear structures like pipelines can effectively "guide" animals, which do not attempt to cross the lines even if they are physically able to do so, explaining the effectiveness of desert kites. The low visibility of the kite structures prevents the animals from recognizing the trap. The positioning of pits at the end of convergent enclosures and the presence of small walls delimitating pits from the enclosure would hide the pit from the animals until they are too close to change course in their panic. The entrances often are situated opposite to the direction of animal migration in the region on a wide scale, or of daily animal behaviours on a small scale. The use of desert kites may have had a significant impact on wild animals.
Research history
The usage of kite-like structures to trap animals is attested in 1831. Desert kites were originally identified in aerial images during the 1920s and were initially interpreted as animal traps, enclosures for domesticated animals or fortresses. They are referred to as "desert kites" or "kites", a name bestowed to them by the Royal Air Force pilot Group Captain Lionel Rees, in reference to their resemblance to toy kites. Given that they are commonly found in desert areas, they later became known as "desert kites", which is now the commonly used term in academic literature.
The advent of publicly available satellite imagery such as Google Earth and Google Maps during the 2010s, on which desert kites are visible to everyone, has led to a resurgence of interest in these archaeological sites and the realization that they are widespread. However, without fieldwork, it is difficult to gain a full picture of what they were. Only a very few kites have been excavated or subject to dating efforts, and many of these are not representative of the majority of kites.
Engraved depictions of the layout of desert kites have been found, some of which are schematic and others are like scaled models. Open questions in kite research include what they were used for, when they were used and why the technology is so widespread.
See also
Buffalo jump
Fishing weir
Game drive system
Hartashen Megalithic Avenue
Jawa, Jordan
Mustatil, similar formations in Arabia
Petroform
References
Sources
External links
Globalkites, worldwide database of kites
Archaeological terminology
History of hunting
Hunting methods
Open problems
Types of wall | Desert kite | Engineering | 1,620 |
19,447,362 | https://en.wikipedia.org/wiki/Beta%20Hydrae | Beta Hydrae, Latinized from β Hydrae, is a double star in the equatorial constellation of Hydra. Historically, Beta Hydrae was designated 28 Crateris, but the latter fell out of use when the IAU defined the permanent constellation boundaries in 1930. The system is faintly visible to the naked eye with a combined apparent visual magnitude that ranges around 4.29. It is located at a distance of approximately 310 light years from the Sun based on parallax.
The double nature of this system was first reported by English astronomer John Herschel in 1834. The brighter primary, designated component A, has an average visual magnitude of 4.67, while the secondary, component B, is of magnitude 5.47. As of 2002, the secondary is located at an angular separation of from the primary, along a position angle of 28.5°.
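Because brightness adds in flux rather than in magnitude, the combined magnitude of the pair follows from the standard relation m = -2.5 log10(10^(-0.4 m_A) + 10^(-0.4 m_B)). The short Python sketch below applies this formula to the component magnitudes quoted above; the small offset from the quoted combined value of about 4.29 reflects rounding and the primary's 0.04-magnitude variability.

```python
import math

# Combined apparent magnitude of Beta Hydrae A (4.67) and B (5.47):
# fluxes add, so convert each magnitude to flux, sum, convert back.
m_a, m_b = 4.67, 5.47
combined = -2.5 * math.log10(10 ** (-0.4 * m_a) + 10 ** (-0.4 * m_b))
print(f"Combined magnitude: {combined:.2f}")  # ~4.25
```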
The brighter component is an α2 Canum Venaticorum variable that changes in brightness with a period of 2.344 days and an amplitude of 0.04 in visual magnitude. It is a magnetic chemically-peculiar star with an average quadratic field strength of . The star is around 178 million years old with 3.4 times the mass of the Sun and 3.9 times the Sun's radius. On average, it is radiating 257 times the luminosity of the Sun from its photosphere at an effective temperature of 10,980 K.
In 1972, M. R. Molnar found a stellar classification of B9IIIp Si for β Hydrae A, showing an abundance anomaly for silicon. R. F. Garrison and R. O. Gray assigned it a class of kB8hB8HeA0VSi in 1994. This notation indicates the Calcium K line matches a star of class B8, the hydrogen lines also match a B8 spectrum, while the helium lines match an A-type main-sequence star of class A0V. They noted that the hydrogen lines have "curious rounded profiles". Later studies list abundance anomalies of silicon, chromium, and strontium.
Cultural significance
The Kalapalo people of Mato Grosso state in Brazil called this star and ψ Hya Kafanifani.
References
Ap stars
B-type giants
Alpha2 Canum Venaticorum variables
Double stars
Hydra (constellation)
Hydrae, Beta
CD-33 08018
Crateris, 28
103192
057936
4552 | Beta Hydrae | Astronomy | 496 |
17,124,400 | https://en.wikipedia.org/wiki/SkyTran | Skytran (stylized as skyTran) is a personal rapid transit system concept. It was first proposed by the inventor Douglas Malewicki in 1990 and was under development by Unimodal Inc. A prototype of the skyTran vehicle and a section of track have been constructed. The early magnetic levitation system, Inductrack, which SkyTran has replaced with a similar proprietary design, has been tested by General Atomics with a full-scale model. In 2010, Unimodal signed an agreement with NASA to test and develop skyTran. skyTran had proposed additional projects in France, Germany, India, Indonesia, Malaysia, the United Kingdom, and the United States.
System details
To minimize maintenance and make switching on and off the tracks efficient at high speeds, early versions of the system were proposed using the Inductrack passive magnetic levitation system instead of wheels. Passive maglev requires no external power to levitate vehicles. Rather, the magnetic repulsion is produced by the movement of the vehicle over shorted wire coils in the track. The cars would be driven by a linear motor in the track or vehicle. Therefore, the system would have no electromechanical moving parts, making it an entirely "solid-state" system.
In this first version, the passive maglev coils are enclosed and supported by a light shell called a guideway that also captures the vehicles mechanically to prevent derailment. Malewicki proposed a 3D grid design that avoids accident-prone intersections by grade separation, with guideways and their exit and entry ramps crossing above or below each other. Tracks would be supported above the ground by standard metal utility poles. They could also be attached to the sides of buildings.
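The defining behaviour of a passive maglev of this kind is that lift appears only with motion: in the commonly cited Post–Ryutov analysis of the Inductrack, lift rises with speed and saturates above a transition velocity, approximately F(v) = F_max / (1 + (v_t/v)^2). The Python sketch below evaluates that relation with invented parameter values; it is a generic Inductrack illustration under those assumptions, not skyTran's proprietary design.

```python
# Generic Inductrack-style lift curve: zero lift at rest, rising with
# speed and saturating above the transition velocity v_t. F_max and
# v_t are invented illustrative values, not skyTran parameters.

def lift_force(v, f_max=5000.0, v_t=3.0):
    """Approximate lift (N) at speed v (m/s): F = F_max / (1 + (v_t/v)^2)."""
    if v == 0.0:
        return 0.0
    return f_max / (1.0 + (v_t / v) ** 2)

for v in (0, 1, 3, 10, 30):
    print(f"{v:>3} m/s -> {lift_force(v):7.0f} N")
# Above a few multiples of v_t the lift is nearly constant, which is
# why such systems need wheels or skids only at very low speeds.
```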
After identifying problems with Inductrack and the cost associated with it, skyTran described an improved design during a Horizon BBC interview with skyTran at NASA Ames in Mountain View, CA.
New details about the levitation and motor were described in a June 2016 keynote speech, which showed the levitation stators to be plain aluminum plates and the motor stators to be aluminum tubes. The guideway is also significantly enlarged and wider than the vehicle, so that switching can be vertical, passing through the guideway. The guideway shape is shown at 16:26 in the referenced video. This new concept can be seen in a short simulation film. Instead of the purely passive Inductrack system, the new mechanism modifies lift by mechanically angling the magnetic pads and requires servo-controlled actuation. The lift control also performs the switching by moving vertically through the rails.
The patents filed by skyTran for this new system are and
History
Malewicki conceived the basic idea of skyTran in 1990, filing a US patent application that year that was granted as US Patent #5108052 in 1992.
He published several technical papers on skyTran in the following years. In 1991, he presented a paper entitled "People Pods Miniature Magnetic Levitation Vehicles for Personal Non-Stop Transportation" to the Society of Automotive Engineers (SAE) Future Transportation Conference in Portland, Oregon. The paper is a thorough description of the concept at that point, although some important features of the current skyTran design are only discussed as options, including magnetic levitation rather than wheels and hanging below the guideway instead of riding above it.
The paper describes how Malewicki had built and driven a freeway-legal 154-MPG car in 1981, but realised it could never be safe on a street surrounded by far larger and heavier vehicles. Elevated tracks would allow a very light vehicle to be safe. They are also basic to the system's inexpensiveness, because there is no need to acquire a huge right of way and tear down buildings. It presents an aerodynamic analysis (Malewicki is an aerospace engineer) supporting claims of very high energy efficiency (the paper claims for skyTran's current two-passenger tandem design, though the Unimodal site claims only, "over "). It also described how a very light vehicle that can squeeze both surfaces of a track simultaneously could reliably achieve a 6-G deceleration, allowing it to brake safely to a stop from in just .
In 2008, energy shortages stimulated renewed interest in green vehicle proposals such as skyTran. Discussion under the "Maglev skyTran" topic cited a number of skyTran and personal rapid transit ideas, such as passengers exiting and boarding at off-line elevated "portal" stops while high-speed traffic continues to speed by on the main line.
In September 2009, the US National Aeronautics and Space Administration (NASA) signed a Space Act joint development agreement with Unimodal. Unimodal has tested prototype vehicles on short guideway sections at NASA's Ames Research Center, in Mountain View, California. NASA control and vehicle dynamics simulation software was made available to Unimodal, which hired NASA subcontractors to program it using US DOT grant funding.
In June 2014, Unimodal and Israel Aerospace Industries (IAI) contracted to build a 400-500 meter elevated loop test track on IAI's campus in central Israel. If the pilot project is successful, IAI will build a commercial skyTran network in the city of Tel Aviv, Herzliya and Netanya. In April 2015, the Herzliya city council approved a budget for the skyTran project.
In June 2016, skyTran signed a memorandum of understanding in the United Arab Emirates for the study and implementation of a personal rapid transit system in Yas Island.
In 2018, it was announced that Indian conglomerate Reliance Industries had acquired a 12.7% stake in SkyTran through its subsidiary Reliance Strategic Business Ventures Limited. As part of the deal, Reliance would supply communication equipment and a prototype would be built in India.
In April 2019, SkyTran signed a memorandum of understanding with Eilat to build an elevated rail system serving Ramon Airport.
In June 2019 a memorandum of understanding was signed between skyTran and the Roads and Transport Authority (RTA) of Dubai in the United Arab Emirates to develop a Sky Pod suspended transit system.
In February 2021, Reliance Industries increased its shareholding in skyTran to 54.46% with an additional investment of $26.76 million making Reliance Industries Limited the majority stakeholder in SkyTran.
In September 2023, skyTran was shuttered and filed for bankruptcy after Reliance Industries declined to provide additional funding, even though a full-scale indoor prototype was reportedly only 6 to 9 months from completion.
See also
Gondola lift
Maglev train proposals
Transport
Sustainable transport
Levicar
String transport
References
External links
skyTran official site
Archive of old skyTran.net site
Are Magnetically Levitating 'Sky Pods' the Future of Travel?
NASA skyTran press release
Maglev
Proposed monorails
Reliance Industries subsidiaries
Reliance Industries
Personal rapid transit
Vertical transport devices
Projects disestablished in 2023
Transport companies disestablished in 2023 | SkyTran | Technology | 1,419 |
368,389 | https://en.wikipedia.org/wiki/Ultrafiltration | Ultrafiltration (UF) is a variety of membrane filtration in which forces such as pressure or concentration gradients lead to a separation through a semipermeable membrane. Suspended solids and solutes of high molecular weight are retained in the so-called retentate, while water and low molecular weight solutes pass through the membrane in the permeate (filtrate). This separation process is used in industry and research for purifying and concentrating macromolecular (10^3–10^6 Da) solutions, especially protein solutions.
Ultrafiltration is not fundamentally different from microfiltration. Both separate based on size exclusion or particle capture. It is fundamentally different from membrane gas separation, which separates based on different amounts of absorption and different rates of diffusion. Ultrafiltration membranes are defined by the molecular weight cut-off (MWCO) of the membrane used. Ultrafiltration is applied in cross-flow or dead-end mode.
Applications
Industries such as chemical and pharmaceutical manufacturing, food and beverage processing, and waste water treatment, employ ultrafiltration in order to recycle flow or add value to later products. Blood dialysis also utilizes ultrafiltration.
Drinking water
Ultrafiltration can be used for the removal of particulates and macromolecules from raw water to produce potable water. It has been used to either replace existing secondary (coagulation, flocculation, sedimentation) and tertiary filtration (sand filtration and chlorination) systems employed in water treatment plants or as standalone systems in isolated regions with growing populations. When treating water with high suspended solids, UF is often integrated into the process, utilising primary (screening, flotation, filtration) and some secondary treatments as pre-treatment stages. UF processes are currently preferred over traditional treatment methods for the following reasons:
No chemicals required (aside from cleaning)
Constant product quality regardless of feed quality
Compact plant size
Capable of exceeding regulatory standards of water quality, achieving 90–100% pathogen removal
UF processes are currently limited by the high cost incurred due to membrane fouling and replacement. Additional pretreatment of feed water is required to prevent excessive damage to the membrane units.
In many cases UF is used for pre filtration in reverse osmosis (RO) plants to protect the RO membranes.
Protein concentration
UF is used extensively in the dairy industry; particularly in the processing of cheese whey to obtain whey protein concentrate (WPC) and lactose-rich permeate. In a single stage, a UF process is able to concentrate the whey 10–30 times the feed.
The original alternative to membrane filtration of whey was using steam heating followed by drum drying or spray drying. The product of these methods had limited applications due to its granulated texture and insolubility. Existing methods also had inconsistent product composition, high capital and operating costs and due to the excessive heat used in drying would often denature some of the proteins.
Compared to traditional methods, UF processes used for this application:
Are more energy efficient
Have consistent product quality, 35–80% protein product depending on operating conditions
Do not denature proteins as they use moderate operating conditions
The potential for fouling is widely discussed, being identified as a significant contributor to decline in productivity. Cheese whey contains high concentrations of calcium phosphate which can potentially lead to scale deposits on the membrane surface. As a result, substantial pretreatment must be implemented to balance pH and temperature of the feed to maintain solubility of calcium salts.
Other applications
Filtration of effluent from paper pulp mill
Cheese manufacture, see ultrafiltered milk
Removal of some bacteria from milk
Process and waste water treatment
Enzyme recovery
Fruit juice concentration and clarification
Dialysis and other blood treatments
Desalting and solvent-exchange of proteins (via diafiltration)
Laboratory grade manufacturing
Radiocarbon dating of bone collagen
Recovery of electrodeposition paints
Treatment of oil and latex emulsions
Recovery of lignin compounds in spent pulping liquors
Principles
The basic operating principle of ultrafiltration is a pressure-induced separation of solutes from a solvent through a semipermeable membrane. The relationship between the applied pressure on the solution to be separated and the flux through the membrane is most commonly described by the Darcy equation:
J = TMP / (μ · R_t),

where J is the flux (flow rate per membrane area), TMP is the transmembrane pressure (the pressure difference between the feed and permeate streams), μ is the solvent viscosity and R_t is the total resistance (the sum of the membrane and fouling resistances).
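As a minimal numerical illustration of the Darcy relation above, the Python sketch below computes the permeate flux for an assumed transmembrane pressure, solvent viscosity and total resistance; the parameter values are typical-order-of-magnitude assumptions rather than data for a specific membrane.

```python
# Minimal sketch of the Darcy flux relation for ultrafiltration.
# The parameter values below are illustrative assumptions, not data
# from any particular membrane.

def permeate_flux(tmp_pa, viscosity_pa_s, resistance_per_m):
    """Permeate flux J = TMP / (mu * R_t), in m^3 per m^2 per s (= m/s)."""
    return tmp_pa / (viscosity_pa_s * resistance_per_m)

tmp = 2.0e5            # transmembrane pressure, Pa (~2 bar)
mu = 1.0e-3            # water viscosity at ~20 C, Pa*s
r_membrane = 3.0e12    # clean-membrane resistance, 1/m (assumed)
r_fouling = 1.0e12     # fouling resistance, 1/m (assumed)

j = permeate_flux(tmp, mu, r_membrane + r_fouling)
# Convert to the customary L m^-2 h^-1 (LMH): 1 m/s = 3.6e6 LMH
print(f"Flux: {j * 3.6e6:.1f} L m^-2 h^-1")  # ~180 LMH with these values
```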
Membrane fouling
Concentration polarization
When filtration occurs, the local concentration of rejected material at the membrane surface increases and can become saturated. In UF, the increased ion concentration can develop an osmotic pressure on the feed side of the membrane, which reduces the effective TMP of the system and therefore the permeation rate. The growth of the concentrated layer at the membrane wall decreases the permeate flux, because the added resistance reduces the driving force for solvent transport through the membrane surface. Concentration polarization (CP) affects almost all membrane separation processes. In RO, the solutes retained at the membrane layer result in an osmotic pressure higher than that of the bulk stream, so higher pressures are required to overcome it. Concentration polarization plays a more dominant role in ultrafiltration than in microfiltration because of the smaller membrane pore size. Concentration polarization differs from fouling in that it has no lasting effects on the membrane itself and can be reversed by relieving the TMP. It does, however, have a significant effect on many types of fouling.
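The steady state of this boundary layer is commonly described by the film model, in which convection of solute toward the membrane by the permeate flux J is balanced by back-diffusion, giving a wall concentration C_w = C_b · exp(J/k) for a fully retained solute, where k is the boundary-layer mass-transfer coefficient. The Python sketch below illustrates the relation with assumed values for J and k; the numbers are not taken from any particular system.

```python
import math

# Film-model estimate of concentration polarization at a UF membrane:
# C_wall / C_bulk = exp(J / k), valid for a fully retained solute.
# Both numbers below are illustrative assumptions.

j = 5.0e-5   # permeate flux, m/s (~180 L m^-2 h^-1)
k = 2.0e-5   # boundary-layer mass-transfer coefficient, m/s

polarization = math.exp(j / k)
print(f"Wall concentration is {polarization:.1f}x the bulk concentration")
# exp(2.5) ~ 12.2: even moderate fluxes can raise the wall
# concentration an order of magnitude, which is why cross-flow
# velocity (a higher k) is used to suppress polarization.
```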
Types of fouling
Types of Foulants
Foulants of UF membranes can be grouped into the following four categories:
biological substances
macromolecules
particulates
ions
Particulate deposition
The following models describe the mechanisms of particulate deposition on the membrane surface and in the pores (a numerical sketch of the corresponding blocking laws follows the list):
Standard blocking: macromolecules are uniformly deposited on pore walls
Complete blocking: membrane pore is completely sealed by a macromolecule
Cake formation: accumulated particles or macromolecules form a fouling layer on the membrane surface, in UF this is also known as a gel layer
Intermediate blocking: when macromolecules deposit into pores or onto already blocked pores, contributing to cake formation
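These deposition mechanisms are commonly fitted with Hermia's constant-pressure blocking laws, whose flux-decline solutions take simple closed forms: exponential decay for complete blocking, J0/(1 + kt)^2 for standard blocking, J0/(1 + kt) for intermediate blocking and J0/(1 + kt)^(1/2) for cake filtration. The Python sketch below compares the four forms; the initial flux and rate constants are arbitrary illustrative assumptions.

```python
import math

# Constant-pressure flux-decline forms of Hermia's blocking laws
# (complete, standard and intermediate blocking, cake filtration).
# The initial flux and rate constant are arbitrary illustrative values.

J0 = 100.0  # initial flux, L m^-2 h^-1 (assumed)
k = 0.05    # rate constant, 1/min (assumed, same for all laws here)

laws = {
    "complete":     lambda t: J0 * math.exp(-k * t),
    "standard":     lambda t: J0 / (1.0 + k * t) ** 2,
    "intermediate": lambda t: J0 / (1.0 + k * t),
    "cake":         lambda t: J0 / math.sqrt(1.0 + k * t),
}

for name, flux in laws.items():
    print(f"{name:>12}: " + "  ".join(f"{flux(t):6.1f}" for t in (0, 10, 30, 60)))
# Cake filtration gives the slowest flux decline and complete pore
# blocking the fastest -- one way fouling mechanisms are diagnosed
# from flux-versus-time data.
```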
Scaling
As a result of concentration polarization at the membrane surface, increased ion concentrations may exceed solubility thresholds and precipitate on the membrane surface. These inorganic salt deposits can block pores causing flux decline, membrane degradation and loss of production. The formation of scale is highly dependent on factors affecting both solubility and concentration polarization including pH, temperature, flow velocity and permeation rate.
Biofouling
Microorganisms will adhere to the membrane surface forming a gel layer – known as biofilm. The film increases the resistance to flow, acting as an additional barrier to permeation. In spiral-wound modules, blockages formed by biofilm can lead to uneven flow distribution and thus increase the effects of concentration polarization.
Membrane arrangements
Depending on the shape and material of the membrane, different modules can be used for ultrafiltration process. Commercially available designs in ultrafiltration modules vary according to the required hydrodynamic and economic constraints as well as the mechanical stability of the system under particular operating pressures. The main modules used in industry include:
Tubular modules
The tubular module design uses polymeric membranes cast on the inside of plastic or porous paper components with diameters typically in the range of 5–25 mm with lengths from 0.6–6.4 m. Multiple tubes are housed in a PVC or steel shell. The feed of the module is passed through the tubes, accommodating radial transfer of permeate to the shell side. This design allows for easy cleaning however the main drawback is its low permeability, high volume hold-up within the membrane and low packing density.
Hollow fibre
This design is conceptually similar to the tubular module, with a shell-and-tube arrangement. A single module can consist of 50 to thousands of hollow fibres, which are self-supporting, unlike the tubular design. The diameter of each fibre ranges from 0.2–3 mm, with the feed flowing in the tube and the product permeate collected radially on the outside. An advantage of self-supporting membranes is the ease with which they can be cleaned, owing to their ability to be backflushed. Replacement costs, however, are high, as one faulty fibre requires the whole bundle to be replaced. Because the fibres are of small diameter, this design is also prone to blockage.
Spiral-wound modules
These modules are composed of flat membrane sheets separated by a thin meshed spacer material which serves as a porous plastic screen support. The sheets are rolled around a central perforated tube and fitted into a tubular steel pressure vessel casing. The feed solution passes over the membrane surface and the permeate spirals into the central collection tube. Spiral-wound modules are a compact and cheap alternative in ultrafiltration design, offer a high volumetric throughput and can also be easily cleaned. However, they are limited by the thin channels, where feed solutions with suspended solids can result in partial blockage of the membrane pores.
Plate and frame
This uses a membrane placed on a flat plate separated by a mesh-like material. The feed is passed through the system, from which permeate is separated and collected from the edge of the plate. Channel lengths can range from 10–60 cm and channel heights from 0.5–1.0 mm. This module provides low volume hold-up, relatively easy replacement of the membrane, and the ability to feed viscous solutions because of the low channel height, which is unique to this particular design.
Process characteristics
The process characteristics of a UF system are highly dependent on the type of membrane used and its application. Manufacturers' specifications of the membrane tend to define the typical operating limits of the process.
Process design considerations
When designing a new membrane separation facility or considering its integration into an existing plant, there are many factors which must be considered. For most applications a heuristic approach can be applied to determine many of these characteristics to simplify the design process. Some design areas include:
Pre-treatment
Treatment of feed prior to the membrane is essential to prevent damage to the membrane and minimize the effects of fouling which greatly reduce the efficiency of the separation. Types of pre-treatment are often dependent on the type of feed and its quality. For example, in wastewater treatment, household waste and other particulates are screened. Other types of pre-treatment common to many UF processes include pH balancing and coagulation. Appropriate sequencing of each pre-treatment phase is crucial in preventing damage to subsequent stages. Pre-treatment can even be employed simply using dosing points.
Membrane specifications
Material
Most UF membranes use polymer materials (polysulfone, polypropylene, cellulose acetate, polylactic acid) however ceramic membranes are used for high temperature applications.
Pore size
A general rule for choice of pore size in a UF system is to use a membrane with a pore size one tenth that of the particle size to be separated. This limits the number of smaller particles entering the pores and adsorbing to the pore surface. Instead they block the entrance to the pores allowing simple adjustments of cross-flow velocity to dislodge them.
Operation strategy
Flowtype
UF systems can either operate with cross-flow or dead-end flow. In dead-end filtration the flow of the feed solution is perpendicular to the membrane surface. On the other hand, in cross flow systems the flow passes parallel to the membrane surface. Dead-end configurations are more suited to batch processes with low suspended solids as solids accumulate at the membrane surface therefore requiring frequent backflushes and cleaning to maintain high flux. Cross-flow configurations are preferred in continuous operations as solids are continuously flushed from the membrane surface resulting in a thinner cake layer and lower resistance to permeation.
Flow velocity
Flow velocity is especially critical for hard water or liquids containing suspensions in preventing excessive fouling. Higher cross-flow velocities can be used to enhance the sweeping effect across the membrane surface therefore preventing deposition of macromolecules and colloidal material and reducing the effects of concentration polarization. Expensive pumps are however required to achieve these conditions.
Flow temperature
To avoid excessive damage to the membrane, it is recommended to operate a plant at the temperature specified by the membrane manufacturer. In some instances, however, temperatures beyond the recommended region are required to minimise the effects of fouling. Economic analysis of the process is required to find a compromise between the increased cost of membrane replacement and the productivity of the separation.
Pressure
Pressure drops over multi-stage separation can result in a drastic decline in flux performance in the latter stages of the process. This can be improved using booster pumps to increase the TMP in the final stages. This will incur a greater capital and energy cost which will be offset by the improved productivity of the process. With a multi-stage operation, retentate streams from each stage are recycled through the previous stage to improve their separation efficiency.
Multi-stage, multi-module
Multiple stages in series can be applied to achieve higher purity permeate streams. Due to the modular nature of membrane processes, multiple modules can be arranged in parallel to treat greater volumes.
Post-treatment
Post-treatment of the product streams is dependent on the composition of the permeate and retentate and their end-use or government regulation. In cases such as milk separation both streams (milk and whey) can be collected and made into useful products. Additional drying of the retentate will produce whey powder. In the paper mill industry, the retentate (non-biodegradable organic material) is incinerated to recover energy and the permeate (purified water) is discharged into waterways. It is essential for the permeate water to be pH balanced and cooled to avoid thermal pollution of waterways and alteration of their pH.
Cleaning
Cleaning of the membrane is done regularly to prevent the accumulation of foulants and reverse the degrading effects of fouling on permeability and selectivity.
Regular backwashing is often conducted every 10 min for some processes to remove cake layers formed on the membrane surface. By pressurising the permeate stream and forcing it back through the membrane, accumulated particles can be dislodged, improving the flux of the process. Backwashing is limited in its ability to remove more complex forms of fouling such as biofouling, scaling or adsorption to pore walls.
These types of foulants require chemical cleaning to be removed. The common types of chemicals used for cleaning are:
Acidic solutions for the control of inorganic scale deposits
Alkali solutions for removal of organic compounds
Biocides or disinfection such as chlorine or peroxide when bio-fouling is evident
When designing a cleaning protocol it is essential to consider:
Cleaning time – Adequate time must be allowed for chemicals to interact with foulants and permeate into the membrane pores. However, if the process is extended beyond its optimum duration it can lead to denaturation of the membrane and deposition of removed foulants. The complete cleaning cycle including rinses between stages may take as long as 2 hours to complete.
Aggressiveness of chemical treatment – With a high degree of fouling it may be necessary to employ aggressive cleaning solutions to remove fouling material. However, in some applications this may not be suitable if the membrane material is sensitive, leading to enhanced membrane ageing.
Disposal of cleaning effluent – The release of some chemicals into wastewater systems may be prohibited or regulated, so this must be considered when designing a cleaning protocol. For example, the use of phosphoric acid may result in high levels of phosphates entering waterways and must be monitored and controlled to prevent eutrophication.
Summary of common types of fouling and their respective chemical treatments
New developments
In order to increase the life-cycle of membrane filtration systems, energy efficient membranes are being developed in membrane bioreactor systems. Technology has been introduced which allows the power required to aerate the membrane for cleaning to be reduced whilst still maintaining a high flux level. Mechanical cleaning processes have also been adopted using granulates as an alternative to conventional forms of cleaning; this reduces energy consumption and also reduces the area required for filtration tanks.
Membrane properties have also been enhanced to reduce fouling tendencies by modifying surface properties. This can be noted in the biotechnology industry where membrane surfaces have been altered in order to reduce the amount of protein binding. Ultrafiltration modules have also been improved to allow for more membrane for a given area without increasing its risk of fouling by designing more efficient module internals.
The current pre-treatment of seawater desalination uses ultrafiltration modules that have been designed to withstand high temperatures and pressures whilst occupying a smaller footprint. Each module vessel is self-supported and resistant to corrosion and accommodates easy removal and replacement of the module without the cost of replacing the vessel itself.
See also
List of wastewater treatment technologies
References
External links
Filtration techniques
Water treatment
Membrane technology | Ultrafiltration | Chemistry,Engineering,Environmental_science | 3,521 |
461,477 | https://en.wikipedia.org/wiki/Incompressible%20flow | In fluid mechanics, or more generally continuum mechanics, incompressible flow (isochoric flow) refers to a flow in which the material density of each fluid parcel — an infinitesimal volume that moves with the flow velocity — is time-invariant. An equivalent statement that implies incompressible flow is that the divergence of the flow velocity is zero (see the derivation below, which illustrates why these conditions are equivalent).
Incompressible flow does not imply that the fluid itself is incompressible. It is shown in the derivation below that under the right conditions even the flow of compressible fluids can, to a good approximation, be modelled as incompressible flow.
Derivation
The fundamental requirement for incompressible flow is that the density, ρ, is constant within a small element volume, dV, which moves at the flow velocity u. Mathematically, this constraint implies that the material derivative (discussed below) of the density must vanish to ensure incompressible flow. Before introducing this constraint, we must apply the conservation of mass to generate the necessary relations. The mass is calculated by a volume integral of the density ρ: m = ∭_V ρ dV.
The conservation of mass requires that the time derivative of the mass inside a control volume be equal to the mass flux, J, across its boundaries. Mathematically, we can represent this constraint in terms of a surface integral: d/dt ∭_V ρ dV = −∬_S J · dS.
The negative sign in the above expression ensures that outward flow results in a decrease in the mass with respect to time, using the convention that the surface area vector points outward. Now, using the divergence theorem we can derive the relationship between the flux and the partial time derivative of the density: ∬_S J · dS = ∭_V (∇·J) dV,
therefore: ∂ρ/∂t = −∇·J.
The partial derivative of the density with respect to time need not vanish to ensure incompressible flow. When we speak of the partial derivative of the density with respect to time, we refer to this rate of change within a control volume of fixed position. By letting the partial time derivative of the density be non-zero, we are not restricting ourselves to incompressible fluids, because the density can change as observed from a fixed position as fluid flows through the control volume. This approach maintains generality, and not requiring that the partial time derivative of the density vanish illustrates that compressible fluids can still undergo incompressible flow. What interests us is the change in density of a control volume that moves along with the flow velocity, u. The flux is related to the flow velocity through the following function: J = ρu.
So that the conservation of mass implies that: ∂ρ/∂t + ∇·(ρu) = 0.
The previous relation (where we have used the appropriate product rule) is known as the continuity equation. Now, we need the following relation about the total derivative of the density (where we apply the chain rule): dρ/dt = ∂ρ/∂t + (∂ρ/∂x)(dx/dt) + (∂ρ/∂y)(dy/dt) + (∂ρ/∂z)(dz/dt).
So if we choose a control volume that is moving at the same rate as the fluid (i.e. (dx/dt, dy/dt, dz/dt) = u), then this expression simplifies to the material derivative: Dρ/Dt = ∂ρ/∂t + u·∇ρ.
And so using the continuity equation derived above, we see that: Dρ/Dt = −ρ(∇·u).
A change in the density over time would imply that the fluid had either compressed or expanded (or that the mass contained in our constant volume, dV, had changed), which we have prohibited. We must then require that the material derivative of the density vanishes, and equivalently (for non-zero density) so must the divergence of the flow velocity: Dρ/Dt = 0, and hence ∇·u = 0.
And so beginning with the conservation of mass and the constraint that the density within a moving volume of fluid remains constant, it has been shown that an equivalent condition required for incompressible flow is that the divergence of the flow velocity vanishes.
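The divergence-free condition can be verified symbolically. The following minimal sketch (the velocity field is an arbitrary example chosen for illustration, not taken from the source) checks that a simple planar stagnation-point flow has zero divergence:

import sympy as sp

x, y = sp.symbols('x y')
# Planar velocity field u = (x, -y), a classic stagnation-point flow
u_x, u_y = x, -y
divergence = sp.diff(u_x, x) + sp.diff(u_y, y)
print(divergence)  # prints 0: the field satisfies the incompressibility condition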
Relation to compressibility
In some fields, a measure of the incompressibility of a flow is the change in density as a result of the pressure variations. This is best expressed in terms of the compressibility β = (1/ρ)(dρ/dp).
If the compressibility is acceptably small, the flow is considered incompressible.
Relation to solenoidal field
An incompressible flow is described by a solenoidal flow velocity field. But a solenoidal field, besides having a zero divergence, also has the additional connotation of having non-zero curl (i.e., rotational component).
Otherwise, if an incompressible flow also has a curl of zero, so that it is also irrotational, then the flow velocity field is actually Laplacian.
Difference from material
As defined earlier, an incompressible (isochoric) flow is the one in which ∇·u = 0.
This is equivalent to saying that Dρ/Dt = ∂ρ/∂t + u·∇ρ = 0,
i.e. the material derivative of the density is zero. Thus if one follows a material element, its mass density remains constant. Note that the material derivative consists of two terms. The first term, ∂ρ/∂t, describes how the density of the material element changes with time. This term is also known as the unsteady term. The second term, u·∇ρ, describes the changes in the density as the material element moves from one point to another. This is the advection term (convection term for a scalar field). For a flow to count as incompressible, the sum of these terms should vanish.
On the other hand, a homogeneous, incompressible material is one that has constant density throughout. For such a material, ρ = constant. This implies that ∂ρ/∂t = 0 and ∇ρ = 0 independently.
From the continuity equation it follows that ∇·u = 0.
Thus homogeneous materials always undergo flow that is incompressible, but the converse is not true. That is, compressible materials might not experience compression in the flow.
Related flow constraints
In fluid dynamics, a flow is considered incompressible if the divergence of the flow velocity is zero. However, related formulations can sometimes be used, depending on the flow system being modelled. Some versions are described below:
Incompressible flow: ∇·u = 0. This can assume either constant density (strictly incompressible) or varying density flow. The varying density set accepts solutions involving small perturbations in density, pressure and/or temperature fields, and can allow for pressure stratification in the domain.
Anelastic flow: ∇·(ρ₀u) = 0, where ρ₀ is a base-state density. Principally used in the field of atmospheric sciences, the anelastic constraint extends incompressible flow validity to stratified density and/or temperature as well as pressure. This allows the thermodynamic variables to relax to an 'atmospheric' base state seen in the lower atmosphere when used in the field of meteorology, for example. This condition can also be used for various astrophysical systems.
Low Mach-number flow, or pseudo-incompressibility: ∇·(αu) = β, for suitable flow-dependent functions α and β. The low Mach-number constraint can be derived from the compressible Euler equations using scale analysis of non-dimensional quantities. The restraint, like the previous in this section, allows for the removal of acoustic waves, but also allows for large perturbations in density and/or temperature. The assumption is that the flow remains within a Mach number limit (normally less than 0.3) for any solution using such a constraint to be valid (see the worked example after this list). Again, in accordance with all incompressible flows the pressure deviation must be small in comparison to the pressure base state.
These methods make differing assumptions about the flow, but all take into account the general form of the constraint ∇·(αu) = β for general flow-dependent functions α and β.
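As a rough worked example of the Mach-number criterion quoted above (an illustration only; the sea-level sound speed of about 343 m/s is an assumed textbook value, not from the source):

speed_of_sound = 343.0   # m/s, assumed value for air at sea level
mach_limit = 0.3         # validity threshold quoted above
max_speed = mach_limit * speed_of_sound
print(max_speed)         # about 103 m/s; faster flows call for a compressible treatment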
Numerical approximations
The stringent nature of incompressible flow equations means that specific mathematical techniques have been devised to solve them. Some of these methods include:
The projection method (both approximate and exact)
Artificial compressibility technique (approximate)
Compressibility pre-conditioning
See also
Bernoulli's principle
Euler equations (fluid dynamics)
Navier–Stokes equations
References
Fluid mechanics | Incompressible flow | Engineering | 1,570 |
3,480,509 | https://en.wikipedia.org/wiki/Cottage%20flat | Cottage flats, also known as four-in-a-block flats, are a style of housing common in Scotland, where there are single floor dwellings at ground level, and similar dwellings on the floor above. All have doors directly to the outside of the building, rather than into a 'close', or common staircase, although some do retain a shared entrance. The name 'cottage flats' is confusing as before the mid-1920s cottage housing referred to a single house, normally semi-detached, which contained living accommodation downstairs and bedrooms above. These were phased out by most urban local authorities as wasteful of space and economy after central government subsidies were reduced in the 1924 Housing Act.
The majority consist of four dwellings per block (which appear like semi-detached houses), although such buildings are sometimes in the form of longer terraces. Many were built in the 1920s and 1930s as part of the 'Homes fit for heroes' programme, but it has proved a popular housing model and examples are still being built today. Cottage flats are the predominant form of housing in many parts of Glasgow, including Knightswood, Mosspark, Croftfoot and Carntyne. In Edinburgh they are found in Lochend, Saughton, Stenhouse and Prestonfield.
In Edinburgh, colony houses are mid-Victorian cottage flat-type dwellings which are a similar idea, but of a very distinctly different architecture, being always found in terraces, never as semi-detached type cottages. They normally consist of a first floor flat with a two storey upper flat (known as a double upper), in early developments, accessed from an external stair. Both flats have their own garden either side of the building. The popularity of this arrangement has led to new developments echoing the form.
Tyneside flats in Newcastle and Sunderland, flats on the Warner Estate in Walthamstow, London and Polish flats in Milwaukee, Wisconsin are also similar in form. Some of the model dwellings in Noel Park, London are cottage flats, typically in the middle of ordinary terraced housing.
The Scottish cottage flat and Tyneside flat models were brought to Montreal, Canada in the late 1850s where, usually in terrace form, they became the dominant housing type of the remainder of the 19th century. In Montreal, this housing is known as duplexes (sometimes four-plexes when referring to the paired arrangement). During the first third of the 20th century, the three-story variant – the triplex – became dominant, usually built with an outdoor stairway.
See also
Two-up two-down, referring to the rooms in a small house
References
Housing in Scotland
House types in the United Kingdom | Cottage flat | Engineering | 532 |
1,351,407 | https://en.wikipedia.org/wiki/Ali%20Erdemir | Ali Erdemir, born on July 2, 1954, in Kadirli, Adana, Turkey, is a Turkish American materials scientist specializing in surface engineering and tribology.
Education and career
Erdemir graduated from the Metallurgy Department of the Istanbul Technical University in 1977. After working for two years at the İskenderun Iron and Steel Company in Turkey as an engineer, he went to the USA for doctoral studies. Erdemir received a master's degree in materials engineering and a doctorate in materials science and engineering from the Georgia Institute of Technology in 1982 and 1986, respectively. After completing his military service in Turkey, Erdemir began in 1987 to work as an assistant metallurgist at the Argonne National Laboratory near Chicago, which is operated by the University of Chicago for the U.S. Department of Energy. In 2020, he relocated to Texas, where he currently holds an appointment as an Eminent Professor in the Department of Mechanical Engineering at Texas A&M University in College Station, Texas.
Recognition and awards
Erdemir is member of several professional societies and has published more than 100 scientific papers in the fields of friction, wear, lubrication of materials and coatings.
He was awarded an honorary doctorate degree from the Anadolu University, Eskişehir, Turkey in 1998.
Erdemir has been awarded international prizes including R&D 100 Awards in 1991, 1998 and 2003 for a boric acid lubricant and carbon coatings with very low friction coefficients. He has patent rights for six of his inventions.
He received the Mayo D. Hersey Award from the American Society of Mechanical Engineers in 2015.
In 2019, Erdemir was elected a member of the National Academy of Engineering for contributions to the science and technology of friction, lubrication, and wear.
He was awarded the International Award from the Society of Tribologists and Lubrication Engineers (STLE) in 2020.
He is currently President of the International Tribology Council.
References
External links
Homepage at Texas A&M
Turkish non-fiction writers
Texas A&M University faculty
Turkish materials scientists
Argonne National Laboratory people
Members of the United States National Academy of Engineering
Turkish academics
People from Kadirli
Georgia Tech alumni
Turkish expatriates in the United States
Living people
Year of birth missing (living people)
Tribologists | Ali Erdemir | Materials_science | 484 |
8,964,665 | https://en.wikipedia.org/wiki/Category%20utility | Category utility is a measure of "category goodness" defined by Gluck and Corter (1985) and Corter and Gluck (1992). It attempts to maximize both the probability that two objects in the same category have attribute values in common, and the probability that objects from different categories have different attribute values. It was intended to supersede more limited measures of category goodness such as "cue validity" and the "collocation index". It provides a normative information-theoretic measure of the predictive advantage gained by the observer who possesses knowledge of the given category structure (i.e., the class labels of instances) over the observer who does not possess knowledge of the category structure. In this sense the motivation for the category utility measure is similar to the information gain metric used in decision tree learning. In certain presentations, it is also formally equivalent to the mutual information, as discussed below. A review of category utility in its probabilistic incarnation, with applications to machine learning, has also been published.
Probability-theoretic definition of category utility
The probability-theoretic definition of category utility given by Gluck and Corter (1985) and Corter and Gluck (1992) is as follows:
CU(C, F) = Σ_{c_j ∈ C} p(c_j) [ Σ_i Σ_k p(f_ik | c_j)² − Σ_i Σ_k p(f_ik)² ]
where F = {f_i}, i = 1 … n, is a size-n set of m-ary features, and C = {c_j}, j = 1 … p, is a set of p categories. The term p(f_ik) designates the marginal probability that feature f_i takes on value k, and the term p(f_ik | c_j) designates the category-conditional probability that feature f_i takes on value k given that the object in question belongs to category c_j.
The motivation and development of this expression for category utility, and the role of the multiplicand p(c_j) as a crude overfitting control, is given in the above sources. Loosely speaking, the term Σ_j p(c_j) Σ_i Σ_k p(f_ik | c_j)² is the expected number of attribute values that can be correctly guessed by an observer using a probability-matching strategy together with knowledge of the category labels, while Σ_i Σ_k p(f_ik)² is the expected number of attribute values that can be correctly guessed by an observer using the same strategy but without any knowledge of the category labels. Their difference therefore reflects the relative advantage accruing to the observer by having knowledge of the category structure.
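A minimal computational sketch of the probabilistic definition as reconstructed above (the function name and the toy data are illustrative assumptions; the 1/|C| normalization used in some presentations, such as COBWEB, is omitted):

from collections import Counter

def category_utility(rows):
    # rows: list of (category, {feature: value}) pairs
    n = len(rows)
    cats = Counter(c for c, _ in rows)
    feats = list(rows[0][1])
    # sum of squared marginal probabilities p(f_ik)^2 over features and values
    base = sum((cnt / n) ** 2
               for f in feats
               for cnt in Counter(r[f] for _, r in rows).values())
    cu = 0.0
    for c, nc in cats.items():
        # sum of squared category-conditional probabilities p(f_ik|c)^2
        within = sum((cnt / nc) ** 2
                     for f in feats
                     for cnt in Counter(r[f] for cc, r in rows if cc == c).values())
        cu += (nc / n) * (within - base)
    return cu

toy = [('bird', {'flies': 'yes', 'legs': 2}),
       ('bird', {'flies': 'yes', 'legs': 2}),
       ('dog',  {'flies': 'no',  'legs': 4})]
print(category_utility(toy))  # positive: the labels help predict the features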
Information-theoretic definition of category utility
The information-theoretic definition of category utility for a set of entities with size-n binary feature set F = {f_i}, i = 1 … n, and a binary category C = {c, c̄} is given as follows:
CU(C, F) = [ p(c) Σ_i p(f_i | c) log p(f_i | c) + p(c̄) Σ_i p(f_i | c̄) log p(f_i | c̄) ] − Σ_i p(f_i) log p(f_i)
where p(c) is the prior probability of an entity belonging to the positive category c (in the absence of any feature information), p(f_i | c) is the conditional probability of an entity having feature f_i given that the entity belongs to category c, p(f_i | c̄) is likewise the conditional probability of an entity having feature f_i given that the entity belongs to category c̄, and p(f_i) is the prior probability of an entity possessing feature f_i (in the absence of any category information).
The intuition behind the above expression is as follows: The first term in the brackets represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category c. Similarly, the second term represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category c̄. The sum of these two terms in the brackets is therefore the weighted average of these two costs. The final term represents the cost (in bits) of optimally encoding (or transmitting) feature information when no category information is available. The value of the category utility will, in the above formulation, be non-negative.
Category utility and mutual information
Gluck and Corter (1985) and Corter and Gluck (1992) mention that the category utility is equivalent to the mutual information. Here is a simple demonstration of the nature of this equivalence. Assume a set of entities each having the same n features, i.e., feature set F = {f_i}, with each feature variable having cardinality m. That is, each feature has the capacity to adopt any of m distinct values (which need not be ordered; all variables can be nominal); for the special case m = 2 these features would be considered binary, but more generally, for any m, the features are simply m-ary. For the purposes of this demonstration, without loss of generality, feature set F can be replaced with a single aggregate variable F_a that has cardinality m^n, and adopts a unique value corresponding to each feature combination in the Cartesian product f_1 × f_2 × ⋯ × f_n. (Ordinality does not matter, because the mutual information is not sensitive to ordinality.) In what follows, a term such as p(F_a = v) or simply p(v) refers to the probability with which F_a adopts the particular value v. (Using the aggregate feature variable replaces multiple summations, and simplifies the presentation to follow.)
For this demonstration, also assume a single category variable C, which has cardinality p. This is equivalent to a classification system in which there are p non-intersecting categories. In the special case of p = 2 there is the two-category case discussed above. From the definition of mutual information for discrete variables, the mutual information between the aggregate feature variable F_a and the category variable C is given by:
I(F_a; C) = Σ_v Σ_c p(v, c) log [ p(v, c) / (p(v) p(c)) ]
where p(v) is the prior probability of feature variable F_a adopting value v, p(c) is the marginal probability of category variable C adopting value c, and p(v, c) is the joint probability of variables F_a and C simultaneously adopting those respective values. In terms of the conditional probabilities this can be re-written (or defined) as
I(F_a; C) = Σ_v Σ_c p(v | c) p(c) log [ p(v | c) / p(v) ]
If the original definition of the category utility from above is rewritten in this notation,
This equation clearly has the same form as the equation above expressing the mutual information between the feature set and the category variable; the difference is that the sum in the category utility equation runs over the n independent binary variables f_i, whereas the sum in the mutual information runs over values of the single m^n-ary variable F_a. The two measures are actually equivalent then only when the features f_i are independent (and assuming that terms in the sum corresponding to the negative category c̄ are also added).
Insensitivity of category utility to ordinality
Like the mutual information, the category utility is not sensitive to any ordering in the feature or category variable values. That is, as far as the category utility is concerned, the category set {small,medium,large,jumbo} is not qualitatively different from the category set {desk,fish,tree,mop} since the formulation of the category utility does not account for any ordering of the class variable. Similarly, a feature variable adopting values {1,2,3,4,5} is not qualitatively different from a feature variable adopting values {fred,joe,bob,sue,elaine}. As far as the category utility or mutual information are concerned, all category and feature variables are nominal variables. For this reason, category utility does not reflect any gestalt aspects of "category goodness" that might be based on such ordering effects. One possible adjustment for this insensitivity to ordinality is given by the weighting scheme described in the article for mutual information.
Category "goodness": models and philosophy
This section provides some background on the origins of, and need for, formal measures of "category goodness" such as the category utility, and some of the history that lead to the development of this particular metric.
What makes a good category?
At least since the time of Aristotle there has been a tremendous fascination in philosophy with the nature of concepts and universals. What kind of entity is a concept such as "horse"? Such abstractions do not designate any particular individual in the world, and yet we can scarcely imagine being able to comprehend the world without their use. Does the concept "horse" therefore have an independent existence outside of the mind? If it does, then what is the locus of this independent existence? The question of locus was an important issue on which the classical schools of Plato and Aristotle famously differed. However, they remained in agreement that universals did indeed have a mind-independent existence. There was, therefore, always a fact to the matter about which concepts and universals exist in the world.
In the late Middle Ages (perhaps beginning with Occam, although Porphyry also makes a much earlier remark indicating a certain discomfort with the status quo), however, the certainty that existed on this issue began to erode, and it became acceptable among the so-called nominalists and empiricists to consider concepts and universals as strictly mental entities or conventions of language. On this view of concepts—that they are purely representational constructs—a new question then comes to the fore: "Why do we possess one set of concepts rather than another?" What makes one set of concepts "good" and another set of concepts "bad"? This is a question that modern philosophers, and subsequently machine learning theorists and cognitive scientists, have struggled with for many decades.
What purpose do concepts serve?
One approach to answering such questions is to investigate the "role" or "purpose" of concepts in cognition. Thus the answer to "What are concepts good for in the first place?" given by many theorists is that classification (conception) is a precursor to induction: By imposing a particular categorization on the universe, an organism gains the ability to deal with physically non-identical objects or situations in an identical fashion, thereby gaining substantial predictive leverage. As J.S. Mill puts it,
From this base, Mill reaches the following conclusion, which foreshadows much subsequent thinking about category goodness, including the notion of category utility:
One may compare this to the "category utility hypothesis" proposed by Corter and Gluck: "A category is useful to the extent that it can be expected to improve the ability of a person to accurately predict the features of instances of that category." Mill here seems to be suggesting that the best category structure is one in which object features (properties) are maximally informative about the object's class, and, simultaneously, the object class is maximally informative about the object's features. In other words, a useful classification scheme is one in which category knowledge can be used to accurately infer object properties, and property knowledge can be used to accurately infer object classes. One may also compare this idea to Aristotle's criterion of counter-predication for definitional predicates, as well as to the notion of concepts described in formal concept analysis.
Attempts at formalization
A variety of different measures have been suggested with an aim of formally capturing this notion of "category goodness," the best known of which is probably the "cue validity". Cue validity of a feature f with respect to category c is defined as the conditional probability of the category given the feature, p(c|f), or as the deviation of the conditional probability from the category base rate, p(c|f) − p(c). Clearly, these measures quantify only inference from feature to category (i.e., cue validity), but not from category to feature, i.e., the category validity p(f|c). Also, while the cue validity was originally intended to account for the demonstrable appearance of basic categories in human cognition—categories of a particular level of generality that are evidently preferred by human learners—a number of major flaws in the cue validity quickly emerged in this regard.
One attempt to address both problems by simultaneously maximizing both feature validity and category validity was made in defining the "collocation index" as the product p(c|f) p(f|c), but this construction was fairly ad hoc. The category utility was introduced as a more sophisticated refinement of the cue validity, which attempts to more rigorously quantify the full inferential power of a class structure. As shown above, on a certain view the category utility is equivalent to the mutual information between the feature variable and the category variable. It has been suggested that categories having the greatest overall category utility are not only those "best" in a normative sense, but also those human learners prefer to use, e.g., "basic" categories. Other related measures of category goodness are "cohesion" and "salience".
Applications
Category utility is used as the category evaluation measure in the popular conceptual clustering algorithm called COBWEB.
See also
Abstraction
Concept learning
Universals
Unsupervised learning
References
Category utility
Category utility | Category utility | Engineering | 2,408 |
9,259,009 | https://en.wikipedia.org/wiki/Ilya%20Lifshitz | Ilya Mikhailovich Lifshitz (, ; January 13, 1917 – October 23, 1982) was a leading Soviet theoretical physicist, brother of Evgeny Lifshitz. He is known for his works in solid-state physics, electron theory of metals, disordered systems, and the theory of polymers.
Work
Ilya Lifshitz was born into a Ukrainian Jewish family in Kharkov, Kharkov Governorate, Russian Empire (now Kharkiv, Ukraine). Together with Arnold Kosevich, in 1954 Lifshitz established the connection between the oscillation of magnetic characteristics of metals and the form of an electronic surface of Fermi (Lifshitz–Kosevich formula) from de Haas–van Alphen experiments.
Lifshitz was one of the founders of the theory of disordered systems. He introduced some of the basic notions, such as self-averaging, and discovered what is now called Lifshitz tails and Lifshitz singularity.
In perturbation theory, Lifshitz introduced the notion of spectral shift function, which was later developed by Mark Krein.
A phase transition involving topological changes of the material's Fermi surface is called a Lifshitz phase transition.
Starting from the late 1960s, Lifshitz started considering problems of statistical physics of polymers. Together with his students Alexander Yu. Grosberg and Alexei R. Khokhlov, Lifshitz proposed a theory of coil-to-globule transition in homopolymers and derived the formula for the conformational entropy of a polymer chain, that is referred to as the Lifshitz entropy.
References
External links
Page at KPI
Moscow university site
1917 births
1982 deaths
Scientists from Kharkiv
People from Kharkov Governorate
Academic staff of the National University of Kharkiv
Foreign associates of the National Academy of Sciences
Full Members of the USSR Academy of Sciences
Kharkiv Polytechnic Institute alumni
National University of Kharkiv alumni
Recipients of the Lenin Prize
Recipients of the Order of the Red Banner of Labour
Theoretical physicists
Jewish physicists
Soviet Jews
Soviet physicists
Burials at Kuntsevo Cemetery | Ilya Lifshitz | Physics | 449 |
3,622,944 | https://en.wikipedia.org/wiki/Strace | strace is a diagnostic, debugging and instructional userspace utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state. The operation of strace is made possible by the kernel feature known as ptrace.
Some Unix-like systems provide other diagnostic tools similar to strace, such as truss.
History
Strace was originally written for SunOS by Paul Kranenburg in 1991, according to its copyright notice, and published early in 1992, in volume three of comp.sources.sun. The initial README file contained the following:
strace is a system call tracer for Sun(tm) systems much like the Sun supplied program trace. strace is a useful utility to sort of debug programs for which no source is available which unfortunately includes almost all of the Sun supplied system software.
Later, Branko Lankester ported this version to Linux, releasing his version in November 1992 with the second release following in 1993. Richard Sladkey combined these separate versions of strace in 1993, and ported the program to SVR4 and Solaris in 1994, resulting in strace 3.0 that was announced in comp.sources.misc in mid-1994.
Beginning in 1996, strace was maintained by Wichert Akkerman. During his tenure, strace development migrated to CVS; ports to FreeBSD and many architectures on Linux (including ARM, IA-64, MIPS, PA-RISC, PowerPC, s390, SPARC) were introduced. In 2002, the burden of strace maintainership was transferred to Roland McGrath. Since then, strace gained support for several new Linux architectures (AMD64, s390x, SuperH), bi-architecture support for some of them, and received numerous additions and improvements in syscalls decoders on Linux; strace development migrated to git during that period. Since 2009, strace is actively maintained by Dmitry Levin. strace gained support for AArch64, ARC, AVR32, Blackfin, Meta, Nios II, OpenRISC 1000, RISC-V, Tile/TileGx, Xtensa architectures since that time.
The last version of strace that had some (evidently dead) code for non-Linux operating systems was 4.6, released in March 2011. In strace version 4.7, released in May 2012, all non-Linux code had been removed; since strace 4.13, the project follows Linux kernel's release schedule, and as of version 5.0, it follows Linux's versioning scheme as well.
In 2012 strace also gained support for path tracing and file descriptor path decoding. In August 2014, strace 4.9 was released, where support for stack traces printing was added. In December 2016, syscall fault injection feature was implemented.
Version history
Usage and features
The most common use is to start a program using strace, which prints a list of system calls made by the program. This is useful if the program continually crashes, or does not behave as expected; for example using strace may reveal that the program is attempting to access a file which does not exist or cannot be read.
An alternative application is to use the -p flag to attach to a running process. This is useful if a process has stopped responding, and might reveal, for example, that the process is blocking whilst attempting to make a network connection.
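For example, strace -p 2894 (where 2894 is a hypothetical process ID) attaches to that running process and begins printing its system calls as they occur.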
Among other features, strace allows the following:
Specifying a filter of syscall names that should be traced (via the -e trace= option): by name, like -e trace=open; using one of the predefined groups, like %file or %process; or (since strace 4.17) using regular expression syntax, like -e trace=/clock_.*.
Specifying a list of paths to be traced (-P /etc/ld.so.cache, for example).
Specifying a list of file descriptors whose I/O should be dumped (-e read= and -e write= options).
Counting syscall execution time and count (-T, -c, -C, and -w options; -U option enables printing of additional information, like minimum and maximum syscall execution time).
Printing relative or absolute time stamps (-t and -r options).
Tampering with the syscalls being executed (-e inject=syscall specification:tampering specification option): modifying return (:retval=; since strace 4.16) and error code (:error=; since strace 4.15) of the specified syscalls, inject signals (:signal=; since strace 4.16), delays (:delay_enter= and :delay_exit=; since strace 4.22), and modify data pointed by syscall arguments (:poke_enter= and :poke_exit=; since strace 5.11) upon their execution.
Extracting information about file descriptors (including sockets, -y option; -yy option provides some additional information, like endpoint addresses for sockets, paths and device major/minor numbers for files).
Printing stack traces, including (since strace 4.21) symbol demangling (-k option).
Filtering by syscall return status (-e status= option; since strace 5.2).
Perform translation of thread, process, process group, and session IDs appearing in the trace into strace's PID namespace (--pidns-translation option; since strace 5.9).
Decoding SELinux context information associated with processes, files, and descriptors (--secontext option; since strace 5.12).
strace supports decoding of arguments of some classes of ioctl commands, such as BTRFS_*, V4L2_*, DM_*, NSFS_*, MEM*, EVIO*, KVM_*, and several others; it also supports decoding of various netlink protocols.
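For example, a command line such as strace -c -e trace=%file ls (an illustrative invocation, not from the source) would trace only the file-related system calls made by ls and print a summary count table on exit.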
As strace only details system calls, it cannot be used to detect as many problems as a code debugger such as GNU Debugger (gdb). It is, however, easier to use than a code debugger, and is a very useful tool for system administrators. It is also used by researchers to generate system call traces for later system call replay.
Examples
The following is an example of typical output of the strace command:
user@server:~$ strace ls
...
open(".", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY|O_CLOEXEC) = 3
fstat64(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
fcntl64(3, F_GETFD) = 0x1 (flags FD_CLOEXEC)
getdents64(3, /* 18 entries */, 4096) = 496
getdents64(3, /* 0 entries */, 4096) = 0
close(3) = 0
fstat64(1, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f2c000
write(1, "autofs\nbackups\ncache\nflexlm\ngames"..., 86autofsA
The above fragment is only a small part of the output of strace when run on the 'ls' command. It shows that the current working directory is opened, inspected and its contents retrieved. The resulting list of file names is written to standard output.
Similar tools
Different operating systems feature other similar or related instrumentation tools, offering similar or more advanced features; some of the tools (although using the same or a similar name) may use completely different work mechanisms, resulting in different feature sets or results. Such tools include the following:
Linux has ltrace that can trace library and system calls, xtrace that can trace X Window System programs, SystemTap, perf, and trace-cmd and KernelShark that extend ftrace.
AIX provides the command
HP-UX offers the command
Solaris / Illumos has truss and DTrace
UnixWare provides the command
FreeBSD provides the command, ktrace and DTrace
NetBSD provides ktrace and DTrace
OpenBSD uses ktrace and kdump
macOS provides ktrace (10.4 and earlier), DTrace (from Solaris) and associated dtruss in 10.5 and later.
Microsoft Windows has a similar utility called StraceNT, written by Pankaj Garg, and a similar GUI-based utility called Process Monitor, developed by Sysinternals.
See also
gdb
List of Unix commands
lsof
Notes
References
External links
strace project page
Manual page
OS Reviews article on strace
"System Call Tracing with strace", a talk with an overview of strace features and usage, given by Michael Kerrisk at NDC TechTown 2018
"Modern strace" (source), a talk with an overview of strace features, given by Dmitry Levin at DevConf.cz 2019
Command-line software
Debuggers
Free software programmed in C
Software using the GNU Lesser General Public License
Unix programming tools | Strace | Technology | 2,038 |
66,574,331 | https://en.wikipedia.org/wiki/Bellman%20filter | The Bellman filter is an algorithm that estimates the value sequence of hidden states in a state-space model. It is a generalization of the Kalman filter, allowing for nonlinearity in both the state and observation equations. The principle behind the Bellman filter is an approximation of the maximum a posteriori estimator, which makes it robust to heavy-tailed noise. It is in general a very fast method, since at each iteration only the very last state value is estimated. The algorithm owes its name to the Bellman equation, which plays a central role in the derivation of the algorithm.
References
Control theory
Nonlinear filters
Signal estimation | Bellman filter | Mathematics | 129 |
56,321,751 | https://en.wikipedia.org/wiki/Martha%20Austin%20Phelps | Martha Austin Phelps was an American chemist who conducted research in measuring metal concentrations and developed several analytical protocols to do so. She also worked on the early development of ester synthesis. In between conducting research, she worked as a school teacher teaching chemistry and physics. She finished her career as an activist in women's academic clubs. Her chemistry career lasted approximately 10 years, resulting in 15 publications, which set her apart as one of the most skilled female chemists within the first generation of women chemists in the United States.
Biography
Martha was born in Georgia, Vermont on February 13, 1870. She grew up in Easthampton, Massachusetts where she graduated high school. She earned her B.S. degree from Smith College in 1892. Prior to enrolling in graduate school at Yale University, she taught as a science teacher in New York and Massachusetts. She started graduate school in 1896 and graduated with her Ph.D. in chemistry in 1898. After completing her Ph.D., she continued working for her Ph.D. advisor, Frank Austin Gooch, in the Kent Chemistry Laboratory for one year and then transferred to the Rhode Island Experiment Station where she worked as an assistant chemist in 1900. From 1901-1904, she went back to being a science teacher and taught chemistry and physics at Wilson College. In 1904, she also married Issac King Phelps, who was working as a chemistry instructor at Yale and George Washington University at the time. In 1908, the couple moved to Washington D.C. where Martha went to work for the Bureau of Standards as a chemistry researcher and Issac went to work for the U.S. Department of Agriculture. Martha ceased working as a researcher in 1909. Martha was one of the first female scientists to be employed by the Bureau of Standards. After 1909, she began her activist work in women's education clubs until retirement. Martha and Issac lived in Washington, D.C. until 1923, when they moved to New Haven, Connecticut where Martha eventually died on March 15, 1933.
Work in Science
Martha spent much of her scientific career interested in the quantitative analysis of various elements. Specifically, she studied the composition of ammonium phosphates in magnesium, zinc, and cadmium. Her study of ammonium phosphates was later used as a tool in her later analytical work. Martha spent much of her research career developing several analytical procedures for metal estimations and separations, with a special focus on gravimetric estimations of manganese and magnesium. Her other work included the analysis of prehistoric bronzes without major disruption to the metal composition and the study of organic esters.
While in graduate school at Yale, she worked for Frank Austin Gooch in the Kent Chemical Laboratory. Throughout her time in graduate school, she published nine papers in the American Journal of Science and in Zeitschrift für anorganische und allgemeine Chemie for her work in analytical chemistry. Four of the nine papers she published individually and the other five in collaboration with Gooch. After her marriage to Issac King Phelps in 1904, she collaborated with her husband to study ester synthesis. The work resulted in six more publications. In total, she published fifteen papers throughout the span of her ten year career as a chemist.
Martha's work is primarily recognized for her development of a protocol estimating arsenic as ammonium magnesium arsenate. The paper detailing the procedure was published in 1900. The protocol became a standard method of arsenic estimation in quantitative analysis studies and was written into textbooks for several years.
Awards
From 1897-1898, she was granted a graduate scholarship to pursue her PhD. Upon completion of her PhD, she was granted one of Yale's University Fellowships to continue work in Gooch's laboratory until 1899.
References
People from Easthampton, Massachusetts
People from Georgia, Vermont
Smith College alumni
Yale University alumni
Wilson College (Pennsylvania) faculty
Scientists from New Haven, Connecticut
Sportspeople from Washington, D.C.
1870 births
1933 deaths
Analytical chemists
American chemists
Chemists from Vermont | Martha Austin Phelps | Chemistry | 804 |
18,774,046 | https://en.wikipedia.org/wiki/Wireless%20game%20adapter | A Wireless game adapter is a device that, once connected to a video game console or handheld, enables internet and\or multiplayer access.
Consoles
Xbox 360
The Wireless Network Adapter for the Xbox 360 is a device that is plugged into the system's rear USB port, allowing for access to the internet via a wireless router.
Wii
While the Wii has built-in wireless capabilities, it is not compatible with every wireless router. For this reason Nintendo released the Nintendo Wi-Fi USB Connector, a peripheral that allows a Wii to connect wirelessly through an internet-enabled computer, whether that computer itself uses a wired or wireless connection.
Handhelds
Game Boy Color
The Mobile GB Adaptor was a Japan-only device that attached to the EXT port on a Game Boy Color. The other end was connected to a cell phone, allowing for access to the internet. It was primarily used for trading and battling on Pokémon Crystal.
Game Boy Advance
The Game Boy Advance and its two redesigns, the Game Boy Advance SP and the Game Boy Micro all had wireless adapters that were meant to replace the link cable used for local multiplayer. It is not compatible with any game released prior to the adapter's release, and afterwards was only compatible with a select few games.
Nintendo DS
Much like the Wii, the Nintendo DS has built-in wireless capabilities and is similarly not compatible with all wireless routers. Another hindrance is that the DS does not support certain levels of wi-fi encryption (e.g. WPA), thus necessitating the Nintendo Wi-Fi USB Connector.
References
Wireless networking | Wireless game adapter | Technology,Engineering | 317 |
17,655,150 | https://en.wikipedia.org/wiki/Metallophilic%20interaction | In chemistry, a metallophilic interaction is defined as a type of non-covalent attraction between heavy metal atoms. The atoms are often within Van der Waals distance of each other and are about as strong as hydrogen bonds. The effect can be intramolecular or intermolecular. Intermolecular metallophilic interactions can lead to formation of supramolecular assemblies whose properties vary with the choice of element and oxidation states of the metal atoms and the attachment of various ligands to them.
The nature of such interactions remains the subject of vigorous debate with recent studies emphasizing that the metallophilic interaction is repulsive due to strong metal-metal Pauli exclusion principle repulsion.
Nature of the interaction
Previously, this type of interaction was considered to be enhanced by relativistic effects. A major contributor is electron correlation of the closed-shell components, which is unusual because closed-shell atoms generally have negligible interaction with one another at the distances observed for the metal atoms. As a trend, the effect becomes larger moving down a periodic table group, for example, from copper to silver to gold, in keeping with increased relativistic effects. Observations and theory find that, on average, 28% of the binding energy in gold–gold interactions can be attributed to relativistic expansion of the gold d orbitals.
Recently, the relativistic effect was found to enhance the intermolecular M-M Pauli repulsion of the closed-shell organometallic complexes. At close M–M distances, metallophilicity is repulsive in nature due to strong M–M Pauli repulsion. The relativistic effect facilitates (n + 1)s-nd and (n + 1)p-nd orbital hybridization of the metal atom, where (n + 1)s-nd hybridization induces strong M–M Pauli repulsion and repulsive M–M orbital interaction, and (n + 1)p-nd hybridization suppresses M–M Pauli repulsion. This model is validated by both DFT (density functional theory) and high-level CCSD(T) (coupled-cluster singles and doubles with perturbative triples) computations.
An important and exploitable property of aurophilic interactions relevant to their supramolecular chemistry is that while both inter- and intramolecular interactions are possible, intermolecular aurophilic linkages are comparatively weak and the gold–gold bonds are easily broken by solvation; most complexes that exhibit intramolecular aurophilic interactions retain such moieties in solution. One way of probing the strength of particular intermolecular metallophilic interactions is to use a competing solvent and examine how it interferes with supramolecular properties. For example, adding various solvents to gold(I) nanoparticles whose luminescence is attributed to Au–Au interactions will have decreasing luminescence as the solvent disrupts the metallophilic interactions.
Applications
The polymerization of metal atoms can lead to the formation of long chains or nucleated clusters. Gold nanoparticles formed from chains of gold(I) complexes linked by aurophilic interactions often give rise to intense luminescence in the visible region of the spectrum.
Chains of Pd(II)–Pd(I) and Pt(II)–Pd(I) complexes have been explored as potential molecular wires.
See also
Metal aromaticity
References
Chemical bonding | Metallophilic interaction | Physics,Chemistry,Materials_science | 733 |
21,368,973 | https://en.wikipedia.org/wiki/Hahn%20series | In mathematics, Hahn series (sometimes also known as Hahn–Mal'cev–Neumann series) are a type of formal infinite series. They are a generalization of Puiseux series (themselves a generalization of formal power series) and were first introduced by Hans Hahn in 1907 (and then further generalized by Anatoly Maltsev and Bernhard Neumann to a non-commutative setting). They allow for arbitrary exponents of the indeterminate so long as the set supporting them forms a well-ordered subset of the value group (typically ℚ or ℝ). Hahn series were first introduced, as groups, in the course of the proof of the Hahn embedding theorem and then studied by him in relation to Hilbert's second problem.
Formulation
The field of Hahn series K[[T^Γ]] (in the indeterminate T) over a field K and with value group Γ (an ordered group) is the set of formal expressions of the form f = Σ_{e∈Γ} c_e T^e with c_e ∈ K such that the support of f, {e ∈ Γ : c_e ≠ 0}, is well-ordered.
The sum and product of f = Σ_{e∈Γ} c_e T^e and g = Σ_{e∈Γ} d_e T^e are given by
f + g = Σ_{e∈Γ} (c_e + d_e) T^e
and
f g = Σ_{e∈Γ} ( Σ_{e′+e″=e} c_{e′} d_{e″} ) T^e
(in the latter, the inner sum is over pairs e′, e″ such that e′ + e″ = e, c_{e′} ≠ 0 and d_{e″} ≠ 0, and is finite because a well-ordered set cannot contain an infinite decreasing sequence).
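The convolution-style product above can be illustrated for finitely supported series (a minimal sketch; genuine Hahn series allow infinite well-ordered support, which a finite program can only truncate, and the Fraction exponents below are merely an example):

from collections import defaultdict
from fractions import Fraction

def hahn_mul(f, g):
    # f, g: {exponent: coefficient} dicts with finite support
    h = defaultdict(int)
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] += c1 * c2   # exponents add, coefficients multiply
    return {e: c for e, c in h.items() if c != 0}

f = {Fraction(1, 2): 1, Fraction(2, 3): 3}   # T^(1/2) + 3*T^(2/3)
g = {Fraction(0, 1): 2, Fraction(1, 3): -1}  # 2 - T^(1/3)
print(hahn_mul(f, g))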
For example, f = Σ_{n≥1} T^{−1/pⁿ} (for a prime p) is a Hahn series (over any field) because the set of rationals
{−1/pⁿ : n ≥ 1} is well-ordered; it is not a Puiseux series because the denominators in the exponents are unbounded. (And if the base field K has characteristic p, then this Hahn series satisfies the equation f^p − f = T^{−1}, so it is algebraic over K(T).)
Properties
Properties of the valued field
The valuation v(f) of a non-zero Hahn series f = Σ_{e∈Γ} c_e T^e
is defined as the smallest e such that c_e ≠ 0 (in other words, the smallest element of the support of f): this makes K[[T^Γ]] into a spherically complete valued field with value group Γ and residue field K (justifying a posteriori the terminology). In fact, if K has characteristic zero, then K[[T^Γ]] is up to (non-unique) isomorphism the only spherically complete valued field with residue field K and value group Γ.
The valuation v defines a topology on K[[T^Γ]]. If Γ ⊆ ℝ, then v corresponds to an ultrametric absolute value |f| = exp(−v(f)), with respect to which K[[T^Γ]] is a complete metric space. However, unlike in the case of formal Laurent series or Puiseux series, the formal sums used in defining the elements of the field do not converge: in the case of Σ_{n≥1} T^{−1/pⁿ} for example, the absolute values of the terms tend to 1 (because their valuations tend to 0), so the series is not convergent (such series are sometimes known as "pseudo-convergent").
Algebraic properties
If K is algebraically closed (but not necessarily of characteristic zero) and Γ is divisible, then K[[T^Γ]] is algebraically closed. Thus, the algebraic closure of K((T)) is contained in K̄[[T^ℚ]], where K̄ is the algebraic closure of K (when K is of characteristic zero, it is exactly the field of Puiseux series): in fact, it is possible to give a somewhat analogous description of the algebraic closure of K((T)) in positive characteristic as a subset of K̄[[T^ℚ]].
If K is an ordered field then K[[T^Γ]] is totally ordered by making the indeterminate T infinitesimal (greater than 0 but less than any positive element of K) or, equivalently, by using the lexicographic order on the coefficients of the series. If K is real-closed and Γ is divisible then K[[T^Γ]] is itself real-closed. This fact can be used to analyse (or even construct) the field of surreal numbers (which is isomorphic, as an ordered field, to the field of Hahn series with real coefficients and value group the surreal numbers themselves).
If κ is an infinite regular cardinal, one can consider the subset of K[[T^Γ]] consisting of series whose support set has cardinality (strictly) less than κ: it turns out that this is also a field, with much the same algebraic closedness properties as the full K[[T^Γ]]: e.g., it is algebraically closed or real closed when K is so and Γ is divisible.
Summable families
One can define a notion of summable families in K[[T^Γ]]. If I is a set and (f_i)_{i∈I} is a family of Hahn series f_i ∈ K[[T^Γ]], then we say that (f_i) is summable if the set ⋃_{i∈I} supp(f_i) ⊆ Γ is well-ordered, and each set {i ∈ I : e ∈ supp(f_i)} for e ∈ Γ is finite.
We may then define the sum as the Hahn series Σ_{i∈I} f_i := Σ_{e∈Γ} ( Σ_{i∈I} c_{i,e} ) T^e, where c_{i,e} denotes the coefficient of T^e in f_i.
If (f_i)_{i∈I} and (g_i)_{i∈I} are summable, then so are the families (f_i + g_i)_{i∈I} and (f_i g_j)_{(i,j)∈I×I}, and we have
Σ_{i} (f_i + g_i) = Σ_{i} f_i + Σ_{i} g_i and Σ_{(i,j)} f_i g_j = (Σ_{i} f_i)(Σ_{j} g_j).
This notion of summable family does not correspond to the notion of convergence in the valuation topology on K[[T^Γ]]. For instance, a family of Hahn series can be summable even though the corresponding sequence of partial sums does not converge.
Evaluating analytic functions
Let a be a real number, and let 𝒜 denote the ring of real-valued functions which are analytic on a neighborhood of a.
If K contains ℝ, then we can evaluate every element φ of 𝒜 at every element of K[[T^Γ]] of the form a + h, where the valuation of h is strictly positive.
Indeed, the family (φ⁽ⁿ⁾(a) hⁿ / n!)_{n≥0} is always summable, so we can define φ(a + h) = Σ_{n≥0} φ⁽ⁿ⁾(a) hⁿ / n!. This defines a ring homomorphism 𝒜 → K[[T^Γ]].
Hahn–Witt series
The construction of Hahn series can be combined with Witt vectors (at least over a perfect field) to form twisted Hahn series or Hahn–Witt series: for example, over a finite field K of characteristic p (or their algebraic closure), the field of Hahn–Witt series with value group Γ (containing the integers) would be the set of formal sums Σ_{e∈Γ} [c_e] pᵉ, where now the [c_e] are Teichmüller representatives (of the elements of K) which are multiplied and added in the same way as in the case of ordinary Witt vectors (which is obtained when Γ is the group of integers). When Γ is the group of rationals or reals and K is the algebraic closure of the finite field with p elements, this construction gives a (ultra)metrically complete algebraically closed field containing the p-adics, hence a more or less explicit description of the field or its spherical completion.
Examples
The field of formal Laurent series over $K$ can be described as $K[[T^{\mathbb{Z}}]]$.
The field of surreal numbers can be regarded as a field of Hahn series with real coefficients and value group the surreal numbers themselves.
The Levi-Civita field can be regarded as a subfield of $\mathbb{R}[[T^{\mathbb{Q}}]]$, with the additional imposition that the support of each series be a left-finite set: for every rational number $q$, the set of exponents less than $q$ is finite.
The field of transseries $\mathbb{T}$ is a directed union of Hahn fields (and is an extension of the Levi-Civita field). The construction of $\mathbb{T}$ resembles (but is not literally) an iterated Hahn series construction $\bigcup_n \mathbb{R}[[x^{\Gamma_n}]]$, where $\Gamma_{n+1} = \mathbb{R}[[x^{\Gamma_n}]]$.
See also
Rational series
Notes
References
(reprinted in: )
Commutative algebra
Mathematical series | Hahn series | Mathematics | 1,292 |
57,578,128 | https://en.wikipedia.org/wiki/Courage%20to%20Care | Courage to Care (also known as B'nai B'rith Courage to Care) is an organization based in Australia founded by the Jewish service organization B'nai B'rith. The group's mission is to prevent discrimination and bullying through educational programs.
The organisation's programme is student-centred, focused exclusively on the stories of people who rescued Jews during the Holocaust. The programme's aim is to convey community tolerance and living in harmony.
Courage to Care has three divisions, one based in Sydney, New South Wales (covering the states of New South Wales and Queensland), one in Melbourne, Victoria, and one based in Perth, Western Australia.
Activities
Courage to Care operates a traveling exhibition featuring stories of Holocaust survivors and those who rescued them.
Other activities include programs and workshops for schools and workplaces.
In 2016, the program was delivered for new recruits at the Queensland Police Service.
See also
Courage to Care Award
The Courage to Care (film)
Ernie Friedlander - anti-racism activist associated with B'nai B'rith and Courage to Care
Moving Forward Together
Alan Gold (author) - former Vice-President of Courage to Care
Anti-Defamation Commission
References
External links
Courage to Care NSW / QLD website
Courage to Care Victoria website
Courage to Care Perth website
Educational organisations based in Australia
Jewish organisations based in Australia
The Holocaust
Organisations based in Sydney
Organisations based in Melbourne
Organisations based in Perth, Western Australia | Courage to Care | Biology | 289 |
15,070,671 | https://en.wikipedia.org/wiki/Molecular%20and%20Cellular%20Biology | Molecular and Cellular Biology is a biweekly peer-reviewed scientific journal covering all aspects of molecular and cellular biology. It is published by the American Society for Microbiology and the editor-in-chief is Peter Tontonoz (University of California, Los Angeles). It was established in 1981. The h-index (1981-2021) is 338.
Abstracting and indexing
The journal is abstracted and indexed in major bibliographic databases.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.069.
References
External links
Molecular and cellular biology journals
Delayed open access journals
Academic journals established in 1981
English-language journals
Biweekly journals
American Society for Microbiology academic journals | Molecular and Cellular Biology | Chemistry | 140 |
77,372,003 | https://en.wikipedia.org/wiki/Terzan%20Catalogue | The Terzan Catalogue (abbreviation: Ter) is an astronomical catalogue of globular clusters.
Overview
The Terzan Catalogue consists of 11 globular clusters discovered by Agop Terzan using infrared observations made at Lyon Observatory in France during the 1960s and early 1970s. Most of the globular clusters are located in the constellations of Sagittarius and Scorpius, near the Milky Way's Galactic Center; Terzan 7 and Terzan 8 are most likely part of the Sagittarius Dwarf Elliptical Galaxy. Although all of the Terzan Catalogue objects were originally presumed to be globular clusters, there have been recent suggestions (by Mikkel Steine and others) that some of them may in fact be open clusters.
Since the original Terzan 11 is a duplicate of Terzan 5, more recent versions of the catalogue have renamed the original Terzan 12 as Terzan 11.
The catalogue is based on scientific papers published by Agop Terzan in 1966 (for Terzan 1), 1967 (for Terzan 2), 1968 (for Terzan 3 to 8), and 1971 (for Terzan 9 to 12).
List of clusters
Under the revised numbering described above, the catalogue lists eleven clusters, Terzan 1 through Terzan 11.
See also
List of astronomical catalogues
List of globular clusters
References
Astronomical catalogues | Terzan Catalogue | Astronomy | 266 |
23,435,460 | https://en.wikipedia.org/wiki/C10H8N2 | The molecular formula C10H8N2 (molar mass: 156.18 g/mol, exact mass: 156.0687 u) may refer to:
Bipyridines
2,2'-Bipyridine
4,4'-Bipyridine | C10H8N2 | Chemistry | 74 |
36,806,787 | https://en.wikipedia.org/wiki/Omicron%20Puppis | Omicron Puppis (ο Puppis) is a candidate binary star system in the southern constellation of Puppis. It is visible to the naked eye, having a combined apparent visual magnitude of +4.48. Based upon an annual parallax shift of 2.30 mas as seen from Earth, it is located roughly 1,400 light years from the Sun.
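As a check, the quoted distance follows from inverting the parallax: d ≈ 1/π = 1/0.00230″ ≈ 435 pc, and with 1 pc ≈ 3.26 light years this gives about 1,400 light years, consistent with the figure above.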
This is a suspected close spectroscopic binary system. The spectrum varies with a periodicity of 28.9 days, and a helium emission component shows a radial velocity variation that matches the period. The properties indicate it may be a φ Per-like system with a Be star primary and a hot subdwarf companion of type sdO. If this is the case, then the pair have a circular orbit with a period that matches the variability. The close-orbiting pair may have undergone interaction in the past, leaving the subdwarf stripped down and the primary star spinning rapidly.
ο Puppis is slightly variable. The General Catalogue of Variable Stars lists it as a possible Be star with a magnitude range of 4.43 - 4.46. The International Variable Star Index classifies it as a Lambda Eridani variable.
Naming
The correct Bayer designation for ο Puppis has been debated. Lacaille assigned one Greek letter sequence for the bright stars of Argo Navis. These Lacaille designations are now shared across the three modern constellations of Carina, Puppis, and Vela so that (except for omicron) each Greek letter is found in only one of the three. However, ο (omicron) is now commonly used for two stars, one each in Vela and Puppis. In the Coelum Australe Stelliferum itself, this star is labelled (Latin) o Argus in puppi (Pouppe du Navire in the French edition), while ο Velorum is labelled ο (omicron) Argus (du Navire in the French edition). Some later authors state the reverse, that Lacaille actually assigned omicron to ο Puppis and Latin lower case 'o' to ο Velorum. Modern catalogs and atlases generally use omicron for both stars.
In Chinese, 弧矢 (Hú Shǐ), meaning Bow and Arrow, refers to an asterism consisting of ο Puppis, δ Canis Majoris, η Canis Majoris, HD 63032, HD 65456, k Puppis, ε Canis Majoris, κ Canis Majoris and π Puppis. Consequently, ο Puppis itself bears a Chinese name derived from this asterism.
References
B-type subgiants
Lambda Eridani variables
Be stars
Puppis, Omicron
Puppis
Durchmusterung objects
063462
038070
3045 | Omicron Puppis | Astronomy | 569 |
37,607,968 | https://en.wikipedia.org/wiki/Fricke%20involution | In mathematics, a Fricke involution is the involution of the modular curve X0(N) given by τ → –1/Nτ. It is named after Robert Fricke. The Fricke involution also acts on other objects associated with the modular curve, such as spaces of modular forms and the Jacobian J0(N) of the modular curve. The quotient of X0(N) by the Fricke involution is a curve called X0+(N), and for N prime this has genus zero only for a finite list of primes, called supersingular primes, which are the primes that divide the order of the Monster group.
See also
Atkin–Lehner involution
References
Modular forms | Fricke involution | Mathematics | 168 |
82,933 | https://en.wikipedia.org/wiki/Chloroform | Chloroform, or trichloromethane (often abbreviated as TCM), is an organochloride with the formula CHCl3 and a common solvent. It is a volatile, colorless, sweet-smelling, dense liquid produced on a large scale as a precursor to refrigerants and PTFE. Chloroform was once used as an inhalational anesthetic between the 19th century and the first half of the 20th century. It is miscible with many solvents but it is only very slightly soluble in water (only 8 g/L at 20 °C).
Structure and name
The molecule adopts a tetrahedral molecular geometry with C3v symmetry. The chloroform molecule can be viewed as a methane molecule with three hydrogen atoms replaced with three chlorine atoms, leaving a single hydrogen atom.
The name "chloroform" is a portmanteau of terchloride (tertiary chloride, a trichloride) and formyle, an obsolete name for the methylylidene radical (CH) derived from formic acid.
Natural occurrence
Many kinds of seaweed produce chloroform, and fungi are believed to produce chloroform in soil. Abiotic processes are also believed to contribute to natural chloroform productions in soils, although the mechanism is still unclear.
Chloroform is a volatile organic compound.
History
Chloroform was synthesized independently by several investigators:
Moldenhawer, a German pharmacist from Frankfurt an der Oder, appears to have produced chloroform in 1830 by mixing chlorinated lime with ethanol; however, he mistook it for Chloräther (chloric ether, 1,2-dichloroethane).
Samuel Guthrie, a U.S. physician from Sackets Harbor, New York, also appears to have produced chloroform in 1831 by reacting chlorinated lime with ethanol, and noted its anaesthetic properties; however, he also believed that he had prepared chloric ether.
Justus von Liebig carried out the alkaline cleavage of chloral. Liebig incorrectly stated the empirical formula of chloroform and named it "Chlorkohlenstoff" ("carbon chloride").
Eugène Soubeiran obtained the compound by the action of chlorine bleach on both ethanol and acetone.
In 1834, French chemist Jean-Baptiste Dumas determined chloroform's empirical formula and named it: "Es scheint mir also erweisen, dass die von mir analysirte Substanz, … zur Formel hat: C2H2Cl6." (Thus it seems to me to show that the substance I analyzed … has as [its empirical] formula: C2H2Cl6.). [Note: The coefficients of his empirical formula should be halved.] ... "Diess hat mich veranlasst diese Substanz mit dem Namen 'Chloroform' zu belegen." (This had caused me to impose the name "chloroform" upon this substance [i.e., formyl chloride or chloride of formic acid].)
In 1835, Dumas prepared the substance by alkaline cleavage of trichloroacetic acid.
In 1842, Robert Mortimer Glover in London discovered the anaesthetic qualities of chloroform on laboratory animals.
In 1847, Scottish obstetrician James Y. Simpson was the first to demonstrate the anaesthetic properties of chloroform, provided by local pharmacist William Flockhart of Duncan, Flockhart and company, in humans, and helped to popularize the drug for use in medicine.
By the 1850s, chloroform was being produced on a commercial basis. In Britain, about 750,000 doses a week were being produced by 1895, using the Liebig procedure, which retained its importance until the 1960s. Today, chloroform – along with dichloromethane – is prepared exclusively and on a massive scale by the chlorination of methane and chloromethane.
Production
Industrially, chloroform is produced by heating a mixture of chlorine and either methyl chloride (CH3Cl) or methane (CH4). At 400–500 °C, free radical halogenation occurs, converting these precursors to progressively more chlorinated compounds:

CH4 + Cl2 → CH3Cl + HCl
CH3Cl + Cl2 → CH2Cl2 + HCl
CH2Cl2 + Cl2 → CHCl3 + HCl

Chloroform undergoes further chlorination to yield carbon tetrachloride (CCl4):

CHCl3 + Cl2 → CCl4 + HCl

The output of this process is a mixture of the four chloromethanes: chloromethane, methylene chloride (dichloromethane), trichloromethane (chloroform), and tetrachloromethane (carbon tetrachloride). These can then be separated by distillation.
Chloroform may also be produced on a small scale via the haloform reaction between acetone and sodium hypochlorite:

3 NaClO + (CH3)2CO → CHCl3 + 2 NaOH + CH3COONa
Deuterochloroform
Deuterated chloroform (CDCl3) is an isotopologue of chloroform with a single deuterium atom, and is a common solvent used in NMR spectroscopy. Deuterochloroform is produced by the reaction of hexachloroacetone with heavy water. The haloform process is now obsolete for the production of ordinary chloroform. Deuterochloroform can also be prepared by reacting sodium deuteroxide with chloral hydrate.
Inadvertent formation of chloroform
The haloform reaction can also occur inadvertently in domestic settings. Sodium hypochlorite solution (chlorine bleach) mixed with common household liquids such as acetone, methyl ethyl ketone, ethanol, or isopropyl alcohol can produce some chloroform, in addition to other compounds, such as chloroacetone or dichloroacetone.
Uses
In terms of scale, the most important reaction of chloroform is with hydrogen fluoride to give monochlorodifluoromethane (HCFC-22), a precursor in the production of polytetrafluoroethylene (Teflon) and other fluoropolymers:

CHCl3 + 2 HF → CHClF2 + 2 HCl
The reaction is conducted in the presence of a catalytic amount of mixed antimony halides. Chlorodifluoromethane is then converted to tetrafluoroethylene, the main precursor of Teflon.
Solvent
The hydrogen attached to carbon in chloroform participates in hydrogen bonding, making it a good solvent for many materials.
Worldwide, chloroform is also used in pesticide formulations, as a solvent for lipids, rubber, alkaloids, waxes, gutta-percha, and resins, as a cleaning agent, as a grain fumigant, in fire extinguishers, and in the rubber industry. Deuterated chloroform (CDCl3) is a common solvent used in NMR spectroscopy.
Refrigerant
Chloroform is used as a precursor to make R-22 (chlorodifluoromethane). This is done by reacting it with a solution of hydrofluoric acid (HF) which fluorinates the molecule and releases hydrochloric acid as a byproduct. Before the Montreal Protocol was enforced, most of the chloroform produced in the United States was used in the production of chlorodifluoromethane. However, its production remains high, as it is a key precursor of PTFE.
Although chloroform has properties such as a low boiling point, and a low global warming potential of only 31 (compared to the 1760 of R-22), which are appealing properties for a refrigerant, there is little information to suggest that it has seen widespread use as a refrigerant in any consumer products.
Lewis acid
In solvents such as carbon tetrachloride and alkanes, chloroform hydrogen bonds to a variety of Lewis bases. CHCl3 is classified as a hard acid, and the ECW model lists its acid parameters as EA = 1.56 and CA = 0.44.
Reagent
As a reagent, chloroform serves as a source of the dichlorocarbene intermediate CCl2. It reacts with aqueous sodium hydroxide, usually in the presence of a phase transfer catalyst, to produce dichlorocarbene, CCl2. This reagent effects ortho-formylation of activated aromatic rings, such as phenols, producing aryl aldehydes in a reaction known as the Reimer–Tiemann reaction. Alternatively, the carbene can be trapped by an alkene to form a cyclopropane derivative. In the Kharasch addition, chloroform forms the free radical •CCl3, which adds to alkenes.
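The carbene-generating step can be summarized by the following simplified overall equation (the carbene is trapped in situ by the substrate):

CHCl3 + NaOH → CCl2 + NaCl + H2O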
Anaesthetic
Chloroform is a powerful general anesthetic, euphoriant, anxiolytic, and sedative when inhaled or ingested. The anaesthetic qualities of chloroform were first described in 1842 in a thesis by Robert Mortimer Glover, which won the Gold Medal of the Harveian Society for that year. Glover also undertook practical experiments on dogs to prove his theories, refined his theories, and presented them in his doctoral thesis at the University of Edinburgh in the summer of 1847, identifying anaesthetizing halogenous compounds as a "new order of poisonous substances".
The Scottish obstetrician James Young Simpson was one of those examiners required to read the thesis, but later claimed to have never read it and to have come to his own conclusions independently. Perkins-McVey, among others, have raised doubts about the credibility of Simpson's claim, noting that Simpson's publications on the subject in 1847 explicitly echo Glover's and, being one of the thesis examiners, Simpson was likely aware of the content of Glover's study, even if he skirted his duties as an examiner. In 1847 and 1848, Glover would pen a series of heated letters accusing Simpson of stealing his discovery, which had already earned Simpson considerable notoriety. Whatever the source of his inspiration, on 4 November 1847, Simpson argued that he had discovered the anaesthetic qualities of chloroform in humans. He and two colleagues entertained themselves by trying the effects of various substances, and thus revealed the potential for chloroform in medical procedures.
A few days later, during the course of a dental procedure in Edinburgh, Francis Brodie Imlach became the first person to use chloroform on a patient in a clinical context.
In May 1848, Robert Halliday Gunning made a presentation to the Medico-Chirurgical Society of Edinburgh following a series of laboratory experiments on rabbits that confirmed Glover's findings and also refuted Simpson's claims of originality. The laboratory experiments that proved the dangers of chloroform were largely ignored.
The use of chloroform during surgery expanded rapidly in Europe; for instance in the 1850s chloroform was used by the physician John Snow during the births of Queen Victoria's last two children Leopold and Beatrice. In the United States, chloroform began to replace ether as an anesthetic at the beginning of the 20th century; it was abandoned in favor of ether on discovery of its toxicity, especially its tendency to cause fatal cardiac arrhythmias analogous to what is now termed "sudden sniffer's death". Some people used chloroform as a recreational drug or to attempt suicide. One possible mechanism of action of chloroform is that it increases the movement of potassium ions through certain types of potassium channels in nerve cells. Chloroform could also be mixed with other anaesthetic agents such as ether to make C.E. mixture, or ether and alcohol to make A.C.E. mixture.
In 1848, Hannah Greener, a 15-year-old girl who was having an infected toenail removed, died after being given the anaesthetic. Her autopsy establishing the cause of death was undertaken by John Fife assisted by Robert Mortimer Glover. A number of physically fit patients died after inhaling it. In 1848, however, John Snow developed an inhaler that regulated the dosage and so successfully reduced the number of deaths.
The opponents and supporters of chloroform disagreed on the question of whether the medical complications were due to respiratory disturbance or whether chloroform had a specific effect on the heart. Between 1864 and 1910, numerous commissions in Britain studied chloroform but failed to come to any clear conclusions. It was only in 1911 that Levy proved in experiments with animals that chloroform can cause ventricular fibrillation. Despite this, between 1865 and 1920, chloroform was used in 80 to 95% of all narcoses performed in the UK and German-speaking countries. In Germany, comprehensive surveys of the fatality rate during anaesthesia were made by Gurlt between 1890 and 1897. At the same time in the UK the medical journal The Lancet carried out a questionnaire survey and compiled a report detailing numerous adverse reactions to anesthetics, including chloroform. In 1934, Killian gathered all the statistics compiled until then and found that the chances of suffering fatal complications under ether were between 1:14,000 and 1:28,000, whereas with chloroform the chances were between 1:3,000 and 1:6,000. The rise of gas anaesthesia using nitrous oxide, improved equipment for administering anesthetics, and the discovery of hexobarbital in 1932 led to the gradual decline of chloroform narcosis.
The latest reported anaesthetic use of chloroform in the Western world dates to 1987, when the last doctor who used it retired, about 140 years after its first use.
Criminal use
Chloroform has been used by criminals to knock out, daze, or murder victims. Joseph Harris was charged in 1894 with using chloroform to rob people. Serial killer H. H. Holmes used chloroform overdoses to kill his female victims. In September 1900, chloroform was implicated in the murder of the U.S. businessman William Marsh Rice. Chloroform was deemed a factor in the alleged murder of a woman in 1991, when she was asphyxiated while asleep. In 2002, 13-year-old Kacie Woody was sedated with chloroform when she was abducted by David Fuller and during the time that he had her, before he shot and killed her. In a 2007 plea bargain, a man confessed to using stun guns and chloroform to sexually assault minors.
The use of chloroform as an incapacitating agent has become widely recognized, bordering on cliché, through the adoption by crime fiction authors of plots involving criminals' use of chloroform-soaked rags to render victims unconscious. However, it is nearly impossible to incapacitate someone using chloroform in this way. It takes at least five minutes of inhalation of chloroform to render a person unconscious. Most criminal cases involving chloroform involve co-administration of another drug, such as alcohol or diazepam, or the victim being complicit in its administration. After a person has lost consciousness owing to chloroform inhalation, a continuous volume must be administered, and the chin must be supported to keep the tongue from obstructing the airway, a difficult procedure, typically requiring the skills of an anesthesiologist. In 1865, as a direct result of the criminal reputation chloroform had gained, the medical journal The Lancet offered a "permanent scientific reputation" to anyone who could demonstrate "instantaneous insensibility", i.e. loss of consciousness, using chloroform.
Safety
Exposure
Chloroform is formed as a by-product of water chlorination, along with a range of other disinfection by-products, and it is therefore often present in municipal tap water and swimming pools. Reported ranges vary considerably, but are generally below the current health standard for total trihalomethanes (THMs) of 100 μg/L. However, when considered in combination with other trihalomethanes often present in drinking water, the concentration of THMs often exceeds the recommended limit of exposure.
While few studies have assessed the risks posed by chloroform exposure through drinking water in isolation from other THMs, many studies have shown that exposure to the general category of THMs, including chloroform, is associated with an increased risk of cancer of the bladder or lower GI tract.
Historically, chloroform exposure may well have been higher, owing to its common use as an anesthetic, as an ingredient in cough syrups, and as a constituent of tobacco smoke, where DDT had previously been used as a fumigant.
Pharmacology
Chloroform is well absorbed, metabolized, and eliminated rapidly by mammals after oral, inhalation, or dermal exposure. Accidental splashing into the eyes has caused irritation. Prolonged dermal exposure can result in the development of sores as a result of defatting. Elimination is primarily through the lungs as chloroform and carbon dioxide; less than 1% is excreted in the urine.
Chloroform is metabolized in the liver by the cytochrome P-450 enzymes, by oxidation to trichloromethanol and by reduction to the dichloromethyl free radical. Other metabolites of chloroform include hydrochloric acid and diglutathionyl dithiocarbonate, with carbon dioxide as the predominant end-product of metabolism.
Like most other general anesthetics and sedative-hypnotic drugs, chloroform is a positive allosteric modulator at GABAA receptors. Chloroform causes depression of the central nervous system (CNS), ultimately producing deep coma and respiratory center depression. When ingested, chloroform causes symptoms similar to those seen after inhalation. Serious illness has followed ingestion of small quantities, and the mean lethal oral dose in an adult is estimated at several tens of grams.
The anesthetic use of chloroform has been discontinued, because it caused deaths from respiratory failure and cardiac arrhythmias. Following chloroform-induced anesthesia, some patients suffered nausea, vomiting, hyperthermia, jaundice, and coma owing to hepatic dysfunction. At autopsy, liver necrosis and degeneration have been observed. The hepatotoxicity and nephrotoxicity of chloroform is thought to be due largely to phosgene, one of its metabolites.
Conversion to phosgene
Chloroform converts slowly in the presence of UV light and air to the extremely poisonous gas phosgene (COCl2), releasing HCl in the process.
To prevent accidents, commercial chloroform is stabilized with ethanol or amylene, but samples that have been recovered or dried no longer contain any stabilizer. Amylene has been found to be ineffective, and the phosgene can affect analytes in samples, lipids, and nucleic acids dissolved in or extracted with chloroform. When ethanol is used as a stabiliser for chloroform, it reacts with phosgene (which is soluble in chloroform) to form the relatively harmless diethyl carbonate ester:
2 CH3CH2OH + COCl2 → CO3(CH2CH3)2 + 2 HCl
Phosgene and HCl can be removed from chloroform by washing with saturated aqueous carbonate solutions, such as sodium bicarbonate. This procedure is simple and results in harmless products. Phosgene reacts with water to form carbon dioxide and HCl, and the carbonate salt neutralizes the resulting acid.
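The hydrolysis of phosgene mentioned above can be written as:

COCl2 + H2O → CO2 + 2 HCl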
Suspected samples can be tested for phosgene using filter paper which when treated with 5% diphenylamine, 5% dimethylaminobenzaldehyde in ethanol, and then dried, turns yellow in the presence of phosgene vapour. There are several colorimetric and fluorometric reagents for phosgene, and it can also be quantified using mass spectrometry.
Regulation
Chloroform is suspected of causing cancer (i.e. it is possibly carcinogenic, IARC Group 2B) as per the International Agency for Research on Cancer (IARC) Monographs.
It is classified as an extremely hazardous substance in the United States, as defined in Section 302 of the US Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities that produce, store, or use it in significant quantities.
Bioremediation of chloroform
Some anaerobic bacteria use chloroform for respiration, termed organohalide respiration, converting it to dichloromethane.
Gallery
See also
Joseph Thomas Clover
References
External links
Chloroform "The Molecular Lifesaver" – An article at Oxford University providing facts about chloroform.
Chloroform Administration – a short film of anaesthetic chloroform application, filmed in the 1930s
Concise International Chemical Assessment Document 58
IARC Summaries & Evaluations: Vol. 1 (1972), Vol. 20 (1979), Suppl. 7 (1987), Vol. 73 (1999)
NIST Standard Reference Database
Endocrine disruptors
Chloroalkanes
Halomethanes
General anesthetics
Hazardous air pollutants
Halogenated solvents
Halogen-containing natural products
Hepatotoxins
GABAA receptor positive allosteric modulators
Glycine receptor agonists
Sweet-smelling chemicals
Trichloromethyl compounds | Chloroform | Chemistry | 4,672 |
59,131 | https://en.wikipedia.org/wiki/Colon%20%28punctuation%29 | The colon, ":", is a punctuation mark consisting of two equally sized dots aligned vertically. A colon often precedes an explanation, a list, or a quoted sentence. It is also used between hours and minutes in time, between certain elements in medical journal citations, between chapter and verse in Bible citations, and, in the US, for salutations in business letters and other formal letters.
History
In Ancient Greek, in rhetoric and prosody, the term κῶλον (kôlon, 'limb, member of a body') did not refer to punctuation, but to a member or section of a complete thought or passage; see also Colon (rhetoric). From this usage, in palaeography, a colon is a clause or group of clauses written as a line in a manuscript.
In the 3rd century BC, Aristophanes of Byzantium is alleged to have devised a punctuation system, in which the end of such a kôlon was thought to occasion a medium-length breath, and was marked by a middot (·). In practice, evidence is scarce for its early usage, but it was revived later as the ano teleia, the modern Greek semicolon. Some writers also used a double dot symbol (:) that later came to be used as a full stop or to mark a change of speaker. (See also Punctuation in Ancient Greek.)
In 1589, in The Arte of English Poesie, the English term colon and the corresponding punctuation mark are attested.
In 1622, in Nicholas Okes' print of William Shakespeare's Othello, the typographical construction of a colon followed by a hyphen or dash to indicate a restful pause is attested. This construction, known as the dog's bollocks, was once common in British English, though this usage is now discouraged.
As late as the 18th century, John Mason related the appropriateness of a colon to the length of the pause taken when reading the text aloud, but silent reading eventually replaced this with other considerations.
Usage in English
In modern English usage, a complete sentence precedes a colon, while a list, description, explanation, or definition follows it. The elements which follow the colon may or may not be a complete sentence: since the colon is preceded by a sentence, it is a complete sentence whether what follows the colon is another sentence or not. While it is acceptable to capitalise the first letter after the colon in American English, it is not the case in British English, except where a proper noun immediately follows a colon.
Colon used before list
Daequan was so hungry that he ate everything in the house: chips, cold pizza, pretzels and dip, hot dogs, peanut butter, and candy.
Colon used before a description
Bertha is so desperate that she'll date anyone, even William: he's uglier than a squashed toad on the highway, and that's on his good days.
Colon before definition
For years while I was reading Shakespeare's Othello and criticism on it, I had to constantly look up the word "egregious" since the villain uses that word: outstandingly bad or shocking.
Colon before explanation
I guess I can say I had a rough weekend: I had chest pain and spent all Saturday and Sunday in the emergency room.
Some writers use fragments (incomplete sentences) before a colon for emphasis or stylistic preferences (to show a character's voice in literature), as in this example:
Dinner: chips and juice. What a well-rounded diet I have.
The Bedford Handbook describes several uses of a colon. For example, one can use a colon after an independent clause to direct attention to a list, an appositive, or a quotation, and it can be used between independent clauses if the second summarizes or explains the first. In non-literary or non-expository uses, one may use a colon after the salutation in a formal letter, to indicate hours and minutes, to show proportions, between a title and subtitle, and between city and publisher in bibliographic entries.
Luca Serianni, an Italian scholar who helped to define and develop the colon as a punctuation mark, identified four punctuational modes for it: syntactical-deductive, syntactical-descriptive, appositive, and segmental.
Syntactical-deductive
The colon introduces the logical consequence, or effect, of a fact stated before.
There was only one possible explanation: the train had never arrived.
Syntactical-descriptive
In this sense the colon introduces a description; in particular, it makes explicit the elements of a set.
I have three sisters: Daphne, Rose, and Suzanne.
Syntactical-descriptive colons may separate the numbers indicating hours, minutes, and seconds in abbreviated measures of time.
The concert begins at 21:45.
The rocket launched at 09:15:05.
British English and Australian English, however, more frequently uses a point for this purpose:
The programme will begin at 8.00 pm.
You will need to arrive by 14.30.
A colon is also used in the descriptive location of a book verse if the book is divided into verses, such as in the Bible or the Quran:
"Isaiah 42:8"
"Deuteronomy 32:39"
"Quran 10:5"
Appositive
Luruns could not speak: he was drunk.
An appositive colon also separates the subtitle of a work from its principal title. (In effect, the example given above illustrates an appositive use of the colon as an abbreviation for the conjunction "because".) Dillon has noted the impact of colons on scholarly articles, but the reliability of colons as a predictor of quality or impact has also been challenged. In titles, neither needs to be a complete sentence as titles do not represent expository writing:
Star Wars Episode VI: Return of the Jedi
Segmental
Like a dash or quotation mark, a segmental colon introduces speech. The segmental function was once a common means of indicating an unmarked quotation on the same line. The following example is from the grammar book The King's English:
Benjamin Franklin proclaimed the virtue of frugality: A penny saved is a penny earned.
This form is still used in British industry-standard templates for written performance dialogues, such as in a play. The colon indicates that the words following a character's name are spoken by that character.
Patient: Doctor, I feel like a pair of curtains.
Doctor: Pull yourself together!
The uniform visual pattern of <character_nametag : character_spoken_lines> placement on a script page assists an actor in scanning for the lines of their assigned character during rehearsal, especially if a script is undergoing rewrites between rehearsals.
Use of capitals
Use of capitalization or lower-case after a colon varies. In British English, and in most Commonwealth countries, the word following the colon is in lower case unless it is normally capitalized for some other reason, as with proper nouns and acronyms. British English also capitalizes a new sentence introduced by a colon's segmental use.
American English permits writers to similarly capitalize the first word of any independent clause following a colon. This follows the guidelines of some modern American style guides, including those published by the Associated Press and the Modern Language Association. The Chicago Manual of Style, however, requires capitalization only when the colon introduces a direct quotation, a direct question, or two or more complete sentences.
In many European languages, the colon is usually followed by a lower-case letter unless the upper case is required for other reasons, as with British English. German usage requires capitalization of independent clauses following a colon. Dutch further capitalizes the first word of any quotation following a colon, even if it is not a complete sentence on its own.
Spacing and parentheses
In print, a thin space was traditionally placed before a colon and a thick space after it. In modern English-language printing, no space is placed before a colon and a single space is placed after it. In French-language typing and printing, the traditional rules are preserved.
One or two spaces may be and have been used after a colon. The older convention (designed to be used by monospaced fonts) was to use two spaces after a colon.
In modern typography, a colon will be placed outside the closing parenthesis introducing a list. In very early English typography, it could be placed inside, as seen in Roger Williams' 1643 book about the Native American languages of New England.
Usage in other languages
Suffix separator
In Finnish and Swedish, the colon can appear inside words in a manner similar to the apostrophe in the English possessive case, connecting a grammatical suffix to an abbreviation or initialism, a special symbol, or a digit (e.g., Finnish USA:n and Swedish USA:s for the genitive case of "USA", Finnish %:ssa for the inessive case of "%", or Finnish 20:een for the illative case of "20").
Abbreviation mark
Written Swedish uses colons in contractions, such as S:t for Sankt (Swedish for "Saint") – for example in the name of the Stockholm metro station S:t Eriksplan, and k:a for kyrka ("church") – for instance Svenska k:a (Svenska kyrkan), the Evangelical Lutheran national Church of Sweden. This can even occur in people's names, for example Antonia Ax:son Johnson (Ax:son for Axelson). Early Modern English texts also used colons to mark abbreviations.
Word separator
In Ethiopia, both Amharic and Ge'ez script used and sometimes still use a colon-like mark as word separator.
Historically, a colon-like mark was used as a word separator in Old Turkic script.
End of sentence or verse
In Armenian, a colon indicates the end of a sentence, similar to a Latin full stop or period.
In liturgical Hebrew, the sof pasuq is used in some writings such as prayer books to signal the end of a verse.
Score divider
In German, Hebrew, and sometimes in English, a colon divides the scores of opponents in sports and games. A result of 149–0 would be written as 149 : 0 in German and in Hebrew.
Mathematics and logic
The colon is used in mathematics, cartography, model building, and other fields; in this context it denotes a ratio or a scale, as in 3:1 (pronounced "three to one").
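For example, a map drawn at a scale of 1:25,000 compresses every ground distance by a factor of 25,000, so 1 cm on the map corresponds to 25,000 cm = 250 m on the ground.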
When a ratio is reduced to a simpler form, such as 10:15 to 2:3, this may be expressed with a double colon as 10:15::2:3; this would be read "10 is to 15 as 2 is to 3". This form is also used in tests of logic where the question of "Dog is to Puppy as Cat is to _?" can be expressed as "Dog:Puppy::Cat:_". For these uses, there is a dedicated Unicode symbol, ∶ (U+2236 RATIO), that is preferred in some contexts. Compare 2∶3 (ratio colon) with 2:3 (U+003A ASCII colon).
In some languages (e.g. German, Russian, and French), the colon is the commonly used sign for division (instead of ÷).
The notation (G : H) may also denote the index of a subgroup H in a group G.
The notation f : X → Y indicates that f is a function with domain X and codomain Y.
The combination with an equal sign (:=) is used for definitions.
In mathematical logic, when using set-builder notation for describing the characterizing property of a set, it is used as an alternative to a vertical bar (which is the ISO 31-11 standard), to mean "such that". Example:
S = {x ∈ ℝ : 1 < x < 3} (S is the set of all x in ℝ (the real numbers) such that x is strictly greater than 1 and strictly smaller than 3)
In older literature on mathematical logic, it is used to indicate how expressions should be bracketed (see Glossary of Principia Mathematica).
In type theory and programming language theory, the colon sign after a term is used to indicate its type, sometimes as a replacement to the "∈" symbol. Example:
x : T (read "the term x has type T").
A colon is also sometimes used to indicate a tensor contraction involving two indices, and a double colon (::) for a contraction over four indices.
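One common convention for the two-index (double-dot) contraction of second-order tensors, for instance, is

$$\mathbf{A} : \mathbf{B} = \sum_{i,j} A_{ij} B_{ij},$$

though conventions differ between authors (some contract against the transpose instead).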
A colon is also used to denote a parallel sum operation involving two operands (many authors, however, instead use a ∥ sign and a few even a ∗ for this purpose).
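Under that notation, the parallel sum of two positive scalar operands reduces to

$$a : b = \frac{1}{\tfrac{1}{a} + \tfrac{1}{b}} = \frac{ab}{a+b},$$

the familiar formula for combining electrical resistances in parallel.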
Computing
The character was on early typewriters and therefore appeared in most text encodings, such as Baudot code and EBCDIC. It was placed at code 58 in ASCII and from there inherited into Unicode. Unicode also defines several related characters:
Used in IPA.
IPA modifier-letter.
Used in IPA.
IPA modifier-letter.
Used by the Uralic Phonetic Alphabet.
Compatible with right-to-left text.
∶ (U+2236 RATIO), for mathematical usage.
For use in pretty-printing programming languages.
꞉ (U+A789 MODIFIER LETTER COLON), see Colon (letter); sometimes used in Windows filenames as it is identical to the colon in the Segoe UI font used for filenames. The colon itself is not permitted as it is a reserved character.
Compatibility character for the Chinese Standard GB 18030.
： (U+FF1A FULLWIDTH COLON), for compatibility with halfwidth and fullwidth fonts.
Compatibility character for the Chinese National Standard CNS 11643.
Programming languages
Many programming languages, most notably ALGOL, Pascal and Ada, use a colon and equals sign as the assignment operator, to distinguish it from a single equals which is an equality test (C instead uses a single equals as assignment, and a double equals as the equality test).
Many languages including C and Java use the colon to indicate the text before it is a label, such as a target for a goto or an introduction to a case in a switch statement. In a related use, Python uses a colon to separate a control statement (the clause header) from the block of statements it controls (the suite):
if test(x):
    print("test(x) is true!")
else:
    print("test(x) is not true...")
In many languages, including JavaScript, colons are used to define name–value pairs in a dictionary or object. This is also used by data formats such as JSON. Some other languages use an equals sign.
var obj = {
    name: "Charles",
    age: 18,
}
The colon is used as part of the ?: conditional operator in C and many other languages.
C++ uses a double colon as the scope resolution operator, and class member access. Most other languages use a period but C++ had to use this for compatibility with C. Another language using colons for scope resolution is Erlang, which uses a single colon.
In BASIC, it is used as a separator between the statements or instructions in a single line. Most other languages use a semicolon, but BASIC had used semicolon to separate items in print statements.
In Forth, a colon precedes definition of a new word.
Haskell uses a colon (pronounced as "cons", short for "construct") as an operator to add a data element to the front of a list:
"child" : ["woman", "man"] -- equals ["child","woman","man"]
while a double colon :: is read as "has type of" (compare scope resolution operator):
("text", False) :: ([Char], Bool)
The ML languages (such as Standard ML) have the above reversed, where the double colon (::) is used to add an element to the front of a list; and the single colon (:) is used for type guards.
MATLAB uses the colon as a binary operator to generate a vector, or to select a part of an extant matrix.
APL uses the colon:
to introduce a control structure element. In this usage it must be the first non-blank character of the line.
after a label name that will be the target of a :goto or a right-pointing arrow (this style of programming is deprecated and programs are supposed to use control structures instead).
to separate a guard (Boolean expression) from its expression in a dynamic function. Two colons are used for an Error guard (one or more error numbers).
Colon + space are used in class definitions to indicate inheritance.
⍠ (a colon in a box) is used by APL for its variant operator.
The colon is also used in many operating systems commands.
In the esoteric programming language INTERCAL, the colon is called two-spot and used to label a 32-bit variable, distinct from spot (.) to label a 16-bit variable.
Addresses
Internet URLs use the colon to separate the protocol (such as http:) from the hostname or IP address.
In an IPv6 address, colons (and one optional double colon) separate up to 8 groups of 16 bits in hexadecimal representation. In a URL, a colon follows the initial scheme name (such as Hypertext Transfer Protocol (HTTP) or File Transfer Protocol (FTP)), and separates a port number from the hostname or IP address.
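For example, the IPv6 documentation address 2001:db8:0:0:0:0:0:1 may be abbreviated, using the double colon, as 2001:db8::1; and in the URL https://example.com:8080/ the colon after the hostname separates it from the port number (8080, used here purely as an illustration).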
In Microsoft Windows filenames, the colon is reserved for use in alternate data streams and cannot appear in a filename. It was used as the directory separator in Classic Mac OS, and was difficult to use in early versions of the newer BSD-based macOS due to code swapping the slash and colon to try to preserve this usage. In most systems it is often difficult to put a colon in a filename as the shell interprets it for other purposes.
CP/M and early versions of MS-DOS required the colon after the names of devices, such as CON:, though this gradually disappeared except for disks (where it had to be between the disk name and the required path representation of the file, as in C:\Windows\). This then migrated to use in URLs.
Text markup
It is often used as a single post-fix delimiter, signifying a token keyword had immediately preceded it or the transition from one mode of character string interpretation to another related mode. Some applications, such as the widely used MediaWiki, utilize the colon as both a pre-fix and post-fix delimiter.
In wiki markup, the colon is often used to indent text. Common usage includes separating or marking comments in a discussion as replies, or to distinguish certain parts of a text.
In human-readable text messages, a colon, or multiple colons, is sometimes used to denote an action (similar to how asterisks are used) or to emote (for example, in vBulletin). In the action denotation usage it has the inverse function of quotation marks, denoting actions where unmarked text is assumed to be dialogue. For example:
Tom: Pluto is so small; it should not be considered a planet. It is tiny!
Mark: Oh really? ::drops Pluto on Tom's head:: Still think it's small now?
Colons may also be used for sounds, e.g., ::click::, though sounds can also be denoted by asterisks or other punctuation marks.
Colons can also be used to represent eyes in emoticons.
See also
Semicolon ()
Two dots (disambiguation)
Notes
References
External links
Walden University Guides Punctuation: Colons
Punctuation
Typographical symbols
Programming language comparisons
Articles with example Haskell code
Articles with example JavaScript code
Articles with example Python (programming language) code | Colon (punctuation) | Mathematics,Technology | 4,122 |
42,951,933 | https://en.wikipedia.org/wiki/Robotic%20sperm | Robotic sperm (also called spermbots) are biohybrid microrobots consisting of sperm cells and artificial microstructures. Currently there are two types of spermbots. The first type, the tubular spermbot, consists of a single sperm cell that is captured inside a microtube. Single bull sperm cells enter these microtubes and become trapped inside. The tail of the sperm is the driving force for the microtube. The second type, the helical spermbot, is a small helix structure which captures and transports single immotile sperm cells. In this case, a rotating magnetic field drives the helix in a screw-like motion. Both kinds of spermbots can be guided by weak magnetic fields. These two spermbot designs are hybrid microdevices, they consist of a living cell combined with synthetic attachments. Other approaches exist to create purely synthetic microdevices inspired by the swimming of natural sperm cells, i.e. with a biomimetic design, for example so-called Magnetosperm which are made of a flexible polymeric structure coated with a magnetic layer and can be actuated by a magnetic field.
Design
Tubular spermbots
Initially, the microtubes for the tubular spermbots were made using roll-up nanotechnology on photoresist. In this process, thin films of titanium and iron were deposited onto a sacrificial layer. When the sacrificial layer was removed, the thin films rolled into 50 μm long microtubes with a diameter of 5 - 8 μm. Later on, the microtubes were made from a temperature-responsive polymer to enable the controlled release of the sperm cells upon a small temperature change of a few degrees.
Tubular spermbots are assembled by adding a large amount of the microtubes to a diluted sperm sample under the microscope. The sperm cells randomly enter the microtubes and become trapped in their slightly conical cavity. In order to increase the coupling efficiency between sperm cells and microtubes, the microtubes have been functionalized with proteins or sperm chemoattractant. This has been done using thiol chemistry once the tubes are rolled-up or by transferring the molecules with an elastomer stamp onto the material before rolling the tubes.
Helical spermbots
Helical spermbots are assembled by driving a magnetic microhelix over an individual sperm cell, thereby confining its tail inside the helix lumen and pushing the head of the sperm forward. The sperm cell is loosely coupled to the helix and can be released by reversing the rotation of the helix, letting it withdraw from the head and free the confined tail in the process. Such microhelices were fabricated by direct laser lithography and coated with nickel or iron for magnetization.
Navigation
Robotic sperm can be navigated by weak external magnetic fields of a few mT. These fields can be generated by permanent magnets or by a setup of electromagnets. The applied magnetic field can be a homogeneous, rotating, or gradient field. Tubular and helical spermbots can also be navigated in a closed-loop control scheme with an electromagnetic coil setup.
Applications
Spermbots hold promise for potential application in single cell manipulation and assisted reproduction, but also for targeted drug delivery. A recent study shows that modified tubular spermbots can be used for delivery of cancer drugs. In this case, the sperm cell is loaded with doxorubicin. The artificial microstructure fabricated by two-photon nanolithography captures the drug-loaded sperm cell. The sperm cell is the actuation source for the magnetic microstructure and can propel it to cancer spheroids. At this location, the drug-loaded sperm is released by a spring mechanism and the sperm cell delivers the drug to the cancer cells.
Perspectives
Robotic sperms as microswimmers are interesting for diverse biomedical applications, specifically for new assisted fertilization techniques and for the targeted delivery of therapeutic cargo. These microswimmers are meant to operate in in vivo environments, a feature that may revolutionize assisted reproduction technologies and nanomedicine in the future. New designs are emerging and plenty of applications can be derived from the here reported concept.
References
Medical robotics
Nanotechnology | Robotic sperm | Materials_science,Engineering,Biology | 867 |
5,659,240 | https://en.wikipedia.org/wiki/Inconstant%20Star | Inconstant Star is a science fiction fix-up novel by American writer Poul Anderson. It is formed by the novellas Iron and Inconstant Star, first published in The Man-Kzin Wars (1988) and Man-Kzin Wars III (1990), respectively. The title is from the tumbling alien artifact that sends out radiation. Due to the tumbling effect, the output can only be seen briefly from a given point in space, looking like a star, but then disappearing as the artifact moves.
The title also references another Niven story, "Inconstant Moon", which is not part of the Known Space series. The novel is the story of Robert Saxtorph and his ship Rover, hired for peaceful missions that nevertheless run into Kzinti at every turn.
Plot summary
There are two parts to the novel, Iron, and Inconstant Star.
In “Iron”, Saxtorph and the Rover, hired by the wealthy Crashlander Laurinda Brozik, set out to explore a newly discovered red dwarf star. When they arrive, they are challenged by a Kzinti warship. After the crew separates onto the shuttles, the Rover is captured and landed on one of the moons. The first shuttle sets down on Prima, the first planet, and is held fast by a planet-sized organism that begins dissolving the shuttle. They broadcast for rescue, and are refused help by the Kzin.
Meanwhile, helpless to rescue their friends, Robert, Dorcas, and Laurinda make a plan to steal a tug and escape back to friendly space with the news of the Kzin base. Dorcas pilots the tug, and takes out the ship guarding the Rover. Robert and Laurinda land, fight off a Kzinti shuttle, and recover the Rover. They are able to rescue Juan and Carita, and destroy the base with a guided asteroid.
In “Inconstant Star”, Saxtorph and crew are hired by Tyra Nordbo to redeem her father's honor, as he was accused of collaboration with the Kzin during their occupation of Wunderland. To do so, they must use notes he had left behind and follow a ship that had left 30 years prior to investigate a concentration of gamma rays. They travel to the coordinates, and find a massive artifact made of an unknown metal. A hole in the spherical artifact is pouring out lethal radiation. As they study it, they learn it is a weapon of the Tnuctip. It is a shell around a “captured” black hole, one that had been holed by a meteorite and is thus releasing the Hawking radiation. They then deduce the route of the original Kzin ship, and head off to the Father Sun, the star of the Kzin homeworld. En route, they locate the Sherrek, where Tyra's father Peter had worked free of his Kzin captors. They rescue him and head back to the artifact. Another Kzin ship, Swordbeak, also finds the old ship. They, too, head to the artifact, and catch the Rover by surprise. Just when all looks lost, Robert and Dorcas conceive a plan to use the artifact's radiation against the Kzin warship. In a last act of defiance, a dying Weoch-Captain activates the artifact's hyperdrive and heads out into unknown space.
Characters
Robert Saxtorph – Terran. Captain of the Rover. Husband of Dorcas.
Dorcas Saxtorph – Terran. First Mate of the Rover. Wife of Robert.
Kamehameha Ryan – Terran (Hawaiian). Crewman on the Rover and longtime friend of the Saxtorphs’.
Carita Fenger – Crewman on the Rover. Jinxian.
Juan Yoshii – Crewman on the Rover and aspiring poet. Belter.
Laurinda Brozik – Astronomer who discovered the star in "Iron". Crashlander.
Arthur Treginnis – Scientist. Mountaineer of Crew descent (who was a Colonist sympathizer during the revolution).
Ulf Markham – Commissioner of the Interworld Space Commission and spy for the Kzinti. Wunderlander.
Tyra Nordbo – Hires the Rover in "Inconstant Star". Daughter of Peter Nordbo.
Peter Nordbo – Former landholder on Wunderland, amateur astronomer, and slave of the Kzin.
Weoch-Captain – Kzin captain of the Swordbeak.
See also
Man-Kzin Wars
External links
1991 American novels
1991 science fiction novels
Fiction set around 61 Ursae Majoris
Known Space stories
Novels by Poul Anderson
Fiction about stars
Fiction about black holes | Inconstant Star | Physics | 972 |
22,284,510 | https://en.wikipedia.org/wiki/Hilbert%27s%20lemma | Hilbert's lemma was proposed at the end of the 19th century by mathematician David Hilbert. The lemma describes a property of the principal curvatures of surfaces. It may be used to prove Liebmann's theorem that a compact surface with constant Gaussian curvature must be a sphere.
Statement of the lemma
Given a manifold in three dimensions that is smooth and differentiable over a patch containing the point p, where k and m are defined as the principal curvatures and K(x) is the Gaussian curvature at a point x: if k attains a maximum at p, m attains a minimum at p, and k is strictly greater than m at p, then K(p) is a non-positive real number.
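Stated compactly, using the fact that the Gaussian curvature is the product of the principal curvatures $k \geq m$:

$$k(p) > m(p),\quad k \text{ locally maximal at } p,\quad m \text{ locally minimal at } p \;\Longrightarrow\; K(p) = k(p)\,m(p) \leq 0.$$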
See also
Hilbert's theorem (differential geometry)
References
Lemmas
Differential geometry | Hilbert's lemma | Mathematics | 168 |
64,063,915 | https://en.wikipedia.org/wiki/Phosphorimidazolide | A phosphorimidazolide is a chemical compound in which a phosphoryl mono-ester is covalently bound to a nitrogen atom in an imidazole ring. They are a type of phosphoramidate. These phosphorus (V) compounds are encountered as reagents used for making new phosphoanhydride bonds with phosphate mono-esters, and as reactive intermediates in phosphoryl transfer reactions in some enzyme-catalyzed transformations. They are also being studied as critical chemical intermediates for the polymerization of nucleotides in pre-biotic settings. They are sometimes referred to as phosphorimidazolidates, imidazole-activated phosphoryl groups, and P-imidazolides.
Role in Oligonucleotide formation
Phosphorimidazolides have been investigated for their mechanistic role in abiogenesis (the natural process by which life arose from non-living matter). Specifically, they have been proposed as the active electrophilic species which may have mediated the formation of inter-nucleotide phosphodiester bonds, thereby enabling template-directed oligonucleotide replication before the advent of enzymes. Phosphorimidazolides were originally proposed as mediators of this process by Leslie Orgel in 1968. Early studies showed that divalent metal cations such as Mg2+, Zn2+, and Pb2+ and a complementary template were required for the formation of short oligonucleotides, although nucleotides exhibited 5'-2' connectivity instead of 5'-3' connectivity of present-day life forms. It was also shown that Montmorillonite clay could provide a surface for phosphorimidazolide-mediated oligonucleotide formation with lengths of 20-50 bases.
The research group of Jack W. Szostak has continued to investigate the role of phosphorimidazolides in pre-biotic nucleotide polymerization. The group has investigated a number of imidazole derivatives in the search for chemical moieties which provide longer oligonucleotides necessary for propagating genetic information. Significantly, they discovered that phosphorimidazolides promote template-directed oligonucleotide formation via imidazolium-bridged dinucleotide intermediates.
John D. Sutherland and colleagues have proposed that phosphorimidazolides may have formed in the chemical environment of early Earth via the activation of ribonucleotide phosphates by methyl isocyanide and acetaldehyde, followed by substitution with imidazole.
Phosphoanhydride Bond formation
While early studies of phosphorimidazolide derivatives of nucleotides found that oligonucleotides could form in the presence of a complementary template, pyrophosphate-linked dimers formed predominantly in the absence of a template. This proclivity for forming new phosphoanhydride bonds has been used in the synthesis of several pyrophosphate-containing organic compounds. A variety of modified nucleotide triphosphates were synthesized using a cyanoethyl-protected phosphorimidazolide reagent. Phosphoanhydride bond forming reactions were found to proceed most rapidly in amide-based organic solvents such as N,N-dimethylformamide and particularly in N,N-dimethylacetamide with Mg2+ or Zn2+ catalysts.
Synthesis
Phosphorimidazolide reagents have been synthesized from phosphate mono-esters.
In one method, a phosphate mono-ester is dissolved in anhydrous pyridine or N,N-dimethylformamide (DMF) and activated using triphenylphosphine (PPh3) and 2,2’-Dithiodipyridine (2,2’-DTDP) in the presence of triethylamine (TEA) base and excess imidazole. In another method using fewer reagents, a phosphate mono-ester is dissolved in DMF and carbonyldiimidazole (CDI) is used to both remove an oxygen atom from the phosphate group and supply the imidazole substituent. The product of either reaction may be collected by precipitation using acetonitrile or acetone as antisolvent with sodium or lithium perchlorate to supply the sodium or lithium salt of the phosphorimidazolide respectively. Alternatively, the phosphorimidazolide may be isolated by reverse-phase flash column chromatography with TEAB buffer and acetonitrile.
References
Chemistry
Phosphorus compounds
Phosphorus(V) compounds
Nucleotides
Origin of life
Pyrophosphates | Phosphorimidazolide | Chemistry,Biology | 1,040 |
58,156,803 | https://en.wikipedia.org/wiki/Bunkers%20%28energy%20in%20transport%29 | In energy statistics, marine bunkers and aviation bunkers as defined by the International Energy Agency are the energy consumption of ships and aircraft.
Marine and aviation bunkers are reported separately from international bunkers, which represent consumption of ships and aircraft on international routes.
International bunkers are subtracted from the energy supplies of a country to calculate its domestic consumption. It is as if international aviation and international shipping did not belong to any country. They are managed by the International Civil Aviation Organization (ICAO) and the International Maritime Organization (IMO).
Criticism
The European Federation for Transport and Environment has only limited confidence in the ability of ICAO and IMO to reduce the air and sea emissions attributable to international bunkers, and thus to comply with the Paris Climate Agreement.
A few figures
International marine bunkers amount to 2,466 TWh/a whereas international aviation bunkers amount to 2,163 TWh/a.
References
See also
Bunkering
Energy in transport | Bunkers (energy in transport) | Physics | 193 |
23,275,770 | https://en.wikipedia.org/wiki/Cholinesterase%20inhibitor | Cholinesterase inhibitors (ChEIs), also known as anti-cholinesterase, are chemicals that prevent the breakdown of the neurotransmitter acetylcholine or butyrylcholine by cholinesterase. This increases the amount of the acetylcholine or butyrylcholine in the synaptic cleft that can bind to muscarinic receptors, nicotinic receptors and others. This group of inhibitors is divided into two subgroups, acetylcholinesterase inhibitors (AChEIs) and butyrylcholinesterase inhibitors (BChEIs).
ChEIs may be used as drugs for Alzheimer's and myasthenia gravis, and also as chemical weapons and insecticides. Side effects when used as drugs may include loss of appetite, nausea, vomiting, loose stools, vivid dreams at night, dehydration, rash, bradycardia, peptic ulcer disease, seizures, weight loss, rhinorrhea, salivation, muscle cramps, and fasciculations.
ChEIs are indirect-acting parasympathomimetic drugs.
ChEIs are widely used as chemical weapons. Since November 2019, the group of AChEIs known as Novichoks has been banned as agents of warfare under the Chemical Weapons Convention. Novichok agents are neurotoxic organophosphorus compounds and are considered more potent than VX gas, also a neurotoxic organophosphorus compound.
Medical use
While four ChEIs are approved in the US for the treatment of Alzheimer's disease, only three of these are available commercially: rivastigmine, donepezil, and galantamine; tacrine is not. They are generally used to treat Alzheimer's disease and dementia. If a benefit occurs, it is generally during the second or third month after starting.
It is difficult to determine which ChEI has greater efficacy, due to design flaws in head-to-head comparison studies.
Pyridostigmine is used in the treatment of myasthenia gravis.
Neostigmine is used in combination with a muscarinic antagonist to reverse the effects of non-depolarizing muscle relaxants, e.g. rocuronium bromide.
Cholinesterase inhibitor toxicity
Common side effects of one ChEI include insomnia, nausea and vomiting, accidental injury, headache, dizziness, bradycardia, hypotension, ecchymosis, and sleep disturbance.
Binding affinity
Acetylcholinesterase inhibitors
Donepezil, phenserine, huperzine A, and BW284c51 are selective AChE inhibitors.
Butyrylcholinesterase inhibitor
Tetra (monoisopropyl) pyrophosphoramide (Iso-OMPA) and ethopropazine are selective BChE inhibitors.
AChE and BChE inhibitor
Paraoxon and rivastigmine are both acetylcholinesterase inhibitors and butyrylcholinesterase inhibitors.
In 2015, an analysis of the United States Food and Drug Administration's Adverse Event Reporting System database that compared rivastigmine to the other ChEI drugs donepezil and galantamine found that rivastigmine was associated with a higher frequency of reports of death as an adverse event.
Acetylcholinesterase inhibitors and nicotinic receptor modulator
Galantamine might be less well tolerated than donepezil and rivastigmine.
Chemical weapons
Assassination Attempt
Cholinesterase inhibitors came to public attention in 2020 when the Russian opposition figure Alexei Navalny was treated in Berlin's Charité hospital for poisoning by a Russian-made nerve agent, known since 2019 to belong to the Novichok subgroup of ChEIs.
See also
Organophosphate and carbamate
Further reading
References
Neurophysiology
Neurotoxins
Hydrolase inhibitors
Anticholinesterases
Parasympathomimetics
Organophosphates | Cholinesterase inhibitor | Chemistry | 837 |
37,690,835 | https://en.wikipedia.org/wiki/WR%20102c | WR 102c is a Wolf–Rayet star located in the constellation Sagittarius towards the galactic centre. It is only a few parsecs from the Quintuplet Cluster, within the Sickle Nebula.
Features
According to recent estimates, WR 102c is as much as 500,000 times brighter than the Sun. An initial study reporting a much higher luminosity mistakenly used photometry from a nearby star. It would have formed as an O-type main-sequence star a few million years ago and has since spent a period as a red supergiant before losing its outer layers completely. It is now almost hydrogen-free and nearing the end of its life. It will collapse within the next few hundred thousand years as it runs out of fuel in its core, producing a type Ib or Ic supernova or collapsing directly into a black hole.
WR 102c is surrounded by a shell of nebulosity which contains dust heated by the star's intense radiation. The nebula also includes substantial amounts of molecular hydrogen and ionised hydrogen, all expelled from the star.
There is a suggestion that WR 102c may be a binary star. A nearby corkscrew-shaped jet of nebulosity could have been expelled during the orbital motion, which would imply a period of 800–1,400 days. It is surrounded by a small cluster of stars, separate from the much more massive Quintuplet Cluster.
References
Sagittarius (constellation)
Wolf–Rayet stars | WR 102c | Astronomy | 312 |
1,907,549 | https://en.wikipedia.org/wiki/Oneirogen | An oneirogen, from the Greek ὄνειρος óneiros meaning "dream" and gen "to create", is a substance or other stimulus which produces or enhances dreamlike states of consciousness. This is characterized by an immersive dream state similar to REM sleep, which can range from realistic to alien or abstract.
Many dream-enhancing plants such as dream herb (Calea zacatechichi) and African dream herb (Entada rheedii), as well as the hallucinogenic diviner's sage (Salvia divinorum), have been used for thousands of years in a form of divination through dreams, called oneiromancy, in which practitioners seek to receive psychic or prophetic information during dream states. The term oneirogen commonly describes a wide array of psychoactive plants and chemicals ranging from normal dream enhancers to intense dissociative or deliriant drugs.
Effects experienced with the use of oneirogens may include microsleep, hypnagogia, fugue states, rapid eye movement sleep (REM), hypnic jerks, lucid dreams, and out-of-body experiences. Some oneirogenic substances are said to have little to no effect on waking consciousness, and will not exhibit their effects until the user falls into a natural sleep state.
List of oneirogens
Calea zacatechichi ("Calea ternifolia") has been traditionally used in Central America as a believed way to potentiate lucid dreams and perform dream divination. It can promote dreams vivid to the senses: sight, scent, hearing, touch, and taste. It may be taken as a tea or smoked.
Entada rheedii ("African dream bean")
Mugwort, see Artemisia douglasiana
Silene undulata (also known as "Silene capensis" or "African dream root") is used by the Xhosa people of South Africa to induce lucid dreams.
List of possible oneirogens
Amanita muscaria (contains muscimol)
Amphetamines and other stimulants can create psychotic episodes (called stimulant psychosis) which may be defined as bursts of dream activity erupting spontaneously into waking states; this is not due to the substance itself but rather a result of the prolonged suppression of cholinergic activity and REM sleep due to amphetamine or stimulant abuse.
Artemisia douglasiana or California mugwort, Douglas's sagewort or dream plant, is a western North American species of aromatic herb in the sunflower family that can be used as a scent, tea, or smoke to trigger vivid and lucid dreams.
Artemisia vulgaris
Wild red asparagus root may promote dreams that involve flying.
Atropa belladonna (contains atropine, hyoscyamine, and scopolamine)
Atropine (via blockade of acetylcholine receptors)
Benzatropine
Bupropion
Datura (contains atropine, hyoscyamine, and scopolamine)
Dextromethorphan (the main ingredient in many cough syrups)
Dimethyltryptamine can trigger intensely vivid and surreal spiritually charged dream states.
Diphenhydramine ("Benadryl") can invoke an intense hypnagogic REM-like microsleep often indistinguishable from reality. It accomplishes this by blocking various acetylcholine receptors in the brain.
Galantamine was shown to increase lucid dreaming by 27% at 4 mg and 42% at 8 mg in a 2018 double-blind study lasting three nights.
Galanthus (genus) – An alkaloid in the plant is believed to increase the concentration of acetylcholine, a neurotransmitter that plays a very active role in dreaming.
Harmaline
Hyoscyamine
Ibogaine, Ibogamine, and Tabernanthe iboga
Ilex guayusa can promote vivid dreams and aids in dream recollection.
Melatonin and ramelteon may cause vivid dreams as a side effect.
Mirtazapine, paroxetine, and varenicline often cause vivid dreams.
MMDA
Muscimol and other GABA receptor agonists like Zolpidem
Nutmeg, which contains myristicin and elemicin, can in commonly used amounts increase the vividness of dreams.
Water lily dried flowers may be smoked, or the rhizomes eaten, to promote vivid dreams.
Many opioids may produce a euphoric dream-like state with microsleep, known colloquially as "nodding".
Peganum harmala (contains harmaline)
Scopolamine
Hallucinogenic oneirogens
Tabernanthe iboga (iboga) is a perennial rainforest shrub native to West Africa. An evergreen bush indigenous to Gabon, the Democratic Republic of Congo, and the Republic of Congo, it is cultivated across West Africa. In African traditional medicine and rituals, the yellowish root or bark is used to produce hallucinations and near-death outcomes, with some fatalities occurring.
Psilocybe mushrooms and their active ingredients psilocin and psilocybin
Salvia divinorum and other Kappa receptor agonists
Ketamine
Disputed oneirogens
Valerian (herb) – A study conducted in the UK in 2001 showed that valerian root significantly improved stress-induced insomnia, but as a side effect greatly increased the vividness of dreams. This study concluded that valerian root affects REM due to natural chemicals and essential oils that stimulate serotonin and opioid receptors. Another study found no encephalographic changes in subjects under its influence.
Nonchemical oneirogens
Binaural beats can be used to stimulate or trigger dream states, like hypnagogia or rapid eye movement sleep.
Mindfulness practices could be useful in achieving lucid dreams.
Sleep deprivation can make dreams more intense, owing to the REM rebound effect.
See also
Oneiromancy
Oneirophrenia
References
Sources
External links
Psychedelics, dissociatives and deliriants
Dream | Oneirogen | Biology | 1,278 |
31,754,978 | https://en.wikipedia.org/wiki/SQALE | SQALE (Software Quality Assessment based on Lifecycle Expectations) is a method to support the evaluation of a software application source code. It is a generic method, independent of the language and source code analysis tools, licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported license. Software editors can freely use and implement the SQALE method.
The SQALE method was developed by inspearit France (formerly DNV ITGS France). It is used by many organizations for applications of any type and any size. The method is implemented by several static code analysis tools that produce the defined indices and indicators. In addition, it allows precise management of the design debt of Agile software development projects.
History
The SQALE method has been developed to answer a general need for assessing the quality of source code. It is meant to answer fundamental questions such as:
What is the quality of the source code delivered by the developers?
Is the code changeable, maintainable, portable, reusable?
What is the design debt stored up by the project?
Standards, like ISO 9126, do not provide effective support on how to build a global answer. To be able to evaluate the quality of source code, the developer community needs a generic method with the following properties:
Objective, specific and reproducible
Producing indices, summaries and/or indicators that are easily understandable and that help in making decisions about improving the source code
Independent of the languages
Independent of the tools for analysis
Fundamental principles
The quality of the source code is a non-functional requirement.
The requirements in relation to the quality of the source code have to be formalised according to the same quality criteria as all other requirements.
Assessing the quality of a source code is in essence assessing the distance between its state and its expected quality objective.
The SQALE method assesses the distance to the conformity with the requirements by considering the necessary remediation cost for bringing the source code to conformity.
The SQALE method respects the representation condition.
The SQALE method uses addition for aggregating the remediation costs and for calculating its quality indicators.
The SQALE method's quality model is orthogonal.
The SQALE method's quality model takes the software's lifecycle into account.
Details
The method is based on 4 main concepts:
The quality model
The analysis model
The indices
The indicators
The quality model
The SQALE Quality Model is used for formulating and organising the non-functional requirements that relate to the code's quality. It is organised in three hierarchical levels. The first level is composed of characteristics, the second of sub-characteristics. The third level is composed of requirements that relate to the source code's internal attributes. These requirements usually depend on the software's context and language.
The analysis model
The SQALE analysis model contains on the one hand the rules that are used for normalising the measures and the controls relating to the code, and on the other hand the rules for aggregating the normalised values.
The SQALE method normalises the reports resulting from the source code analysis tools by transforming them into remediation costs. To do this, either a remediation factor or a remediation function is used. The SQALE Method defines rules for aggregating the remediation costs, either in the Quality Model's tree structure, or in the hierarchy of the source code's artefacts.
The indices
All SQALE indices represent costs. These costs can be expressed in work units, time units or monetary units. In all cases, the index values are on a ratio scale and can be handled with all the operations allowed for this type of scale. For any element of the hierarchy of source code artefacts, the remediation cost relating to a given characteristic can be estimated by adding all remediation costs linked to the requirements of that characteristic.
The indices of SQALE characteristics are the following:
SQALE Testability Index : STI
SQALE Reliability Index : SRI
SQALE Changeability Index : SCI
SQALE Efficiency Index : SEI
SQALE Security Index : SSI
SQALE Maintainability Index : SMI
SQALE Portability Index : SPI
SQALE Reusability Index : SRuI
The method also defines a global index: For any element of the hierarchy of the source code artefacts, the remediation cost relating to all the characteristics of the quality model can be estimated by adding all remediation costs linked to all the requirements of the quality model.
This derived measurement is called the SQALE Quality Index (SQI).
For Agile software development, the SQI index corresponds to the design debt (or technical debt) of the project.
The method also defines index densities, which allow comparing the quality of products of different sizes (for example SQID: the SQALE Quality Density Index).
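Because the indices are sums of remediation costs on a ratio scale, the aggregation can be sketched in a few lines of Python. This is a minimal illustration only: the data layout, characteristic names and costs below are hypothetical, since SQALE prescribes the arithmetic rather than any particular format.

# Each violation reported by an analysis tool has been normalised into a
# remediation cost (here in minutes), tagged with the characteristic of the
# SQALE quality model that its requirement belongs to.
violations = [
    {"characteristic": "testability",   "remediation_cost": 20},
    {"characteristic": "reliability",   "remediation_cost": 45},
    {"characteristic": "changeability", "remediation_cost": 15},
    {"characteristic": "testability",   "remediation_cost": 10},
]

def characteristic_index(violations, characteristic):
    """Index of one characteristic, e.g. STI: the sum of the remediation
    costs of all requirement violations mapped to that characteristic."""
    return sum(v["remediation_cost"] for v in violations
               if v["characteristic"] == characteristic)

# SQALE Quality Index (SQI): the sum over all requirements of the model.
sqi = sum(v["remediation_cost"] for v in violations)

# SQALE Quality Density Index (SQID): SQI divided by a size measure, so
# that products of different sizes can be compared.
lines_of_code = 12_000
sqid = sqi / lines_of_code

print(characteristic_index(violations, "testability"), sqi, sqid)
# prints: 30 90 0.0075

Because every value is a cost on the same ratio scale, the same addition works at any level of the artefact hierarchy (method, file, module, application).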
The indicators
The SQALE method defines three synthesised indicators. Users can define further indicators according to their needs.
SQALE and Agile Software Development
The SQALE method is particularly devoted to the management of the design debt (or technical debt) of Agile Software Development. It allows:
To clearly define what creates design debt
To correctly estimate design debt
To break this debt down into parts relating to testability, reliability, changeability, maintainability, and so on. This classification supports analysis of the debt's impact and helps define priority actions for code refactoring.
In the requirements relating to the source code (the SQALE Quality Model), the method allows a minimum unit-testing threshold to be included. If this threshold is not reached, the reliability index of the application is impacted.
Tools which implement the SQALE method
SQuORE
SonarQube
Security Reviewer Suite
See also
Static program analysis
ISO 9126
Software Quality
References
Reliable Software Technologies – Ada-Europe 2011: 16th Ada-Europe International Conference on Reliable Software Technologies, Springer, 2011.
External links
inspearit France
Official site of the SQALE method
White paper describing the SQALE method
Software quality
Software architecture
Software engineering terminology
Software maintenance | SQALE | Technology,Engineering | 1,232 |
26,618,225 | https://en.wikipedia.org/wiki/C8H20N2 | The molecular formula C8H20N2 (molar mass: 144.26 g/mol, exact mass: 144.1626 u) may refer to:
Octamoxin
Octamethylenediamine (OMDA)
Molecular formulas | C8H20N2 | Physics,Chemistry | 65 |
11,421,741 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA24 | In molecular biology, SNORA24 (also known as ACA24) is a member of the H/ACA class of small nucleolar RNA that guide the sites of modification of uridines to pseudouridines.
References
External links
Small nuclear RNA | Small nucleolar RNA SNORA24 | Chemistry | 56 |
47,561,162 | https://en.wikipedia.org/wiki/History%20of%20Sulzer%20diesel%20engines | This article covers the History of Sulzer diesel engines from 1898 to 1997. Sulzer Brothers foundry was established in Winterthur, Switzerland, in 1834 by Johann Jakob Sulzer-Neuffert and his two sons, Johann Jakob and Salomon. Products included cast iron, firefighting pumps and textile machinery. Rudolf Diesel was educated in Augsburg and Munich, received his works training with Sulzer, and his later co-operation with Sulzer led to the construction of the first Sulzer diesel engine in 1898. As of 2015, the Sulzer company lives on, but it no longer manufactures diesel engines, having sold the diesel engine business to Wärtsilä in 1997.
Overview
Sulzer built diesel engines for stationary, road, rail and marine use. The engine types usually comprise a number, then some letters, then another number. For example, 6LDA28 indicates a six-cylinder engine in the "LDA" series with a 28 cm cylinder bore.
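Following that convention, a designation can be split mechanically into its three parts. A small illustrative parser in Python; the regular expression and field names are my own, not any Sulzer standard:

import re

# <cylinder count><series letters><cylinder bore in cm>, e.g. "6LDA28"
DESIGNATION = re.compile(r"^(\d+)([A-Z]+)(\d+)$")

def parse_designation(code):
    match = DESIGNATION.match(code)
    if match is None:
        raise ValueError(f"not a recognisable designation: {code!r}")
    cylinders, series, bore = match.groups()
    return {"cylinders": int(cylinders), "series": series, "bore_cm": int(bore)}

print(parse_designation("6LDA28"))    # {'cylinders': 6, 'series': 'LDA', 'bore_cm': 28}
print(parse_designation("12RND105"))  # {'cylinders': 12, 'series': 'RND', 'bore_cm': 105}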
Road
In 1937, Sulzer introduced an opposed-piston two-stroke diesel engine for road use. This used a single crank with con-rods operating levers that moved the opposed pistons, the same layout as the 1905 Arrol-Johnston petrol engine. The post-war Commer TS3 was similar, but the Sulzer had a piston-type blower instead of a Roots blower. It was made in two sizes: 69 mm bore × 101.6 mm stroke or 89 mm bore × 120 mm stroke. The smaller version had two cylinders, produced 35 hp, and was intended for tractors. The larger version was available with two, three or four cylinders and was intended for trucks.
Rail
Sulzer supplied the main and auxiliary engines for a large diesel locomotive built by A. Borsig of Berlin in 1912, the first large locomotive of its kind. Like many modern diesels this had a cab at each end, and the Sulzer 4LV38 two-stroke diesel engine (of 1,200 bhp) could be run in either direction of rotation. Transmission was direct from the transverse crankshaft via coupling rods to the two driving axles. A Sulzer auxiliary engine provided the air for starting. In tests in 1913 on the Frauenfeld-Winterthur line, where the express steam service took 21 minutes, the diesel did it in 17 minutes. The direct drive was a problem; starting the engine meant starting the locomotive and its load moving, requiring a lot of compressed air.
Sulzer moved to a range of smaller engines for locomotives and railcars in the 1920s, but in the mid-1930s they came out with their LD series, designed specifically for railway locomotives, which they would produce for many years. The top of this range (in 1945) was the 12LDA31, a 12-cylinder engine with two parallel crankshafts geared together, effectively two straight-6 engines in a common crankcase. These were 2-stroke engines with four blowers, each serving three cylinders. In 1945 the engines were in use in France and Romania, and their output of 2,200 bhp at 700 rpm made them among the most powerful locomotive engines of the time.
There was also an LVA range of V-format locomotive engines from 1960. 12LVA24 versions of these engines were installed in D1702-D1706 in the UK, but there were some reliability problems, and the locomotives were re-engined with the LDA type. The refurbished 12LVA24 engines were sold to SNCF. At the end of the 1960s the Sulzer traction division was merged into the far larger marine division.
Some examples of Sulzer powered rail locomotives:
Large numbers of Sulzer-engined locomotives were supplied to rail companies all over the world, particularly using the LDA engine.
Type LDA28
Example applications, 6 cylinders
British Rail Class 24
British Rail Class 25
British Rail Class 26
British Rail Class 27
CIE 101 Class
CIE 113 Class
Commonwealth Railways NSU class
Commonwealth Railways NT class
Example applications, 8 cylinders
British Rail Class 33
Example applications, 12 cylinders
British Rail Class 44
British Rail Class 45
British Rail Class 46
British Rail Class 47
SNCF Class CC 65500
PKP class ST43 (manufacturer type: LDE 1200)
Type LVA24
Example applications, 12 cylinders
British Rail Class 48
SNCF Class A1AA1A 68000
Example applications, 16 cylinders
British Rail HS4000
Type LV31
Example applications, 8 cylinders
Russian locomotive class E el-8
Marine
Sulzer marine engines were well engineered, and so various trials in the early days of oil engines paid dividends. In 1910 an icebreaker tug at Hamburg was equipped with a Sulzer diesel. Its 4-cylinder two-stroke diesel engine gave an indicated 210 bhp and 9.75 knots, needed an engine room one-third smaller than the steam equivalent, and weighed just a quarter as much as the equivalent steam plant. It did well in all trials, and the simple controls meant the engine could be reversed faster than a steam engine.
In 1911 the British Admiralty purchased a diesel motor launch they had been evaluating for some time. By Sulzer standards this had a very small 4-cylinder two-stroke engine of just 100 bhp. The boat was only 60 feet long, and as part of its trials the engine had been successfully run at full power for a period of 24 hours, reaching over 10 knots. The vessel was bought for £3,000 to be the subject of further experimental work.
The MV Monte Penedo, Germany's first sea-going motor ship (in 1912), was fitted with two Sulzer 4S47 two-stroke crosshead diesel engines. These were replaced in 1949 with new Sulzer 7TS36 diesel engines. In 1912 Sulzer also provided the main diesel propulsion engine and the two diesel generator engines on the new outer Elbe lightship, the 'Burgermeister Oswald'.
The US Navy also chose Sulzer for some of their submarines; following discussions held in Switzerland in 1915, a design for submarines was developed and tested. These were built by the Lake Torpedo Boat company as the US L Class, and the engines involved the US licensee Busch-Sulzer. Sulzer licensed their engine builds to many companies worldwide, without apparent compromise to the quality of their engines.
In the mid-1920s Sulzer started to advertise their airless diesel engines, meaning they used liquid injection rather than injecting the fuel with an air blast (as used by Rudolf Diesel). They exhibited their 300 hp 2-stroke airless engine, suitable for yachts, tugs, barges, etc., at the Olympia Shipping Exhibition of 1925.
Sulzer's marine engine range in 1969 extended from the 3 cylinder version of the A25 auxiliary engine at 550 bhp to the 12 cylinder 48,000 bhp 12RND105 engine, 10 metres high, with cylinders over a metre in diameter. More recently these main engines have been replaced by the RTA series of engines, still two stroke.
The RD type marine two-stroke cross-head engine ranged from the 5 cylinder 5RD44, to the 12 cylinder 12RD90. This was developed into the RND engine, with numerous improvements including eliminating mechanically operated exhaust valves, with sizes ranging from 5RND68 (8,250 bhp), to 12RND105 (48,000 bhp).
A new marine range of engines in the later 1960s was the Z and ZV range, covering 2,600 bhp to 6,600 bhp. These were again two-strokes, but in 1969 four-stroke versions were being planned. They were available either as in-line engines or as 50-degree V-engines. Trunk pistons were used instead of crossheads, and both long- and short-stroke engines were available.
These are examples, not a full list.
Type RTA76
Example applications, 5 cylinders
USNS Paul Buck (T-AOT-1122)
Example applications, 8 cylinders
MV Rena
Type TADS56
Example applications, 5 cylinders
Galați-class cargo ship
Stationary
While Sulzer was well known for its two-stroke engines, in 1931 the company published a brief review of its four-stroke diesel engines, which had been built alongside the two-stroke engines. In 1903 they offered four-stroke low-speed A-frame style engines from 20 to 800 hp. Enquiries for lighter high-speed engines led by 1911 to forced-lubrication four-stroke engines built with closed box-frames, low-set camshafts and pushrod valve operation. The airless (injection) four-stroke engines were then developed from this type and by 1931 were available with 2 to 8 cylinders under the DD engine designation.
At the start of 1912 it was claimed that the largest diesel engines built for stationary work (of 2,000 and 2,400 bhp) were built by Sulzer, though this had been surpassed by an order received by Sulzer for four 4,000 hp 6-cylinder engines from Chile for electricity generation. These two-stroke crosshead engines were described as being basically the same as their marine engines without the reversing gear. In 1915 a 4,500 bhp engine was installed at Harland & Wolff, Belfast. This installation had been delayed because the war complicated commercial arrangements. A second engine of the same size had been installed at a Zurich electricity works, and another was being installed at a French electricity station; the engines were described as being "identical in essential principles and also external construction with large marine engines by the same firm".
Sulzer could provide engines from 550 bhp upwards for electricity generation, pumping plant, and industrial applications. These designs were essentially versions of their wide range of marine engines, although without the thrust block, and normally without the reversing gear.
The stationary versions of the marine RD and RND crosshead range were designated RF and RNF, but Z engines and the A25/AL25 auxiliary engine were designated for either marine or industrial use. The AL25 was a higher-output version of the A25 developed for emergency generating sets.
Examples :
Two twin-cylinder engines at King Edward Mine, Camborne (http://kingedwardmine.co.uk/). They were originally installed for the Falmouth Water Company around 1926/27, and one of them was kept as a standby until the early 1970s. King Edward Mine removed the engines in 1989 and erected the better one at KEM around 1994.
Licences
Licences to build diesel engines to Sulzer's design were granted to Vickers-Armstrongs, George Clark of Sunderland (after WW2), and Wallsend Slipway and Engineering Co (c1925) in the United Kingdom; to Busch-Sulzer in the United States; to the Reșița works in Romania; to Messrs Werkspoor in Holland; to Messrs. Workman, Clark and Company Ltd of Ireland; to Mitsubishi Heavy Industries in Japan; and to H. Cegielski – Poznań in Poland.
A new Vickers-Armstrong licence in 1957 also allowed associates of the licensee, Cockatoo Docks and Engineering Co Pty Ltd of Sydney, Australia and Canadian Vickers Ltd of Montreal, Canada, to build Sulzer engines under licence.
References
Sulzer
Diesel engines
Marine engines | History of Sulzer diesel engines | Technology | 2,266 |
220,642 | https://en.wikipedia.org/wiki/Geometrization%20conjecture | In mathematics, Thurston's geometrization conjecture (now a theorem) states that each of certain three-dimensional topological spaces has a unique geometric structure that can be associated with it. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic).
In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston in 1982 as part of his 24 questions, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture.
Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s, and since then, several complete proofs have appeared in print.
Grigori Perelman announced a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery in two papers posted at the arxiv.org preprint server. Perelman's papers were studied by several independent groups that produced books and online manuscripts filling in the complete details of his arguments. Verification was essentially complete in time for Perelman to be awarded the 2006 Fields Medal for his work, and in 2010 the Clay Mathematics Institute awarded him its 1 million USD prize for solving the Poincaré conjecture, though Perelman declined both awards.
The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture.
The conjecture
A 3-manifold is called closed if it is compact – without "punctures" or "missing endpoints" – and has no boundary ("edge").
Every closed 3-manifold has a prime decomposition: this means it is the connected sum ("a gluing together") of prime 3-manifolds. This reduces much of the study of 3-manifolds to the case of prime 3-manifolds: those that cannot be written as a non-trivial connected sum.
Here is a statement of Thurston's conjecture:
Every oriented prime closed 3-manifold can be cut along tori, so that the interior of each of the resulting manifolds has a geometric structure with finite volume.
There are 8 possible geometric structures in 3 dimensions. There is a unique minimal way of cutting an irreducible oriented 3-manifold along tori into pieces that are Seifert manifolds or atoroidal called the JSJ decomposition, which is not quite the same as the decomposition in the geometrization conjecture, because some of the pieces in the JSJ decomposition might not have finite volume geometric structures. (For example, the mapping torus of an Anosov map of a torus has a finite volume solv structure, but its JSJ decomposition cuts it open along one torus to produce a product of a torus and a unit interval, and the interior of this has no finite volume geometric structure.)
For non-oriented manifolds the easiest way to state a geometrization conjecture is to first take the oriented double cover. It is also possible to work directly with non-orientable manifolds, but this gives some extra complications: it may be necessary to cut along projective planes and Klein bottles as well as spheres and tori, and manifolds with a projective plane boundary component usually have no geometric structure.
In 2 dimensions, every closed surface has a geometric structure consisting of a metric with constant curvature; it is not necessary to cut the manifold up first. Specifically, every closed surface is diffeomorphic to a quotient of S2, E2, or H2.
The eight Thurston geometries
A model geometry is a simply connected smooth manifold X together with a transitive action of a Lie group G on X with compact stabilizers.
A model geometry is called maximal if G is maximal among groups acting smoothly and transitively on X with compact stabilizers. Sometimes this condition is included in the definition of a model geometry.
A geometric structure on a manifold M is a diffeomorphism from M to X/Γ for some model geometry X, where Γ is a discrete subgroup of G acting freely on X ; this is a special case of a complete (G,X)-structure. If a given manifold admits a geometric structure, then it admits one whose model is maximal.
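In symbols, the same definition reads (nothing here beyond the prose above):

\[
\varphi \colon M \xrightarrow{\;\cong\;} X/\Gamma ,
\qquad \Gamma \le G \ \text{discrete, acting freely on } X .
\]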
A 3-dimensional model geometry X is relevant to the geometrization conjecture if it is maximal and if there is at least one compact manifold with a geometric structure modelled on X. Thurston classified the 8 model geometries satisfying these conditions; they are listed below and are sometimes called Thurston geometries. (There are also uncountably many model geometries without compact quotients.)
There is some connection with the Bianchi groups: the 3-dimensional Lie groups. Most Thurston geometries can be realized as a left invariant metric on a Bianchi group. However S2 × R cannot be, Euclidean space corresponds to two different Bianchi groups, and there are an uncountable number of solvable non-unimodular Bianchi groups, most of which give model geometries with no compact representatives.
Spherical geometry S3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O(4, R), with 2 components. The corresponding manifolds are exactly the closed 3-manifolds with finite fundamental group. Examples include the 3-sphere, the Poincaré homology sphere, and lens spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type IX. Manifolds with this geometry are all compact, orientable, and have the structure of a Seifert fiber space (often in several ways). The complete list of such manifolds is given in the article on spherical 3-manifolds. Under Ricci flow, manifolds with this geometry collapse to a point in finite time.
Euclidean geometry E3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group R3 × O(3, R), with 2 components. Examples are the 3-torus, and more generally the mapping torus of a finite-order automorphism of the 2-torus; see torus bundle. There are exactly 10 finite closed 3-manifolds with this geometry, 6 orientable and 4 non-orientable. This geometry can be modeled as a left invariant metric on the Bianchi groups of type I or VII0. Finite volume manifolds with this geometry are all compact, and have the structure of a Seifert fiber space (sometimes in two ways). The complete list of such manifolds is given in the article on Seifert fiber spaces. Under Ricci flow, manifolds with Euclidean geometry remain invariant.
Hyperbolic geometry H3
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O+(1, 3, R), with 2 components. There are enormous numbers of examples of these, and their classification is not completely understood. The example with smallest volume is the Weeks manifold. Other examples are given by the Seifert–Weber space, or "sufficiently complicated" Dehn surgeries on links, or most Haken manifolds. The geometrization conjecture implies that a closed 3-manifold is hyperbolic if and only if it is irreducible, atoroidal, and has infinite fundamental group. This geometry can be modeled as a left invariant metric on the Bianchi group of type V or VIIh≠0. Under Ricci flow, manifolds with hyperbolic geometry expand.
The geometry of S2 × R
The point stabilizer is O(2, R) × Z/2Z, and the group G is O(3, R) × R × Z/2Z, with 4 components. The four finite volume manifolds with this geometry are: S2 × S1, the mapping torus of the antipode map of S2, the connected sum of two copies of 3-dimensional projective space, and the product of S1 with two-dimensional projective space. The first two are mapping tori of the identity map and antipode map of the 2-sphere, and are the only examples of 3-manifolds that are prime but not irreducible. The third is the only example of a non-trivial connected sum with a geometric structure. This is the only model geometry that cannot be realized as a left invariant metric on a 3-dimensional Lie group. Finite volume manifolds with this geometry are all compact and have the structure of a Seifert fiber space (often in several ways). Under normalized Ricci flow manifolds with this geometry converge to a 1-dimensional manifold.
The geometry of H2 × R
The point stabilizer is O(2, R) × Z/2Z, and the group G is O+(1, 2, R) × R × Z/2Z, with 4 components. Examples include the product of a hyperbolic surface with a circle, or more generally the mapping torus of an isometry of a hyperbolic surface. Finite volume manifolds with this geometry have the structure of a Seifert fiber space if they are orientable. (If they are not orientable the natural fibration by circles is not necessarily a Seifert fibration: the problem is that some fibers may "reverse orientation"; in other words their neighborhoods look like fibered solid Klein bottles rather than solid tori.) The classification of such (oriented) manifolds is given in the article on Seifert fiber spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type III. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
The geometry of the universal cover of SL(2, R)
The universal cover of SL(2, R) is denoted ~SL(2, R). It fibers over H2, and the space is sometimes called "Twisted H2 × R". The group G has 2 components. Its identity component has the structure (~SL(2, R) × R)/Z. The point stabilizer is O(2, R).
Examples of these manifolds include: the manifold of unit vectors of the tangent bundle of a hyperbolic surface, and more generally the Brieskorn homology spheres (excepting the 3-sphere and the Poincaré dodecahedral space). This geometry can be modeled as a left invariant metric on the Bianchi group of type VIII or III. Finite volume manifolds with this geometry are orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
Nil geometry
This fibers over E2, and so is sometimes known as "Twisted E2 × R". It is the geometry of the Heisenberg group. The point stabilizer is O(2, R). The group G has 2 components, and is a semidirect product of the 3-dimensional Heisenberg group by the group O(2, R) of isometries of a circle. Compact manifolds with this geometry include the mapping torus of a Dehn twist of a 2-torus, or the quotient of the Heisenberg group by the "integral Heisenberg group". This geometry can be modeled as a left invariant metric on the Bianchi group of type II. Finite volume manifolds with this geometry are compact and orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow, compact manifolds with this geometry converge to R2 with the flat metric.
Sol geometry
This geometry (also called Solv geometry) fibers over the line with fiber the plane, and is the geometry of the identity component of the group G. The point stabilizer is the dihedral group of order 8. The group G has 8 components, and is the group of maps from 2-dimensional Minkowski space to itself that are either isometries or multiply the metric by −1. The identity component has a normal subgroup R2 with quotient R, where R acts on R2 with 2 (real) eigenspaces, with distinct real eigenvalues of product 1. This is the Bianchi group of type VI0 and the geometry can be modeled as a left invariant metric on this group. All finite volume manifolds with solv geometry are compact. The compact manifolds with solv geometry are either the mapping torus of an Anosov map of the 2-torus (such a map is an automorphism of the 2-torus given by an invertible 2 by 2 matrix whose eigenvalues are real and distinct, such as the matrix with rows (2, 1) and (1, 1)), or quotients of these by groups of order at most 8. The eigenvalues of the automorphism of the torus generate an order of a real quadratic field, and the solv manifolds can be classified in terms of the units and ideal classes of this order.
Under normalized Ricci flow compact manifolds with this geometry converge (rather slowly) to R1.
Uniqueness
A closed 3-manifold has a geometric structure of at most one of the 8 types above, but finite volume non-compact 3-manifolds can occasionally have more than one type of geometric structure. (Nevertheless, a manifold can have many different geometric structures of the same type; for example, a surface of genus at least 2 has a continuum of different hyperbolic metrics.) More precisely, if M is a manifold with a finite volume geometric structure, then the type of geometric structure is almost determined as follows, in terms of the fundamental group π1(M) (a schematic version of this case analysis appears after the list below):
If π1(M) is finite then the geometric structure on M is spherical, and M is compact.
If π1(M) is virtually cyclic but not finite then the geometric structure on M is S2×R, and M is compact.
If π1(M) is virtually abelian but not virtually cyclic then the geometric structure on M is Euclidean, and M is compact.
If π1(M) is virtually nilpotent but not virtually abelian then the geometric structure on M is nil geometry, and M is compact.
If π1(M) is virtually solvable but not virtually nilpotent then the geometric structure on M is solv geometry, and M is compact.
If π1(M) has an infinite normal cyclic subgroup but is not virtually solvable then the geometric structure on M is either H2×R or the universal cover of SL(2, R). The manifold M may be either compact or non-compact. If it is compact, then the 2 geometries can be distinguished by whether or not π1(M) has a finite index subgroup that splits as a semidirect product of the normal cyclic subgroup and something else. If the manifold is non-compact, then the fundamental group cannot distinguish the two geometries, and there are examples (such as the complement of a trefoil knot) where a manifold may have a finite volume geometric structure of either type.
If π1(M) has no infinite normal cyclic subgroup and is not virtually solvable then the geometric structure on M is hyperbolic, and M may be either compact or non-compact.
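The case analysis above is effectively a decision tree on properties of π1(M). A schematic rendering in Python, with the group-theoretic tests treated as given boolean attributes (the predicate names are illustrative, not from any existing library):

from types import SimpleNamespace

def geometry_type(pi1):
    """Schematic decision tree for the finite-volume geometric structure
    of a 3-manifold, following the case analysis above."""
    if pi1.is_finite:
        return "spherical"
    if pi1.is_virtually_cyclic:
        return "S2 x R"
    if pi1.is_virtually_abelian:
        return "Euclidean"
    if pi1.is_virtually_nilpotent:
        return "Nil"
    if pi1.is_virtually_solvable:
        return "Sol"
    if pi1.has_infinite_normal_cyclic_subgroup:
        # In the compact case the two are distinguished by whether a
        # finite-index subgroup splits over the normal cyclic subgroup.
        return "H2 x R or universal cover of SL(2, R)"
    return "hyperbolic"

# Example: a finite fundamental group forces spherical geometry.
pi1 = SimpleNamespace(is_finite=True, is_virtually_cyclic=True,
                      is_virtually_abelian=True, is_virtually_nilpotent=True,
                      is_virtually_solvable=True,
                      has_infinite_normal_cyclic_subgroup=False)
print(geometry_type(pi1))  # spherical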
Infinite volume manifolds can have many different types of geometric structure: for example, R3 can have 6 of the different geometric structures listed above, as 6 of the 8 model geometries are homeomorphic to it. Moreover if the volume does not have to be finite there are an infinite number of new geometric structures with no compact models; for example, the geometry of almost any non-unimodular 3-dimensional Lie group.
There can be more than one way to decompose a closed 3-manifold into pieces with geometric structures. For example:
Taking connected sums with several copies of S3 does not change a manifold.
The connected sum of two projective 3-spaces has a S2×R geometry, and is also the connected sum of two pieces with S3 geometry.
The product of a surface of negative curvature and a circle has a geometric structure, but can also be cut along tori to produce smaller pieces that also have geometric structures. There are many similar examples for Seifert fiber spaces.
It is possible to choose a "canonical" decomposition into pieces with geometric structure, for example by first cutting the manifold into prime pieces in a minimal way, then cutting these up using the smallest possible number of tori. However this minimal decomposition is not necessarily the one produced by Ricci flow; in fact, the Ricci flow can cut up a manifold into geometric pieces in many inequivalent ways, depending on the choice of initial metric.
History
The Fields Medal was awarded to Thurston in 1982 partially for his proof of the geometrization conjecture for Haken manifolds.
In 1982, Richard S. Hamilton showed that given a closed 3-manifold with a metric of positive Ricci curvature, the Ricci flow would collapse the manifold to a point in finite time, which proves the geometrization conjecture for this case as the metric becomes "almost round" just before the collapse. He later developed a program to prove the geometrization conjecture by Ricci flow with surgery. The idea is that the Ricci flow will in general produce singularities, but one may be able to continue the Ricci flow past the singularity by using surgery to change the topology of the manifold. Roughly speaking, the Ricci flow contracts positive curvature regions and expands negative curvature regions, so it should kill off the pieces of the manifold with the "positive curvature" geometries S3 and S2 × R, while what is left at large times should have a thick–thin decomposition into a "thick" piece with hyperbolic geometry and a "thin" graph manifold.
In 2003, Grigori Perelman announced a proof of the geometrization conjecture by showing that the Ricci flow can indeed be continued past the singularities, and has the behavior described above.
One component of Perelman's proof was a novel collapsing theorem in Riemannian geometry. Perelman did not release any details on the proof of this result (Theorem 7.4 in the preprint 'Ricci flow with surgery on three-manifolds'). Beginning with Shioya and Yamaguchi, there are now several different proofs of Perelman's collapsing theorem, or variants thereof. Shioya and Yamaguchi's formulation was used in the first fully detailed formulations of Perelman's work.
A second route to the last part of Perelman's proof of geometrization is the method of Laurent Bessières and co-authors, which uses Thurston's hyperbolization theorem for Haken manifolds and Gromov's norm for 3-manifolds. A book by the same authors with complete details of their version of the proof has been published by the European Mathematical Society.
Higher dimensions
In four dimensions, only a rather restricted class of closed 4-manifolds admit a geometric decomposition. However, lists of maximal model geometries can still be given.
The four-dimensional maximal model geometries were classified by Richard Filipkiewicz in 1983. They number eighteen, plus one countably infinite family: their usual names are E4, Nil4, Nil3 × E1, Sol4m,n (a countably infinite family), Sol40, Sol41, H3 × E1, ~SL2 × E1, H2 × E2, H2 × H2, H4, H2(C) (a complex hyperbolic space), F4 (the tangent bundle of the hyperbolic plane), S2 × E2, S2 × H2, S3 × E1, S4, CP2 (the complex projective plane), and S2 × S2. No closed manifold admits the geometry F4, but there are manifolds with proper decomposition including an F4 piece.
The five-dimensional maximal model geometries were classified by Andrew Geng in 2016. There are 53 individual geometries and six infinite families. Some new phenomena not observed in lower dimensions occur, including two uncountable families of geometries and geometries with no compact quotients.
Footnotes
Notes
References
L. Bessieres, G. Besson, M. Boileau, S. Maillot, J. Porti, 'Geometrisation of 3-manifolds', EMS Tracts in Mathematics, volume 13. European Mathematical Society, Zurich, 2010.
M. Boileau Geometrization of 3-manifolds with symmetries
F. Bonahon Geometric structures on 3-manifolds Handbook of Geometric Topology (2002) Elsevier.
Allen Hatcher: Notes on Basic 3-Manifold Topology 2000
J. Isenberg, M. Jackson, Ricci flow of locally homogeneous geometries on a Riemannian manifold, J. Diff. Geom. 35 (1992) no. 3 723–741.
John W. Morgan. Recent progress on the Poincaré conjecture and the classification of 3-manifolds. Bulletin Amer. Math. Soc. 42 (2005) no. 1, 57–78 (expository article explains the eight geometries and geometrization conjecture briefly, and gives an outline of Perelman's proof of the Poincaré conjecture)
Scott, Peter The geometries of 3-manifolds. (errata) Bull. London Math. Soc. 15 (1983), no. 5, 401–487.
William Thurston. Three-dimensional manifolds, Kleinian groups and hyperbolic geometry. Bull. Amer. Math. Soc. (N.S.) 6 (1982), no. 3, 357–381. This gives the original statement of the conjecture.
William Thurston. Three-dimensional geometry and topology. Vol. 1. Edited by Silvio Levy. Princeton Mathematical Series, 35. Princeton University Press, Princeton, NJ, 1997. x+311 pp. (in depth explanation of the eight geometries and the proof that there are only eight)
William Thurston. The Geometry and Topology of Three-Manifolds, 1980 Princeton lecture notes on geometric structures on 3-manifolds.
External links
A public lecture on the Poincaré and geometrization conjectures, given by C. McMullen at Harvard in 2006.
Geometric topology
Riemannian geometry
3-manifolds
Conjectures that have been proved
Theorems in topology | Geometrization conjecture | Mathematics | 4,672 |
26,939,239 | https://en.wikipedia.org/wiki/Piperoxan | Piperoxan, also known as benodaine, was the first antihistamine to be discovered. This compound, derived from benzodioxan, was prepared in the early 1930s by Daniel Bovet and Ernest Fourneau at the Pasteur Institute in France. Formerly investigated by Fourneau as an α-adrenergic-blocking agent, they demonstrated that it also antagonized histamine-induced bronchospasm in guinea pigs, and published their findings in 1933. Bovet went on to win the 1957 Nobel Prize in Physiology or Medicine for his contribution. One of Bovet and Fourneau's students, Anne-Marie Staub, published the first structure–activity relationship (SAR) study of antihistamines in 1939. Piperoxan and analogues themselves were not clinically useful due to the production of toxic effects in humans and were followed by phenbenzamine (Antergan) in the early 1940s, which was the first antihistamine to be marketed for medical use.
Synthesis
Condensation of catechol [120-80-9] (1) with epichlorohydrin in the presence of an aqueous base can be visualized as proceeding initially through the epoxide (2). Opening of the oxirane ring by the phenoxide anion then leads to 2-hydroxymethyl-1,4-benzodioxane [3663-82-9] (3). Halogenation with thionyl chloride gives 2-chloromethyl-1,4-benzodioxane [2164-33-2] (4). Displacement of the leaving group by piperidine completes the synthesis of piperoxan (5).
References
Abandoned drugs
Alpha blockers
Antihistamines
Benzodioxans
French inventions
1-Piperidinyl compounds | Piperoxan | Chemistry | 393 |
245,564 | https://en.wikipedia.org/wiki/Nitromethane | Nitromethane, sometimes shortened to simply "nitro", is an organic compound with the chemical formula . It is the simplest organic nitro compound. It is a polar liquid commonly used as a solvent in a variety of industrial applications such as in extractions, as a reaction medium, and as a cleaning solvent. As an intermediate in organic synthesis, it is used widely in the manufacture of pesticides, explosives, fibers, and coatings. Nitromethane is used as a fuel additive in various motorsports and hobbies, e.g. Top Fuel drag racing and miniature internal combustion engines in radio control, control line and free flight model aircraft.
Preparation
Nitromethane is produced industrially by combining propane and nitric acid in the gas phase at 350–450 °C. This exothermic reaction produces the four industrially significant nitroalkanes: nitromethane, nitroethane, 1-nitropropane, and 2-nitropropane. The reaction involves free radicals, including the alkoxyl radicals of the type CH3CH2CH2O, which arise via homolysis of the corresponding nitrite ester. These alkoxy radicals are susceptible to C–C fragmentation reactions, which explains the formation of a mixture of products.
Laboratory methods
It can be prepared by other methods that are of instructional value. The reaction of sodium chloroacetate with sodium nitrite in aqueous solution produces this compound:
ClCH2COONa + NaNO2 + H2O → CH3NO2 + NaCl + NaHCO3
Uses
The dominant use of nitromethane is as a precursor reagent. A major derivative is chloropicrin (Cl3CNO2), a widely used pesticide. Nitromethane condenses with formaldehyde (Henry reaction) to eventually give tris(hydroxymethyl)aminomethane ("tris"), a widely used buffer and an ingredient in alkyd resins.
Solvent and stabilizer
The major application is as a stabilizer in chlorinated solvents. As an organic solvent, nitromethane has an unusual combination of properties: highly polar (εr = 36 at 20 °C and μ = 3.5 Debye) but aprotic and weakly basic. This combination makes it useful for dissolving positively charged, strongly electrophilic species. It is a solvent for acrylate monomers, such as cyanoacrylates (more commonly known as "super-glues").
Fuel
Although a minor application in terms of volume, nitromethane also is used as a fuel or fuel additive for sports and hobby. For some applications, it is mixed with methanol in racing cars, boats, and model engines.
Nitromethane is used as a fuel in motor racing, particularly drag racing, as well as for radio-controlled model power boats, cars, planes and helicopters. In this context, nitromethane is commonly referred to as "nitro fuel" or simply "nitro", and is the principal ingredient for fuel used in the "Top Fuel" category of drag racing.
The oxygen content of nitromethane enables it to burn with much less atmospheric oxygen than conventional fuels. During nitromethane combustion, nitric oxide (NO) is one of the major emission products along with CO2 and H2O. Nitric oxide contributes to air pollution, acid rain, and ozone layer depletion. Recent (2020) studies suggest the correct stoichiometric equation for the burning of nitromethane is:
4 CH3NO2 + 5 O2 → 4 CO2 + 6 H2O + 4 NO
The amount of air required to burn 1 kg of gasoline is 14.7 kg, but only 1.7 kg of air is required for 1 kg of nitromethane. Since an engine's cylinder can only contain a limited amount of air on each stroke, 8.6 times as much nitromethane as gasoline can be burned in one stroke. Nitromethane, however, has a lower specific energy: gasoline provides about 42–44 MJ/kg, whereas nitromethane provides only 11.3 MJ/kg. This analysis indicates that nitromethane generates about 2.3 times the power of gasoline when combined with a given amount of oxygen.
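The 8.6 and 2.3 figures follow directly from the quantities quoted above. A quick check, with Python used purely as a calculator (the 43 MJ/kg value is the assumed midpoint of the 42–44 MJ/kg range):

air_per_kg_gasoline = 14.7       # kg of air per kg of gasoline (stoichiometric)
air_per_kg_nitromethane = 1.7    # kg of air per kg of nitromethane (stoichiometric)
energy_gasoline = 43.0           # MJ/kg, midpoint of the 42-44 MJ/kg range
energy_nitromethane = 11.3       # MJ/kg

# How much more nitromethane a fixed cylinder-full of air can burn:
fuel_ratio = air_per_kg_gasoline / air_per_kg_nitromethane
print(round(fuel_ratio, 1))      # 8.6

# Net power advantage for a given amount of oxygen:
power_ratio = fuel_ratio * energy_nitromethane / energy_gasoline
print(round(power_ratio, 1))     # 2.3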
Nitromethane can also be used as a monopropellant, i.e., a propellant that decomposes to release energy without added oxygen. It was first tested as a rocket monopropellant in the 1930s by the Italian Rocket Society. There is renewed interest in nitromethane as a safer replacement for hydrazine monopropellant. The following equation describes this process:
2 CH3NO2 → 2 CO + 2 H2O + H2 + N2
Nitromethane has a laminar combustion velocity of approximately 0.5 m/s, somewhat higher than gasoline, making it suitable for high-speed engines. It also has a somewhat higher flame temperature than gasoline. The high heat of vaporization of 0.56 MJ/kg together with the high fuel flow provides significant cooling of the incoming charge (about twice that of methanol), resulting in reasonably low temperatures.
Nitromethane is usually used with rich air–fuel mixtures because it provides power even in the absence of atmospheric oxygen. When rich air–fuel mixtures are used, hydrogen and carbon monoxide are two of the combustion products. These gases often ignite, sometimes spectacularly, as the normally very rich mixtures of still-burning fuel exit the exhaust ports. Very rich mixtures are necessary to reduce the temperature of combustion-chamber hot parts in order to control pre-ignition and subsequent detonation. Operational details depend on the particular mixture and engine characteristics.
A small amount of hydrazine blended into nitromethane can increase the power output even further. With nitromethane, hydrazine forms an explosive salt that is itself a monopropellant. This unstable mixture poses a severe safety hazard, and the National Hot Rod Association and Academy of Model Aeronautics do not permit its use in competitions.
In model aircraft and car glow fuel, the primary ingredient is generally methanol with some nitromethane (0% to 65%, but rarely over 30%) and 10–20% lubricants (usually castor oil and/or synthetic oil). Even moderate amounts of nitromethane tend to increase the power created by the engine (as the limiting factor is often the air intake), making the engine easier to tune (adjust for the proper air–fuel ratio).
Former uses
It formerly was used in the explosives industry as a component in a binary explosive formulation with ammonium nitrate and in shaped charges, and it was used as a chemical stabilizer to prevent decomposition of various halogenated hydrocarbons.
Other
It can be used as an explosive, when gelled with several percent of gelling agent. This type of mixture is called PLX. Other mixtures include ANNM and ANNMAl – explosive mixtures of ammonium nitrate, nitromethane and aluminium powder.
Reactions
Acid-base properties
Nitromethane is a relatively acidic carbon acid, with a pKa of 17.2 in DMSO solution. This value indicates an aqueous pKa of about 11. It is so acidic because the anion admits an alternative, stabilizing resonance structure.
The acid deprotonates only slowly. Protonation of the conjugate base O2NCH2−, which is nearly isosteric with nitrate, occurs initially at oxygen.
Organic reactions
In organic synthesis, nitromethane is employed as a one-carbon building block. Its acidity allows it to undergo deprotonation, enabling condensation reactions analogous to those of carbonyl compounds. Thus, under base catalysis, nitromethane adds to aldehydes in 1,2-addition in the nitroaldol reaction. Some important derivatives include the pesticide chloropicrin (Cl3CNO2), beta-nitrostyrene, and tris(hydroxymethyl)nitromethane ((HOCH2)3CNO2). Reduction of the latter gives tris(hydroxymethyl)aminomethane ((HOCH2)3CNH2), better known as tris, a widely used buffer. In more specialized organic synthesis, nitromethane serves as a Michael donor, adding to α,β-unsaturated carbonyl compounds via 1,4-addition in the Michael reaction.
Purification
Nitromethane is a popular solvent in organic and electroanalytical chemistry. It can be purified by cooling below its freezing point, washing the solid with cold diethyl ether, followed by distillation.
Safety
Nitromethane has a modest acute toxicity. LD50 (oral, rats) is 1210±322 mg/kg.
Nitromethane is "reasonably anticipated to be a human carcinogen" according to a U.S. government report.
Explosive properties
Nitromethane was not known to be a high explosive until a railroad tank car loaded with it exploded in 1958. After much testing, it was realized that nitromethane is a more energetic high explosive than TNT, although TNT has a higher velocity of detonation (VoD) and brisance. Both of these explosives are oxygen-poor, and some benefit is gained from mixing with an oxidizer, such as ammonium nitrate. Pure nitromethane is an insensitive explosive with a high VoD, but even so inhibitors may be used to reduce the hazards. The tank car explosion was speculated to be due to adiabatic compression, a hazard common to all liquid explosives: small entrained air bubbles compress and superheat with rapid rises in pressure. It was thought that an operator rapidly snapped shut a valve, creating a "hammer-lock" pressure surge.
If mixed with ammonium nitrate, which is used as an oxidizer, it forms an explosive mixture known as ANNM.
Nitromethane is used as a model explosive, along with TNT. It has several advantages as a model explosive over TNT, namely its uniform density and lack of solid post-detonation species that complicate the determination of equation of state and further calculations.
Nitromethane reacts with solutions of sodium hydroxide or methoxide in alcohol to produce an insoluble salt of nitromethane. This substance is a sensitive explosive which reverts to nitromethane under acidic conditions and decomposes in water to form another explosive compound, sodium methazonate, which has a reddish-brown color:
2 CH3NO2 + NaOH → HON=CHCH=NO2Na + 2 H2O
Nitromethane's reaction with solid sodium hydroxide is hypergolic.
See also
Top Fuel
Adiabatic flame temperature, a thermodynamic calculation of the flame temperature of nitromethane
Dinitromethane
Model engine
Trinitromethane
Tetranitromethane
RE factor
References
Cited sources
Further reading
External links
WebBook page for nitromethane
History of Nitromethane
CDC – NIOSH Pocket Guide to Chemical Hazards
Nitroalkanes
Nitro solvents
Fuels
Rocket fuels
Liquid explosives
Explosive chemicals
Fuel additives
Drag racing
IARC Group 2B carcinogens
Organic compounds with 1 carbon atom
Monopropellants | Nitromethane | Chemistry | 2,368 |
68,727,206 | https://en.wikipedia.org/wiki/Time%20in%20Gabon | Time in Gabon is given by a single time zone, officially denoted as West Africa Time (WAT; UTC+01:00). Gabon adopted WAT on 1 January 1912, and has never observed daylight saving time.
IANA time zone database
In the IANA time zone database, Gabon is given one zone in the file zone.tab – Africa/Libreville. "GA" is the country's ISO 3166-1 alpha-2 country code, under which the data for Gabon is listed in zone.tab of the IANA time zone database.
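As an illustrative sketch (the code is not part of the IANA database itself; it assumes Python 3.9+ with the standard zoneinfo module), the Africa/Libreville zone named above can be used to obtain the current time and offset for Gabon:

    from datetime import datetime
    from zoneinfo import ZoneInfo  # standard library, Python 3.9+

    # Resolve Gabon's single IANA zone and show the fixed WAT offset.
    now = datetime.now(ZoneInfo("Africa/Libreville"))
    print(now.isoformat())   # ends in +01:00
    print(now.utcoffset())   # 1:00:00

Because Gabon has never observed daylight saving time, the offset printed is always +01:00.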
See also
List of time zones by country
List of UTC time offsets
References
External links
Current time in Gabon at Time.is
Time in Gabon at TimeAndDate.com
Time by country
Geography of Gabon
Time in Africa | Time in Gabon | Physics | 167 |
33,673,135 | https://en.wikipedia.org/wiki/Behavioral%20ethics | Behavioral ethics is a field of social scientific research that seeks to understand how individuals behave when confronted with ethical dilemmas. It refers to behavior that is judged within the context of social situations and compared to generally accepted behavioral norms.
Ethics, a branch of philosophy, is defined as the communal understanding of social and normative values in a particular society. Compared to normative ethics, which determines the 'right' or 'wrong' of individual situations, behavioral ethics is more similar to applied ethics, a subdivision dedicated to the more practical and real-world considerations of moral dilemmas.
History of behavioral ethics
The history of behavioral ethics includes the development of scientific research into the psychological foundations of ethical decision-making and behavior. Although the field does not have a precise starting point, its development can be traced through important milestones in psychology, sociology, and related disciplines. On this view, traditional moral philosophy is more akin to art or literature, creatively illuminating the human condition without producing the kind of verifiable facts about actual human moral behavior that behavioral ethics seeks.
Behavioral ethics gained greater emphasis in the middle of the 20th century, when psychologists and social scientists began to study human behavior in ethical dilemmas. Early experiments such as the Milgram experiment (1961) and the Stanford prison experiment (1971) shed light on how situational factors can influence unethical behavior.
The history of behavioral ethics can be interpreted as a journey through the development of understanding of human morality and decision making. It begins with ancient philosophical studies of ethics, where thinkers such as Aristotle considered the nature of virtue and the good life. For example, Aristotle asserts in Book II of the Nicomachean Ethics that the man who possesses character excellence will tend to do the right thing, at the right time, and in the right way. Bravery, and the correct regulation of one's bodily appetites, are examples of character excellence or virtue, so acting bravely and acting temperately are examples of excellent activities. Over time, as society developed and became more complex, researchers in various fields began to study the psychological bases of ethical behavior, and behavioral ethics developed models of human morality based on the view that morality is an emergent property of the evolutionary dynamic that gave rise to our species.
The moral philosophy of the 18th-century philosopher Immanuel Kant, characterized by the principles of rationality, autonomy, and universalizability, serves as a cornerstone of the historical trajectory of behavioral ethics. Kant argued against utilitarianism in favor of a deontological approach, exemplified by his best-known principle, universalizability; his emphasis on these principles provides valuable insight into the psychological foundations of ethical decision-making and behavior and enriches the understanding of human morality in behavioral ethics. According to this view, judgments of right or wrong are determined by the motives of the person acting, not the consequences of their actions.
Furthermore, the emergence of applied ethics in the latter half of the 20th century marked a significant turning point in the field of behavioral ethics. Applied ethics involves the application of ethical principles to real-world issues and dilemmas, such as medical ethics, environmental ethics, and bioethics. This interdisciplinary approach not only extends the theoretical understanding of ethical decision-making but also provides practical frameworks for addressing ethical challenges in various domains of human activity. Scholars and practitioners in applied ethics draw upon insights from behavioral science to develop guidelines, codes of conduct, and decision-making tools aimed at promoting ethical behavior and resolving ethical conflicts in complex societal contexts. Through the integration of empirical research and ethical theory, applied ethics continues to contribute to the ongoing evolution of behavioral ethics as a dynamic and multidisciplinary field.
Behavioral models
Behavioral ethics led to the development of various ethical models and theories.
Bystander intervention
Bystander intervention describes the phenomenon whereby ethical behavior is far harder to display because of what is learned from social institutions such as family, school, and religion. Because of this, intervening in an ethically challenging situation requires an individual to go through several steps, and failure to complete all of them means a failure to behave ethically.
Rational actor model
In the realm of behavioral ethics, the rational actor model serves as a fundamental framework for understanding decision-making. Traditional economic theory often assumes that individuals are rational actors who make decisions by carefully weighing costs and benefits to maximize their own self-interest. However, behavioral ethics suggests that human behavior is influenced by psychological, social, and contextual factors, leading to departures from pure rationality.
Historically, philosophical perspectives on morality have primarily relied on theoretical analysis and introspection, often with minimal consideration of real-life human conduct. The models of human morality advanced by behavioral ethics are instead based on the view that morality is an emergent and still-developing property of the evolutionary dynamic that gave rise to our species.
The rational actor model states that rational people make their decisions based on how much the consequences of those decisions will benefit them. This implies that individuals will assess all available options, weigh them against their personal objectives, and ultimately select the most favorable one. Consistently opting for the best choice contrasts with behavioral ethics, where decisions may be swayed by a broader array of considerations, including moral and ethical principles.
Moreover, the rational actor model's focus on rationality as the primary factor shaping human decision-making fails to recognize the complexities of moral behavior. Studies in behavioral ethics have revealed that individuals frequently display systematic departures from rationality, such as framing effects and overconfidence biases, which can profoundly impact moral decision-making and behavior.
Socio-cultural factors, including social norms and group dynamics, significantly influence moral behavior, underscoring the limitations of solely individualistic and rational perspectives. By recognizing these intricacies and integrating findings from behavioral science, behavioral ethics provides a more nuanced comprehension of human morality, overcoming the limitations of the rational actor model's assumptions. This holistic approach considers how social contexts, psychological biases, and emotional responses intertwine to shape ethical decision-making.
Thought experiments
The trolley problem
The trolley problem, first introduced in 1967 by Philippa Foot, is a classic ethical dilemma. In the problem, a runaway trolley is headed straight for five people who are restrained and unable to move; however, you have the option to pull a lever and divert the trolley to another track where there is only one restrained individual. The ethically 'correct' decision would be to pull the lever, killing one individual and saving the five, as the lives of five would outweigh the life of one.
However, when viewing the trolley problem through the lens of behavioral ethics, it becomes more difficult to make an ethically 'correct' decision. Variations of the problem have been proposed that place more blame or guilt on the decision-maker, causing individuals to deviate from the expected response of pulling the lever. One example is Frank Chapman Sharp's earlier version of the problem (1905), in which the railway's switchman controls the lever and the single restrained individual is the switchman's child.
The prisoner's dilemma
The prisoner's dilemma, while not necessarily an ethical dilemma but a game-theory thought experiment, can still be viewed through behavioral ethics. It features two prisoners who can each either testify or stay silent. If one prisoner testifies while the other stays silent, the testifier goes free and the silent prisoner is given a long sentence. If neither testifies, both are given a shorter sentence, and if both testify, both are given a medium sentence. The prisoners are unable to speak or communicate in any way with each other. While the best joint decision would be for both prisoners to stay silent and receive the shortest sentences, fear of the other prisoner testifying typically leads to both prisoners testifying and both receiving the medium sentence, as the sketch below illustrates.
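A minimal sketch of the dilemma's payoff logic in Python (the specific sentence lengths are illustrative assumptions, not from the source):

    # Sentence in years, indexed by (my_action, other_action).
    # Illustrative numbers: 0 = go free, 1 = short, 5 = medium, 10 = long.
    SENTENCE = {
        ("silent", "silent"): 1,
        ("silent", "testify"): 10,
        ("testify", "silent"): 0,
        ("testify", "testify"): 5,
    }

    for other in ("silent", "testify"):
        best = min(("silent", "testify"), key=lambda me: SENTENCE[(me, other)])
        print(f"If the other prisoner chooses to stay {other}, my best reply is to {best}.")

Both lines print "testify": whatever the other prisoner does, testifying yields a shorter sentence, so both rational prisoners testify and serve the medium sentence, even though mutual silence would have been better for both.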
The trolley problem and the prisoner's dilemma both place individuals in decision-making situations that carry ethical questions. In each, an individual is asked to make a decision that affects another person. In the prisoner's dilemma, the principles of reciprocity and cooperation come into play, but not all who participate behave in the same manner. In the trolley problem an individual has to choose which group of people to save. Both of these experiments shed light on how people behave when confronted with ethical dilemmas.
Small details can also be missed in research. For instance, a 2008 study reported that students were less likely to cheat after thinking about moral rules like the Ten Commandments, and on this basis some suggested using moral reminders to deter cheating. When another study later attempted to replicate the finding, however, it did not obtain the same results. This illustrates that matters can be more complicated than they seem, even for well-known findings.
Behavioral ethics in practice
Education
In ethics teaching and research, behavioral ethics is arguably the "next big thing", because its research agenda has generated much previously unavailable knowledge on why and how people choose and act when confronted with ethical subjects. In the extant body of ethics course books and course plans from fields such as medicine, teaching, accounting, and journalism, "moral reasoning", along with associated skills, is often an established objective. Behavioral ethics, however, is distinguished from the concept of moral reasoning because ethical behavior is primarily driven by a diverse set of intuitive processes over which individuals have little conscious control. Within education it is important to ensure that ethics are taken into consideration. Ethical conduct in teaching is a vital factor in making sure students are taught fairly. An individual's "preconceived notions and opinions can emerge through our language choice, teaching methods, grading practices, and accessibility practices and it can have a tremendous impact on our students' learning and connection to school". Biases can influence actions and perceptions of students; therefore, teaching should be free of bias. Behavioral ethics calls for a model of ethics education that focuses not on directly modeling good ethical reasoning but on the way people can think clearly and impartially about ethical problems.
Behavioral law and economics
Clarifying the difference between behavioral law and economics (BLE) and behavioral ethics (BE) is important. Compared with BLE, BE has had less influence on broad legal academic circles. In addition, unlike BLE, BE was advanced as part of the management literature, which is less related to legal scholarship than BLE is, and thus less likely to have an impact on it. In terms of law, behavioral ethics plays an important role in ensuring social order. Studies conducted by Simon Gachter support interesting conclusions regarding conditional cooperation. "Conditional cooperation is the tendency of individuals to engage in cooperation depending on the degree of cooperation of other individuals, and is argued to be one of the main sources of high contributions in social dilemmas". Gachter's findings illustrate how delicate the concept of social order is: "Several psychological mechanisms support conditional cooperation. Conditional cooperation is a likely pattern of behavior because various psychological mechanisms predict it". When it comes to ethics and economics, behavioral economics allows an intersection between the philosophical foundation of behavior and the complexity of economics.
Justice
Behavioral ethics offers a fascinating perspective on the field of justice, exploring how individuals' moral decision-making processes intersect with legal and ethical frameworks. In the context of justice, behavioral ethics sheds light on the psychological, social, and cognitive factors that influence how individuals perceive fairness, make ethical judgments, and behave within legal systems.
One key aspect of behavioral ethics in justice is the recognition of cognitive biases that can shape individuals' perceptions of fairness and influence their decision-making. For example, research has shown that individuals may exhibit biases such as the fairness heuristic bias, where they rely on superficial cues to judge the fairness of a situation, rather than considering objective criteria. These biases can impact how individuals perceive legal proceedings, sentencing decisions, and the outcomes of judicial processes.
Behavioral ethics researchers have delved into the correlation between employees' perceptions of justice and their engagement in ethical or unethical conduct. Since the 1990s, organizational justice has emerged as a prominent area of study within organizational psychology. Coined by Jerald Greenberg in 1987, organizational justice encompasses employees' perceptions of the fairness of organizational events, policies, and practices. This concept has been further amplified through influential research on distributive, procedural, and interactional justice, focusing on both theoretical advancements and empirical investigations into the formation and consequences of these justice perceptions.
The exploration of justice perceptions has yielded insights into their profound impact on various employee attitudes and behaviors. Positive correlations have been observed between perceptions of justice and factors such as trust, job satisfaction, and organizational commitment. Conversely, perceptions of injustice have been associated with detrimental outcomes, including increased turnover rates and engagement in counterproductive behaviors such as theft and unethical conduct, which unfortunately are not uncommon occurrences within organizational settings.
By integrating insights from behavioral ethics into discussions of justice, policymakers, legal professionals, and scholars can gain a deeper understanding of the psychological and social dynamics that influence legal decision-making and behavior. This understanding can inform efforts to promote fairness, equity, and ethical conduct within legal systems, ultimately contributing to the realization of justice for all individuals within society.
Public health
Behavioral ethics in public health emphasizes the importance of thoroughly evaluating the ethics of policies aimed at encouraging healthy behavior change. It suggests three key considerations: valuing well-being alongside physical health, ensuring equitable distribution of health benefits, and monitoring for unintended consequences. Prioritizing these aspects can yield policies that genuinely enhance overall health and well-being for everyone.
Some health policies designed to encourage healthy behaviors might not have the intended positive impact. They could end up being unfair or causing unintended harm, especially if they mainly benefit people who are already healthy. It is important for those making decisions about public health to carefully consider these factors and use clear guidelines to ensure that their policies are fair and truly beneficial for everyone.
Medicine
Researchers in behavioral ethics have identified the healthcare sector as particularly susceptible to the influence of healthcare providers on individuals' decisions. Researchers in ethical behavior emphasize the responsibility of physicians and medical practitioners to communicate clearly with patients about their condition and treatment options, without bias. Failure to do so can lead patients to make decisions based on skewed information, resulting in undesired outcomes. Informed consent, which entails providing patients with unbiased information about their condition and treatment options, plays a crucial role in guiding patients' choices. Otherwise, an individual may feel pressured or forced by a physician to undergo a specific treatment that they might not feel comfortable undertaking for religious or personal reasons.
Additionally, behavioral ethics in healthcare extends to the allocation and development of essential resources. Clinical trials for new medicines and vaccines often involve participants from disadvantaged backgrounds, particularly from poor and developing countries, who may feel compelled to participate for financial reasons. This exploitation of vulnerable individuals raises significant ethical concerns within the field.
Similarly, the distribution of medical resources to impoverished regions raises ethical dilemmas. Limited resources force relief organizations to prioritize recipients, leading to debates over who receives life-saving aid. Ethical researchers characterize this situation as a crisis, as it involves choosing which individuals will receive vital resources and which will not.
Business
If firms can use the principles of behavioral psychology to alter consumers' behavior and thereby increase sales, and governments can use those same principles to change people's behavior and promote policy targets, then individuals and their employers can apply the related principles of behavioral ethics to promote ethical behavior in the company and in society.
At most business conferences, employees are required to report on the progress of their missions. This can create an ethical dilemma, because external pressure may lead them to report their performance as better than it actually is.
When reporting progress, employees are typically asked to document the success they have had. As simple as this request is, it can cause an ethical problem if not respected: employees need to ensure they are giving accurate information, not only to protect the integrity of the company but also to ensure that their coworkers and managers are working from correct reports.
In the workplace, ethical and unethical behavior has a major impact on the culture of the company. It is important to have company-wide appreciation for principles such as honesty, transparency, and integrity. These are critical traits of an ethically behaving employee, as they strongly reflect both the personal integrity of the individual and the company as a whole.
Unethical behavior
Unethical behavior is an action that falls outside of what is considered morally appropriate for a person, a job, or a company. Many experts define unethical behavior as any harmful action or sequence of actions that violates the moral norms of the community. Individuals can act unethically, as can businesses, professionals, and politicians.
Research results have further shown "that people low in moral character are likely to eventually dominate cheating-enabling environments, where they then cheat extensively".
Unethical behavior in the workplace is an important and consequential issue that can decrease morale and productivity at the level of the individual, group, or company. Some examples of unethical behavior are gossiping about a colleague behind their back, taking time off work by lying about a sickness, manipulating work hours or time sheets, and taking care of one's own personal work or business on work time.
Why does unethical behavior typically occur? There may be too much pressure to succeed, and individuals may turn to unethical actions to attain unrealistic expectations or goals. Individuals may be afraid to speak up. There may be a lack of training in one's organization, or no policy for reporting, so that no one ever knows the behavior has been occurring. Managers may also set a bad example; with a bad leader come bad workers.
Unethical behavior in business
Unethical behavior in a business context covers actions that fail to meet the accepted standards of business conduct, which extend beyond mere legal requirements to morally accepted norms.
Unethical behavior can be intended to benefit solely the perpetrator, or the entire business organization. Regardless, participating in unethical behavior can lead to negative morale and an overall negative work culture.
Examples of unethical behavior in business and environment can include:
Deliberate deception
Violation of conscience
Failure to honor commitments
Unlawful conduct
Disregard of company policy
Yahoo!
Scott Thompson, the former CEO of Yahoo!, was accused and found guilty of embellishing his resume. He claimed to have degrees in both accounting and computer science when he had only received a degree in accounting. CEOs have significantly higher ethical standards to maintain, and as a result Thompson was quickly replaced by Ross Levinsohn on an interim basis.
Apple
Apple was accused of equipping iPhones with batteries that would purposely deteriorate over time, which would cause sales of iPhones to increase by millions each year. When Apple was first accused of this ethical issue, it initially ignored the claims before denying them. After losing, Apple agreed to pay up to $113 million to settle the claims.
General Motors
General Motors was forced to recall more than 3 million cars that it had released to the public for ignition issues in 2014. The problem in this specific scenario was that employees in the company were found to have known about the issues, or their likely causes, since 2005. The violation was not taken seriously by executives until customers began to complain.
Unethical behavior in public health
Researchers and managers are two separate groups within pharmaceutical companies that need awareness of unethical behavior. For researchers, misconduct can take the form of changing the main focus of a trial after it is complete, because the original hypothesis was not confirmed, and instead reporting other positive outcomes found in the trial.
Unethical behavior in justice
Unethical behavior within the context of justice encompasses a wide range of actions that contravene moral principles or legal norms, often resulting in harm or injustice to others. Despite the overarching goal of justice to uphold fairness and equity, unethical behavior can manifest in various forms within legal systems, challenging the integrity and effectiveness of the pursuit of justice.
One example of unethical behavior in the realm of justice is corruption, where individuals abuse their positions of power or authority for personal gain or advantage. This can include bribery, embezzlement, or extortion, all of which undermine the impartiality and legitimacy of legal institutions and erode public trust in the fairness of the justice system.
Another example is the abuse of discretion by legal professionals, such as judges or prosecutors, who may engage in biased decision-making or selective enforcement of laws based on personal prejudices or external influences. This can lead to unequal treatment under the law and perpetuate systemic injustices, particularly for marginalized or vulnerable populations.
Unethical behavior can also occur within legal organizations themselves: lawyers may engage in unethical practices such as conflicts of interest, and officers within law enforcement agencies may engage in misconduct such as evidence tampering or abuse of power.
Addressing unethical behavior within the justice system requires a varied approach that includes promoting ethical awareness and accountability among legal professionals, implementing effective oversight mechanisms to prevent and detect misconduct, and fostering a culture of integrity and transparency within legal institutions. By upholding ethical standards and ensuring the impartial and equitable administration of justice, society can uphold the fundamental principles of fairness, equality, and the rule of law.
Unethical behavior through the rational actor model
Unethical behavior in practice often stems from the idealized assumptions of the rational actor model, which assumes that individuals decide rationally so as to maximize their self-interest. Despite the model's premise, real-world behavior frequently departs from rationality due to various cognitive, social, and emotional factors.
Cognitive biases, such as overconfidence bias or framing effects, can distort individuals' perceptions of ethicality and lead them to justify unethical behavior. Social factors, such as peer pressure or organizational culture, can also play a significant role in promoting or condoning unethical conduct, even among individuals who may otherwise prioritize ethical considerations.
Moreover, emotional factors, such as fear, greed, or anger, can cloud individuals' judgment and lead them to prioritize short-term gains over long-term ethical considerations. In some cases, individuals may engage in unethical behavior due to a sense of moral disengagement, where they mentally distance themselves from the consequences of their actions or rationalize their behavior through cognitive distortions.
Overall, while the rational actor model provides a theoretical framework for understanding decision-making, it often fails to capture the complexities of human behavior, particularly in the realm of ethics. Unethical behavior in practice highlights the need for a more nuanced understanding of decision-making processes, one that considers the interplay of cognitive, social, and emotional factors in shaping individuals' ethical judgments and actions.
References
Descriptive ethics
Human behavior | Behavioral ethics | Biology | 4,697 |
8,591,795 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Phoenix | This is the list of stars in the constellation Phoenix.
See also
List of stars by constellation
References
List
Phoenix | List of stars in Phoenix | Astronomy | 23 |
249,402 | https://en.wikipedia.org/wiki/Computer%20terminal | A computer terminal is an electronic or electromechanical hardware device that can be used for entering data into, and transcribing data from, a computer or a computing system. Most early computers only had a front panel to input or display bits and had to be connected to a terminal to print or input text through a keyboard. Teleprinters were used as early-day hard-copy terminals and predated the use of a computer screen by decades. The computer would typically transmit a line of data which would be printed on paper, and accept a line of data from a keyboard over a serial or other interface. Starting in the mid-1970s with microcomputers such as the Sphere 1, Sol-20, and Apple I, display circuitry and keyboards began to be integrated into personal and workstation computer systems, with the computer handling character generation and outputting to a CRT display such as a computer monitor or, sometimes, a consumer TV, but most larger computers continued to require terminals.
Early terminals were inexpensive devices but very slow compared to punched cards or paper tape for input; with the advent of time-sharing systems, terminals slowly pushed these older forms of interaction from the industry. Related developments were the improvement of terminal technology and the introduction of inexpensive video displays. Early Teletypes printed at a communications speed of only 75 baud, or 10 five-bit characters per second, and by the 1970s speeds of video terminals had improved to 2400 or 9600 bits per second. Similarly, the speed of remote batch terminals improved over the course of the decade, with higher speeds possible on more expensive terminals.
The function of a terminal is typically confined to transcription and input of data; a device with significant local, programmable data-processing capability may be called a "smart terminal" or fat client. A terminal that depends on the host computer for its processing power is called a "dumb terminal" or a thin client. In the era of serial (RS-232) terminals there was a conflicting usage of the term "smart terminal" as a dumb terminal with no user-accessible local computing power but a particularly rich set of control codes for manipulating the display; this conflict was not resolved before hardware serial terminals became obsolete.
A personal computer can run terminal emulator software that replicates functions of a real-world terminal, sometimes allowing concurrent use of local programs and access to a distant terminal host system, either over a direct serial connection or over a network using, e.g., SSH. Today few if any dedicated computer terminals are being manufactured, as time sharing on large computers has been replaced by personal computers, handheld devices and workstations with graphical user interfaces. User interactions with servers use either software such as Web browsers, or terminal emulators, with connections over high-speed networks.
History
The console of Konrad Zuse's Z3 had a keyboard in 1941, as did the Z4 in 1942–1945. However, these consoles could only be used to enter numeric inputs and were thus analogous to those of calculating machines; programs, commands, and other data were entered via paper tape. Both machines had a row of display lamps for results.
In 1956, the Whirlwind Mark I computer became the first computer equipped with a keyboard-printer combination with which to support direct input of data and commands and output of results. That device was a Friden Flexowriter, which would continue to serve this purpose on many other early computers well into the 1960s.
Categories
Hard-copy terminals
Early user terminals connected to computers were, like the Flexowriter, electromechanical teleprinters/teletypewriters (TeleTYpewriter, TTY), such as the Teletype Model 33, originally used for telegraphy; early Teletypes were typically configured as Keyboard Send-Receive (KSR) or Automatic Send-Receive (ASR). Some terminals, such as the ASR Teletype models, included a paper tape reader and punch which could record output such as a program listing. The data on the tape could be re-entered into the computer using the tape reader on the teletype, or printed to paper. Teletypes used the current loop interface that was already used in telegraphy. A less expensive Read Only (RO) configuration was available for the Teletype.
Custom-designed keyboard/printer terminals that came later included the IBM 2741 (1965) and the DECwriter (1970). Respective top speeds of teletypes, the IBM 2741, and the LA30 (an early DECwriter) were 10, 15, and 30 characters per second. Although at that time "paper was king", the speed of interaction was relatively limited.
The DECwriter was the last major printing-terminal product. It faded away after 1980 under pressure from video display units (VDUs), with the last revision (the DECwriter IV of 1982) abandoning the classic teletypewriter form for one more resembling a desktop printer.
Video display unit
A video display unit (VDU) displays information on a screen rather than printing text to paper and typically uses a cathode-ray tube (CRT). VDUs in the 1950s were typically designed for displaying graphical data rather than text and were used in, e.g., experimental computers at institutions like MIT; computers used in academia, government and business, sold under brand names like DEC, ERA, IBM and UNIVAC; military computers supporting specific defence applications such as ballistic missile warning systems and radar/air defence coordination systems like BUIC and SAGE.
Two early landmarks in the development of the VDU were the Univac Uniscope and the IBM 2260, both in 1964. These were block-mode terminals designed to display a page at a time, using proprietary protocols; in contrast to character-mode devices, they enter data from the keyboard into a display buffer rather than transmitting them immediately. In contrast to later character-mode devices, the Uniscope used synchronous serial communication over an EIA RS-232 interface to communicate between the multiplexer and the host, while the 2260 used either a channel connection or asynchronous serial communication between the 2848 and the host. The 2265, related to the 2260, also used asynchronous serial communication.
The Datapoint 3300 from Computer Terminal Corporation, announced in 1967 and shipped in 1969, was a character-mode device that emulated a Model 33 Teletype. This reflects the fact that early character-mode terminals were often deployed to replace teletype machines as a way to reduce operating costs.
The next generation of VDUs went beyond teletype emulation with an addressable cursor that gave them the ability to paint two-dimensional displays on the screen. Very early VDUs with cursor addressability included the VT05 and the Hazeltine 2000 operating in character mode, both from 1970. Despite this capability, early devices of this type were often called "glass TTYs". Later, the term "glass TTY" tended to be retrospectively narrowed to devices without full cursor addressability.
The classic era of the VDU began in the early 1970s and was closely intertwined with the rise of time sharing computers. Important early products were the ADM-3A, VT52, and VT100. These devices used no complicated CPU, instead relying on individual logic gates, LSI chips, or microprocessors such as the Intel 8080. This made them inexpensive and they quickly became extremely popular input-output devices on many types of computer system, often replacing earlier and more expensive printing terminals.
After 1970 several suppliers gravitated to a set of common standards:
ASCII character set (rather than, say, EBCDIC or anything specific to one company), but early/economy models often supported only capital letters (such as the original ADM-3, the Data General model 6052 – which could be upgraded to a 6053 with a lower-case character ROM – and the Heathkit H9)
RS-232 serial ports (25-pin, ready to connect to a modem, yet some manufacturer-specific pin usage extended the standard, e.g. for use with 20-mA current loops)
24 lines (or possibly 25 – sometimes a special status line) of 72 or 80 characters of text (80 was the same as IBM punched cards). Later models sometimes had two character-width settings.
Some type of cursor that can be positioned (with arrow keys or "home" and other direct cursor address setting codes).
Implementation of at least 3 control codes: Carriage Return (Ctrl-M), Line Feed (Ctrl-J), and Bell (Ctrl-G), but usually many more, such as escape sequences to provide underlining, dim or reverse-video character highlighting, and especially to clear the display and position the cursor (a brief sketch of such codes follows this list).
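As an illustration of these conventions, the following Python sketch emits the common ANSI/VT100 sequences (an assumption for the example; terminals of the era varied, as the next paragraphs describe):

    import sys

    BEL, LF, CR, ESC = "\x07", "\n", "\r", "\x1b"

    sys.stdout.write(ESC + "[2J" + ESC + "[H")    # clear the display, home the cursor
    sys.stdout.write(ESC + "[10;20H")             # move the cursor to row 10, column 20
    sys.stdout.write(ESC + "[4m" + "underlined" + ESC + "[0m ")      # underline, then reset
    sys.stdout.write(ESC + "[7m" + "reverse video" + ESC + "[0m")    # highlight, then reset
    sys.stdout.write(CR + LF + BEL)               # carriage return, line feed, bell
    sys.stdout.flush()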
The experimental era of serial VDUs culminated with the VT100 in 1978. By the early 1980s, there were dozens of manufacturers of terminals, including Lear-Siegler, ADDS, Data General, DEC, Hazeltine Corporation, Heath/Zenith, Hewlett-Packard, IBM, TeleVideo, Volker-Craig, and Wyse, many of which had incompatible command sequences (although many used the early ADM-3 as a starting point).
The great variations in the control codes between makers gave rise to software that identified and grouped terminal types so the system software would correctly display input forms using the appropriate control codes; in Unix-like systems the termcap or terminfo files, the stty utility, and the TERM environment variable would be used. In Data General's Business BASIC software, for example, at login time a sequence of codes was sent to the terminal to try to read the cursor's position or the 25th line's contents, using different manufacturers' control code sequences, and the terminal-generated response would determine a single-digit number (such as 6 for Data General Dasher terminals, 4 for ADM 3A/5/11/12 terminals, 0 or 2 for TTYs with no special features) that would be available to programs to say which set of codes to use.
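On modern Unix-like systems this lookup survives in the terminfo database. A minimal sketch using Python's standard curses binding (the capability names clear and cup are standard terminfo names; the exact bytes returned depend on the TERM setting):

    import curses

    curses.setupterm()                      # consults $TERM and the terminfo database
    clear = curses.tigetstr("clear")        # byte sequence that clears this terminal
    cup = curses.tigetstr("cup")            # parameterized cursor-positioning capability
    print(repr(clear))                      # e.g. b'\x1b[H\x1b[2J' on an ANSI-like terminal
    print(repr(curses.tparm(cup, 9, 19)))   # bytes to move to row 10, column 20 (0-based)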
The great majority of terminals were monochrome, manufacturers variously offering green, white or amber and sometimes blue screen phosphors. (Amber was claimed to reduce eye strain). Terminals with modest color capability were also available but not widely used; for example, a color version of the popular Wyse WY50, the WY350, offered 64 shades on each character cell.
VDUs were eventually displaced from most applications by networked personal computers, at first slowly after 1985 and with increasing speed in the 1990s. However, they had a lasting influence on PCs. The keyboard layout of the VT220 terminal strongly influenced the Model M shipped on IBM PCs from 1985, and through it all later computer keyboards.
Although flat-panel displays were available since the 1950s, cathode-ray tubes continued to dominate the market until the personal computer had made serious inroads into the display terminal market. By the time cathode-ray tubes on PCs were replaced by flatscreens after the year 2000, the hardware computer terminal was nearly obsolete.
Character-oriented terminals
A character-oriented terminal is a type of computer terminal that communicates with its host one character at a time, as opposed to a block-oriented terminal that communicates in blocks of data. It is the most common type of data terminal, because it is easy to implement and program. Connection to the mainframe computer or terminal server is achieved via RS-232 serial links, Ethernet or other proprietary protocols.
Character-oriented terminals can be "dumb" or "smart". Dumb terminals are those that can interpret a limited number of control codes (CR, LF, etc.) but do not have the ability to process special escape sequences that perform functions such as clearing a line, clearing the screen, or controlling cursor position. In this context dumb terminals are sometimes dubbed glass Teletypes, for they essentially have the same limited functionality as does a mechanical Teletype. This type of dumb terminal is still supported on modern Unix-like systems by setting the TERM environment variable to dumb. Smart or intelligent terminals are those that also have the ability to process escape sequences, in particular the VT52, VT100 or ANSI escape sequences.
Text terminals
A text terminal, or often just terminal (sometimes text console) is a serial computer interface for text entry and display. Information is presented as an array of pre-selected formed characters. When such devices use a video display such as a cathode-ray tube, they are called a "video display unit" or "visual display unit" (VDU) or "video display terminal" (VDT).
The system console is often a text terminal used to operate a computer. Modern computers have a built-in keyboard and display for the console. Some Unix-like operating systems such as Linux and FreeBSD have virtual consoles to provide several text terminals on a single computer.
The fundamental type of application running on a text terminal is a command-line interpreter or shell, which prompts for commands from the user and executes each command after a press of the Enter key. This includes Unix shells and some interactive programming environments. In a shell, most of the commands are small applications themselves.
Another important application type is that of the text editor. A text editor typically occupies the full area of display, displays one or more text documents, and allows the user to edit the documents. The text editor has, for many uses, been replaced by the word processor, which usually provides rich formatting features that the text editor lacks. The first word processors used text to communicate the structure of the document, but later word processors operate in a graphical environment and provide a WYSIWYG simulation of the formatted output. However, text editors are still used for documents containing markup such as DocBook or LaTeX.
Programs such as Telix and Minicom control a modem and the local terminal to let the user interact with remote servers. On the Internet, telnet and ssh work similarly.
In the simplest form, a text terminal is like a file. Writing to the file displays the text and reading from the file produces what the user enters. In Unix-like operating systems, there are several character special files that correspond to available text terminals. For other operations, there are special escape sequences, control characters and termios functions that a program can use, most easily via a library such as ncurses. For more complex operations, the programs can use terminal specific ioctl system calls. For an application, the simplest way to use a terminal is to simply write and read text strings to and from it sequentially. The output text is scrolled, so that only the last several lines (typically 24) are visible. Unix systems typically buffer the input text until the Enter key is pressed, so the application receives a ready string of text. In this mode, the application need not know much about the terminal. For many interactive applications this is not sufficient. One of the common enhancements is command-line editing (assisted with such libraries as readline); it also may give access to command history. This is very helpful for various interactive command-line interpreters.
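A minimal sketch of the line-buffered ("cooked") versus immediate ("raw") input distinction described above, using Python's standard termios and tty modules (Unix-only; the behavior is standard, but the snippet itself is illustrative):

    import sys
    import termios
    import tty

    # In the default line-buffered ("cooked") mode, read() returns only after Enter.
    # Raw mode delivers each keypress immediately, as full-screen programs require.
    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)           # remember the current terminal settings
    try:
        tty.setraw(fd)                      # raw mode: no line buffering, no echo
        ch = sys.stdin.read(1)              # returns after a single keypress
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # restore cooked mode
    print(f"read one key: {ch!r}")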
Even more advanced interactivity is provided with full-screen applications. Those applications completely control the screen layout; also they respond to key-pressing immediately. This mode is very useful for text editors, file managers and web browsers. In addition, such programs control the color and brightness of text on the screen, and decorate it with underline, blinking and special characters (e.g. box-drawing characters). To achieve all this, the application must deal not only with plain text strings, but also with control characters and escape sequences, which allow moving the cursor to an arbitrary position, clearing portions of the screen, changing colors and displaying special characters, and also responding to function keys. The great problem here is that there are many different terminals and terminal emulators, each with its own set of escape sequences. In order to overcome this, special libraries (such as curses) have been created, together with terminal description databases, such as Termcap and Terminfo.
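A minimal full-screen sketch using the curses library, which hides the terminal-specific escape sequences behind the terminfo database (illustrative only):

    import curses

    def main(stdscr):
        # curses consults the terminfo database, so the same program
        # works unchanged across different terminal types.
        curses.curs_set(0)                            # hide the cursor
        stdscr.clear()
        stdscr.addstr(0, 0, "Full-screen demo", curses.A_REVERSE)
        stdscr.addstr(2, 4, "Press any key to exit.", curses.A_UNDERLINE)
        stdscr.refresh()
        stdscr.getch()                                # respond to a single keypress

    curses.wrapper(main)  # sets up the terminal and restores it on exit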
Block-oriented terminals
A block-oriented terminal or block mode terminal is a type of computer terminal that communicates with its host in blocks of data, as opposed to a character-oriented terminal that communicates with its host one character at a time. A block-oriented terminal may be card-oriented, display-oriented, keyboard-display, keyboard-printer, printer or some combination.
The IBM 3270 is perhaps the most familiar implementation of a block-oriented display terminal, but most mainframe computer manufacturers and several other companies produced them. The description below is in terms of the 3270, but similar considerations apply to other types.
Block-oriented terminals typically incorporate a buffer which stores one screen or more of data, and also stores data attributes, not only indicating appearance (color, brightness, blinking, etc.) but also marking the data as being enterable by the terminal operator vs. protected against entry, as allowing the entry of only numeric information vs. allowing any characters, etc. In a typical application the host sends the terminal a preformatted panel containing both static data and fields into which data may be entered. The terminal operator keys data, such as updates in a database entry, into the appropriate fields. When entry is complete (or ENTER or a PF key is pressed on 3270s), a block of data, usually just the data entered by the operator (modified data), is sent to the host in one transmission. The 3270 terminal buffer (at the device) could be updated on a single-character basis, if necessary, because of the existence of a "set buffer address order" (SBA) that usually preceded any data to be written/overwritten within the buffer. A complete buffer could also be read or replaced using the READ BUFFER command or WRITE command (unformatted or formatted, in the case of the 3270).
Block-oriented terminals cause less system load on the host and less network traffic than character-oriented terminals. They also appear more responsive to the user, especially over slow connections, since editing within a field is done locally rather than depending on echoing from the host system.
Early terminals had limited editing capabilities – 3270 terminals, for example, only could check entries as valid numerics. Subsequent "smart" or "intelligent" terminals incorporated microprocessors and supported more local processing.
Programmers of block-oriented terminals often used the technique of storing context information for the transaction in progress on the screen, possibly in a hidden field, rather than depending on a running program to keep track of status. This was the precursor of the HTML technique of storing context in the URL as data to be passed as arguments to a CGI program.
Unlike a character-oriented terminal, where typing a character into the last position of the screen usually causes the terminal to scroll down one line, entering data into the last screen position on a block-oriented terminal usually causes the cursor to wrap, moving to the start of the first enterable field. Programmers might "protect" the last screen position to prevent inadvertent wrap. Likewise a protected field following an enterable field might lock the keyboard and sound an audible alarm if the operator attempted to enter more data into the field than allowed.
Common block-oriented terminals
Hard-copy
IBM 1050
IBM 2740
Remote job entry
IBM 2770
IBM 2780
IBM 3770
IBM 3780
Display
Graphical terminals
A graphical terminal can display images as well as text. Graphical terminals are divided into vector-mode and raster-mode terminals.
A vector-mode display directly draws lines on the face of a cathode-ray tube under control of the host computer system. The lines are continuously formed, but since the speed of electronics is limited, the number of concurrent lines that can be displayed at one time is limited. Vector-mode displays were historically important but are no longer used.
Practically all modern graphic displays are raster-mode, descended from the picture scanning techniques used for television, in which the visual elements are a rectangular array of pixels. Since the raster image is only perceptible to the human eye as a whole for a very short time, the raster must be refreshed many times per second to give the appearance of a persistent display. The electronic demands of refreshing display memory meant that graphic terminals were developed much later than text terminals, and initially cost much more.
Most terminals today are graphical; that is, they can show images on the screen. The modern term for graphical terminal is "thin client". A thin client typically uses a protocol like X11 for Unix terminals, or RDP for Microsoft Windows. The bandwidth needed depends on the protocol used, the resolution, and the color depth.
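As an illustrative back-of-the-envelope figure (the resolution, color depth, and refresh rate here are assumptions, not from the source), transmitting an uncompressed 1024 × 768 display at 24 bits per pixel, refreshed 30 times per second, would require

    1024 \times 768 \times 24 \times 30 \approx 566\ \text{Mbit/s},

which is why thin-client protocols send drawing commands or compressed screen updates rather than raw pixels.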
Modern graphic terminals allow display of images in color, and of text in varying sizes, colors, and fonts (type faces).
In the early 1990s, an industry consortium attempted to define a standard, AlphaWindows, that would allow a single CRT screen to implement multiple windows, each of which was to behave as a distinct terminal. Unfortunately, like I2O, this suffered from being run as a closed standard: non-members were unable to obtain even minimal information and there was no realistic way a small company or independent developer could join the consortium.
Intelligent terminals
An intelligent terminal does its own processing, usually implying that a microprocessor is built in, but not all terminals with microprocessors did any real processing of input: the main computer to which such a terminal was attached would have to respond quickly to each keystroke. The term "intelligent" in this context dates from 1969.
Notable examples include the IBM 2250, predecessor to the IBM 3250 and IBM 5080, and IBM 2260, predecessor to the IBM 3270, introduced with System/360 in 1964.
Most terminals were connected to minicomputers or mainframe computers and often had a green or amber screen. Typically terminals communicate with the computer over a serial port, often using a null modem cable, with an EIA RS-232, RS-422, RS-423, or current loop serial interface. IBM systems typically communicated over a Bus and Tag channel, a coaxial cable using a proprietary protocol, or a communications link using Binary Synchronous Communications or IBM's SNA protocol, but for many DEC, Data General and NCR (and so on) computers there were many visual display suppliers competing against the computer manufacturer for terminals to expand the systems. In fact, the instruction design for the Intel 8008 was originally conceived at Computer Terminal Corporation as the processor for the Datapoint 2200.
With the introduction of the IBM 3270 and the DEC VT100 (1978), users and programmers could see significant advantages from VDU technology improvements, yet not all programmers used the features of the new terminals: backward compatibility with "dumb terminals" in the VT100 and later TeleVideo terminals, for example, allowed programmers to continue using older software.
Some dumb terminals had been able to respond to a few escape sequences without needing microprocessors; they used multiple printed circuit boards with many integrated circuits. The single factor that classed a terminal as "intelligent" was its ability to process user input within the terminal, rather than interrupting the main computer at each keystroke, and to send a block of data at a time (for example, when the user had finished a whole field or form). Despite the introduction of ANSI terminals in 1978, most terminals in the early 1980s, such as the ADM-3A, TVI912, Data General D2, and DEC VT52, were essentially "dumb" terminals, although some of them (such as the later ADM and TVI models) did have a primitive block-send capability. Common early uses of local processing power had little to do with off-loading data processing from the host computer; instead they added useful features such as printing to a local printer, buffered serial data transmission and serial handshaking (to accommodate higher serial transfer speeds), and more sophisticated character attributes for the display, as well as the ability to switch emulation modes to mimic competitors' models. These became increasingly important selling features during the 1980s especially, when buyers could mix and match different suppliers' equipment to a greater extent than before.
The advance in microprocessors and lower memory costs made it possible for the terminal to handle editing operations, such as inserting characters within a field, that might previously have required a full screen of characters to be re-sent from the computer, possibly over a slow modem line. Around the mid-1980s most intelligent terminals, costing less than most dumb terminals would have a few years earlier, could provide enough user-friendly local editing of data and send the completed form to the main computer. Providing even more processing possibilities, workstations like the TeleVideo TS-800 could run CP/M-86, blurring the distinction between terminal and personal computer.
Another motivation for the development of the microprocessor was to simplify and reduce the electronics required in a terminal. That also made it practicable to load several "personalities" into a single terminal, so a Qume QVT-102 could emulate many popular terminals of the day and so be sold into organizations that did not wish to make any software changes. Frequently emulated terminal types included:
Lear Siegler ADM-3A and later models
TeleVideo 910 to 950 (these models copied ADM3 codes and added several of their own, eventually being copied by Qume and others)
Digital Equipment Corporation VT52 and VT100
Data General D1 to D3 and especially D200 and D210
Hazeltine Corporation H1500
Tektronix 4014
Wyse W50, W60 and W99
The ANSI X3.64 escape code standard produced uniformity to some extent, but significant differences remained. For example, the VT100, Heathkit H19 in ANSI mode, TeleVideo 970, Data General D460, and Qume QVT-108 terminals all followed the ANSI standard, yet differences might exist in the codes sent by function keys, the character attributes available, block-sending of fields within forms, "foreign" character facilities, and the handling of printers connected to the back of the screen.
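To make the shared subset concrete, the short C sketch below (an illustration only, assuming an ANSI/VT100-compatible terminal or emulator on standard output) emits a few of the CSI sequences that most ANSI-compatible models honoured: erase screen, cursor positioning, and character attributes.

    #include <stdio.h>

    int main(void) {
        /* ESC [ 2 J : erase the entire screen */
        printf("\x1b[2J");
        /* ESC [ row ; col H : move the cursor to row 5, column 10 (1-based) */
        printf("\x1b[5;10H");
        /* ESC [ 1 m : select the bold/bright character attribute */
        printf("\x1b[1mHello, VT100!");
        /* ESC [ 0 m : reset all attributes */
        printf("\x1b[0m\n");
        return 0;
    }

The differences noted above arose mostly outside this core, in areas such as function-key reports, forms handling, and printer control.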
In the 21st century, the term intelligent terminal can now refer to a retail point-of-sale computer.
Contemporary
While early IBM PCs had single-color green screens, these screens were not terminals. The screen of a PC did not contain any character generation hardware; all video signals and video formatting were generated by the video display card in the PC, or (in most graphics modes) by the CPU and software. An IBM PC monitor, whether it was the green monochrome display or the 16-color display, was technically much more similar to an analog TV set (without a tuner) than to a terminal. With suitable software a PC could, however, emulate a terminal, and in that capacity it could be connected to a mainframe or minicomputer. The Data General/One could be booted into terminal emulator mode from its ROM. Eventually microprocessor-based personal computers greatly reduced the market demand for conventional terminals.
In the 1990s especially, "thin clients" and X terminals combined economical local processing power with central, shared computer facilities to retain some of the advantages of terminals over personal computers.
Today, most PC telnet clients provide emulation of the most common terminal, the DEC VT100, using the ANSI escape code standard X3.64, and PCs can run as X terminals using software such as Cygwin/X under Microsoft Windows or X.Org Server software under Linux.
Since the advent and subsequent popularization of the personal computer, few genuine hardware terminals are used to interface with computers today. Modern operating systems like Linux and the BSD derivatives use the computer's own monitor and keyboard to provide virtual consoles, which are mostly independent of the hardware used.
When using a graphical user interface (or GUI) like the X Window System, one's display is typically occupied by a collection of windows associated with various applications, rather than a single stream of text associated with a single process. In this case, one may use a terminal emulator application within the windowing environment. This arrangement permits terminal-like interaction with the computer (for running a command-line interpreter, for example) without the need for a physical terminal device; it can even run multiple terminal emulators on the same device.
System console
One meaning of system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a printer or screen, and traditionally is a text terminal, but may also be a graphical terminal.
Another, older, meaning of system console, computer console, hardware console, operator's console or simply console is a hardware component used by an operator to control the hardware, typically some combination of front panel, keyboard/printer and keyboard/display.
History
Prior to the development of alphanumeric CRT system consoles, some computers, such as the IBM 1620, had console typewriters and front panels, while the very first electronic stored-program computer, the Manchester Baby, used a combination of electromechanical switches and a CRT to provide console functions; the CRT displayed memory contents in binary by mirroring the machine's Williams-Kilburn tube CRT-based RAM.
Some early operating systems supported either a single keyboard/printer or keyboard/display device for controlling the OS. Some also supported a single alternate console, and some supported a hardcopy console for retaining a record of commands, responses, and other console messages. In the late 1960s, however, it became common for operating systems to support more than three consoles, and operating systems began appearing in which the console was simply any terminal with a privileged user logged on.
On early minicomputers, the console was a serial console, an RS-232 serial link to a terminal such as an ASR-33 or, later, a terminal from Digital Equipment Corporation (DEC), e.g., a DECwriter or VT100. This terminal was usually kept in a secured room since it could be used for certain privileged functions such as halting the system or selecting which media to boot from. Large midrange systems, e.g. those from Sun Microsystems, Hewlett-Packard, and IBM, still use serial consoles. In larger installations, the console ports are attached to multiplexers or network-connected multiport serial servers that let an operator connect a terminal to any of the attached servers. Today, serial consoles are often used for accessing headless systems, usually with a terminal emulator running on a laptop. Also, routers, enterprise network switches, and other telecommunication equipment have RS-232 serial console ports.
On PCs and workstations, the computer's attached keyboard and monitor have the equivalent function. Since the monitor cable carries video signals, it cannot be extended very far. Often, installations with many servers therefore use keyboard/video multiplexers (KVM switches) and possibly video amplifiers to centralize console access. In recent years, KVM/IP devices have become available that allow a remote computer to view the video output and send keyboard input via any TCP/IP network and therefore the Internet.
Some PC BIOSes, especially in servers, also support serial consoles, giving access to the BIOS through a serial port so that the simpler and cheaper serial console infrastructure can be used. Even where BIOS support is lacking, some operating systems, e.g. FreeBSD and Linux, can be configured for serial console operation either during bootup, or after startup.
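As a sketch of typical configuration (the device name and baud rate here are illustrative), Linux accepts a console= kernel boot parameter, and FreeBSD a loader setting, to direct the console to the first serial port:

    # Linux: kernel command line, e.g. appended in the GRUB configuration
    console=ttyS0,115200n8

    # FreeBSD: /boot/loader.conf
    console="comconsole"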
Starting with the IBM 9672, IBM large systems have used a Hardware Management Console (HMC), consisting of a PC and a specialized application, instead of a 3270 or serial link. Other IBM product lines also use an HMC, e.g., System p.
It is usually possible to log in from the console. Depending on configuration, the operating system may treat a login session from the console as being more trustworthy than a login session from other sources.
Emulation
A terminal emulator is a piece of software that emulates a text terminal. In the past, before the widespread use of local area networks and broadband internet access, many computers would use a serial access program to communicate with other computers via telephone line or serial device.
When the first Macintosh was released, a program called MacTerminal was used to communicate with many computers, including the IBM PC.
The Win32 console on Windows does not emulate a physical terminal that supports escape sequences, so SSH and Telnet programs (for logging in textually to remote computers) for Windows, including the Telnet program bundled with some versions of Windows, often incorporate their own code to process escape sequences.
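As an illustration of the kind of escape-sequence processing such a client must implement itself, here is a minimal, hypothetical recognizer in C for CSI sequences (ESC [ ... final byte); a real client would also handle other sequence families and dispatch on the final byte and its parameters.

    #include <stdio.h>

    /* Classify each incoming byte as printable text or part of a CSI
       escape sequence of the form ESC '[' parameters... final-byte. */
    enum state { TEXT, GOT_ESC, IN_CSI };

    void feed(unsigned char c) {
        static enum state s = TEXT;
        switch (s) {
        case TEXT:
            if (c == 0x1b) s = GOT_ESC;   /* ESC starts a sequence */
            else putchar(c);              /* ordinary printable text */
            break;
        case GOT_ESC:
            if (c == '[') s = IN_CSI;     /* ESC [ introduces a CSI */
            else s = TEXT;                /* other sequences ignored here */
            break;
        case IN_CSI:
            /* Parameter bytes are 0x30-0x3F, intermediates 0x20-0x2F;
               a final byte in 0x40-0x7E ends the sequence. */
            if (c >= 0x40 && c <= 0x7e)
                s = TEXT;  /* dispatch on c here: 'H' = position, 'J' = erase... */
            break;
        }
    }

    int main(void) {
        const char *sample = "plain \x1b[1mbold\x1b[0m text\n";
        for (const char *p = sample; *p; p++)
            feed((unsigned char)*p);
        return 0;  /* prints "plain bold text", attributes stripped */
    }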
The terminal emulators on most Unix-like systems—such as, for example, gnome-terminal, Konsole, QTerminal, xterm, and Terminal.app—do emulate physical terminals including support for escape sequences; e.g., xterm can emulate the VT220 and Tektronix 4010 hardware terminals.
Modes
Terminals can operate in various modes, relating to when they send input typed by the user on the keyboard to the receiving system (whatever that may be):
Character mode (also called character-at-a-time mode): In this mode, typed input is unbuffered and sent immediately to the receiving system.
Line mode (also called line-at-a-time mode): In this mode, the terminal is buffered and provides a local line-editing function; it sends an entire input line, after it has been locally edited, when the user presses a key such as Return or Enter. A so-called "line mode terminal" operates solely in this mode.
Block mode (also called screen-at-a-time mode): In this mode (also called block-oriented), the terminal is buffered and provides a local full-screen data function. The user can enter input into multiple fields in a form on the screen (defined to the terminal by the receiving system), moving the cursor around the screen using keys such as Tab and the arrow keys and performing editing functions locally using Insert, Delete, Backspace, and so forth. The terminal sends only the completed form, consisting of all the data entered on the screen, to the receiving system when the user presses an Enter key.
There is a distinction between the Return and the Enter keys. In some multiple-mode terminals that can switch between modes, pressing the Enter key when not in block mode does not do the same thing as pressing the Return key. Whilst the Return key will cause an input line to be sent to the host in line-at-a-time mode, the Enter key will rather cause the terminal to transmit the contents of the character row where the cursor is currently positioned to the host, host-issued prompts and all. Some block-mode terminals have both an Enter key and local cursor-moving keys such as Return and Backspace.
Different computer operating systems require different degrees of mode support when terminals are used as computer terminals. The POSIX terminal interface, as provided by Unix and POSIX-compliant operating systems, does not accommodate block-mode terminals at all, and only rarely requires the terminal itself to be in line-at-a-time mode, since the operating system is required to provide canonical input mode, in which the terminal device driver emulates local echo and performs line-editing functions at the host end. Usually, and especially so that the host system can support non-canonical input mode, terminals for POSIX-compliant systems are in character-at-a-time mode. In contrast, IBM 3270 terminals connected to MVS systems are always required to be in block mode.
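For example, on a POSIX system a program can move the terminal driver out of canonical (line-at-a-time) mode using the standard termios interface; the sketch below, with error handling omitted for brevity, disables canonical input and local echo, reads a single byte, and then restores the original settings.

    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void) {
        struct termios saved, raw;

        /* Save the current terminal settings so they can be restored. */
        tcgetattr(STDIN_FILENO, &saved);
        raw = saved;

        /* Leave canonical mode: deliver bytes as typed, without the
           driver's line editing, and without echoing them back. */
        raw.c_lflag &= ~(ICANON | ECHO);
        raw.c_cc[VMIN]  = 1;   /* read() returns after one byte */
        raw.c_cc[VTIME] = 0;   /* no inter-byte timeout */
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);

        char c;
        if (read(STDIN_FILENO, &c, 1) == 1)
            printf("got byte 0x%02x immediately, no Return needed\n",
                   (unsigned char)c);

        /* Restore canonical, line-at-a-time behaviour. */
        tcsetattr(STDIN_FILENO, TCSANOW, &saved);
        return 0;
    }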
See also
Blit (computer terminal)
Data terminal equipment
IBM 3101
Micro-Term ERGO-201
Minitel
Text user interface
TV Typewriter
Videotex
Virtual console (PC)
Communication endpoint
End system
Node (networking)
Terminal capabilities
Terminal emulator
Visual editor
VT05
Notes
References
External links
The Terminals Wiki, an encyclopedia of computer terminals.
Text Terminal HOWTO from tldp.org
The TTY demystified from linusakesson.net
Directive 1999/5/EC of the European Parliament and of the Council of 9 March 1999 on radio equipment and telecommunications terminal equipment and the mutual recognition of their conformity (R&TTE Directive)
List of Computer Terminals from epocalc.net
VTTEST – VT100/VT220/XTerm test utility A terminal test utility by Thomas E. Dickey
User interfaces
History of human–computer interaction
Operating system technology
Block-oriented terminal | Computer terminal | Technology | 7,650 |
35,408,783 | https://en.wikipedia.org/wiki/Indus%20river%20dolphin | The Indus river dolphin (Platanista minor) is a species of freshwater dolphin in the family Platanistidae. It is endemic to the Indus River basin in Pakistan and the Beas River in northwestern India. This dolphin was the first side-swimming cetacean to be discovered. It is patchily distributed in five small sub-populations that are separated by irrigation barrages.
From the 1970s until 1998, the Ganges River dolphin (Platanista gangetica) and the Indus dolphin were regarded as separate species; in 1998, their classification was changed from two separate species to subspecies of a single species. More recent studies, however, support their being distinct species. The Indus river dolphin has been named the national mammal of Pakistan and the state aquatic animal of Punjab, India.
Taxonomy
The Indus river dolphin was described in 1853 by Richard Owen under the name Platanista gangetica, var. minor, based on a dolphin skull, which was smaller than skulls of the Ganges river dolphin.
The Indus and Ganges river dolphins were initially classified as a single species, Platanista gangetica. In the 1970s, they were considered to be distinct species, but again grouped as a single species in the 1990s. However, more recent studies of genes, divergence time, and skull structure support both being distinct species.
The Ganges river dolphin split from the Indus river dolphin during the Pleistocene, around 550,000 years ago.
Description
The Indus dolphin has the long, pointed nose characteristic of all river dolphins. The teeth are visible in both the upper and lower jaws even when the mouth is closed. The teeth of young animals are almost an inch long, thin and curved; however, as animals age the teeth undergo considerable changes and in mature adults become square, bony, flat disks. The snout thickens towards its end. The species does not have a crystalline eye lens, rendering it effectively blind, although it may still be able to detect the intensity and direction of light. Navigation and hunting are carried out using echolocation.
The body is a brownish color and stocky at the middle. The species has a small triangular lump in place of a dorsal fin. The flippers and tail are thin and large in relation to the body size, which is about in males and in females. The oldest recorded animal was a 28-year-old male in length. Mature adult females are larger than males. Sexual dimorphism is expressed after females reach about ; the female rostrum continues to grow after the male rostrum stops growing, eventually reaching approximately longer.
Distribution
The Indus river dolphin presently occurs only in the Indus River system. In the past, these dolphins occupied about 3,400 km of the Indus River and its tributaries, but today the species is found in only one-fifth of this former range; its effective range has declined by 80% since 1870. It no longer occurs in the tributaries, and its home range covers only 690 km of the river. This dolphin prefers freshwater habitat with a water depth greater than 1 meter and a cross-sectional area of more than 700 square meters. Today this species can only be found in the Indus River's main stem, along with a remnant population in the Beas River. A population can be found in the Harike Wetland located in Punjab, India.
Since the two originally inhabited river systems – between the Sukkur and Guddu barrage in Pakistan's Sindh Province, and in the Punjab and Khyber Pakhtunkhwa Provinces – are not connected in any way, how they were colonized remains unknown. The river dolphins are unlikely to have travelled from one river to another through the sea route, since the two estuaries are very far apart. A possible explanation is that several north Indian rivers such as the Sutlej and Yamuna changed their channels in ancient times while retaining their dolphin populations.
Behaviour and ecology
It is thought that the Indus river dolphin swims on its side to efficiently navigate shallow waters during the dry season.
Threats
The Indus river dolphin has been very adversely affected by human use of the river systems in the subcontinent. Entanglement in fishing nets can cause significant damage to local population numbers. Some dolphins are still caught each year for their oil and meat, which are used as a liniment, as an aphrodisiac, and as bait for catfish. Irrigation has also lowered water levels throughout the dolphin's range. Poisoning of the water supply by industrial and agricultural chemicals may have also contributed to population decline. Perhaps the most significant issue is the building of dozens of dams along many rivers, causing the segregation of populations and a narrowed gene pool in which dolphins can breed. There are currently three sub-populations of Indus dolphins considered capable of long-term survival if protected.
Conservation status
The Indus river dolphin is protected under Appendix I of the Convention on the International Trade of Endangered Species which prohibits the commercial international trade of the species (including parts and derivatives).
It is listed as Endangered on the IUCN Red List, and by the U.S. government National Marine Fisheries Service under the U.S. Endangered Species Act. It is the second most endangered cetacean in the world. As of 2017 it is estimated that there are only about 1,800 individuals remaining (up from 1,200 estimated in 2001). A demonstrable increase in the main river population of the Indus subspecies between 1974 and 2008 may have been driven by permanent immigration from upstream tributaries, where the species no longer occurs.
It is threatened by extensive fishing, which reduces its prey availability, and accidental entanglement in fishing nets causes fatalities. Deforestation along the river basins is causing sedimentation, which degrades the dolphin's habitat. Another factor in its decline is the construction of cross-river structures such as dams and barrages, causing further isolation of the already small sub-populations. A major threat is human-induced water pollution through industrial and human waste, or agricultural run-off containing high amounts of chemical fertilizers and poisonous pesticides.
Studies suggest that a better understanding of this species' ecology is needed in order to develop good conservation plans. Regular monitoring is necessary to assess the population's status and the factors causing its decline. A satellite tagging effort was begun in 2022.
See also
South Asian river dolphin
Project Dolphin (India)
Amazon river dolphin
References
Further reading
External links
South Asian river dolphin
Mammals of Pakistan
Mammals of India
National symbols of Pakistan
EDGE species
Mammals described in 1853
Taxa named by Richard Owen
Apex predators | Indus river dolphin | Biology | 1,305 |