| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
57,552,007 | https://en.wikipedia.org/wiki/Fostriecin | Fostriecin is a type I polyketide synthase (PKS) derived natural product, originally isolated from the soil bacterium Streptomyces pulveraceus. It belongs to a class of natural products, produced by Streptomyces, that characteristically contain a phosphate ester, an α,β-unsaturated lactone, and a conjugated linear diene or triene chain. This class includes the structurally related compounds cytostatin and phoslactomycin. Fostriecin is a known potent and selective inhibitor of protein serine/threonine phosphatases, as well as of DNA topoisomerase II. Because of its activity against the protein phosphatases PP2A and PP4 (IC50 = 1.5 nM and 3.0 nM, respectively), which play a vital role in cell growth, cell division, and signal transduction, fostriecin was investigated for in vivo antitumor activity and showed in vitro activity against leukemia, lung cancer, breast cancer, and ovarian cancer. This activity is thought to stem from PP2A's presumed role in regulating apoptosis, in activating the cytotoxic T-lymphocytes and natural killer cells involved in tumor surveillance, and in human immunodeficiency virus-1 (HIV-1) transcription and replication.
Biosynthesis
The gene cluster for fostriecin consists of 21 open reading frames (ORFs) that encode six modular type I polyketide synthases and seven tailoring enzymes. The enzymes encoded in the fos cluster are designated FosA–FosM, though not all are used in the biosynthesis of the target molecule. The six modular PKSs encoded by fosA–fosF give rise to a loading module, eight elongation modules, and a thioesterase domain.
Conventional polyketide synthase pathway
The loading module starts the biosynthetic pathway by tethering a starter acyl group to the acyl carrier protein (ACP). This starter unit is then carried to module 1, where a malonyl group is added from malonyl-CoA; the β-carbonyl is then reduced to a hydroxyl group by a ketoreductase (KR) domain and dehydrated to a trans double bond by a dehydratase (DH) domain. Module 1 also contains an enoylreductase (ER) domain, but it is inactive. The chain then goes through two more elongations in modules 2 and 3, which are similar to module 1 except that they leave cis double bonds at their respective β-carbons. The now eight-carbon chain undergoes another extension with malonyl-CoA in module 4, followed by reduction by a KR domain. In module 5, elongation with methylmalonyl-CoA is followed by another reduction to a hydroxyl group. The polyketide intermediate is then passed to module 6, where another malonyl-CoA is loaded and a reduction, then dehydration, at the β-carbon forms the final double bond of the linear chain. The nearly complete chain is taken through two further two-carbon elongations in modules 7 and 8, each of which contains a KR that produces one of the final two hydroxyl groups in the alkyl chain. The fully synthesized linear carbon chain is then hydrolyzed off the ACP by a thioesterase (TE) domain to undergo post-synthetic tailoring.
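The module-by-module logic above can be restated as data. A minimal sketch in Python (illustrative only: the per-module annotations paraphrase this description rather than the annotated fos gene cluster itself, and the carbon count ignores the methyl branch contributed by methylmalonyl-CoA):

```python
modules = [
    # (module, extender unit, beta-carbon processing)
    (1, "malonyl-CoA",       "KR + DH -> trans double bond (ER present but inactive)"),
    (2, "malonyl-CoA",       "KR + DH -> cis double bond"),
    (3, "malonyl-CoA",       "KR + DH -> cis double bond"),
    (4, "malonyl-CoA",       "KR -> hydroxyl"),
    (5, "methylmalonyl-CoA", "KR -> hydroxyl"),
    (6, "malonyl-CoA",       "KR + DH -> final double bond of the chain"),
    (7, "malonyl-CoA",       "KR -> hydroxyl"),
    (8, "malonyl-CoA",       "KR -> hydroxyl"),
]

carbons = 2  # carbons contributed by the starter unit on the loading module
for m, unit, processing in modules:
    carbons += 2  # each (methyl)malonyl extension adds a two-carbon ketide unit
    print(f"module {m}: {unit}; {processing}; chain length = {carbons} C")
# After module 3 the chain is 8 carbons, matching the text; the thioesterase (TE)
# then hydrolyzes the finished chain off the ACP for tailoring.
```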
Post-synthetic tailoring
The post-synthetic tailoring enzymes for the production of fostriecin add the four functional groups not installed by the PKS pathway on the linear molecule. These enzymes include two cytochrome P450 enzymes (FosJ and FosK), one homoserine kinase (FosH), and one NAD-dependent epimerase/dehydratase family protein (FosM). The first step after the thioesterase domain, which forms a six-membered lactone ring, is oxidation at C8 by FosJ, followed by phosphorylation at C9 by FosH. This is followed by oxidation at the terminal carbon by FosK; finally, loss of malonic acid mediated by FosM yields the desired natural product.
References
Polyketides
Phospholipids | Fostriecin | Chemistry | 960 |
73,196,367 | https://en.wikipedia.org/wiki/Celloscope%20automated%20cell%20counter | The Celloscope automated cell counter was developed in the 1950s for the enumeration of erythrocytes, leukocytes, and thrombocytes in blood samples. Together with the Coulter counter, the Celloscope analyzer can be considered one of the predecessors of today's automated hematology analyzers, as the principle of the electrical impedance method is still used in cell counters installed in clinical laboratories around the world.
History
The Celloscope was developed for the Swedish company AB Lars Ljungberg & Co under the direction of engineer Erik Öhlin at Linson Instrument AB. In an interview published in Clinical Biochemistry in the Nordics, a membership magazine of the Nordic Association for Clinical Chemistry, Lars Ljungberg explains that he and his coworkers had been considering different solutions for counting blood cells for some time when they came across a method presented by the American Navy, in which particles could be counted as they passed through a capillary opening that simultaneously carried a weak direct current. The Celloscope method exploits the fact that blood cells are not conductive and therefore cause interruptions (pulses) in the current, which can then be counted. What Ljungberg and his coworkers did not know was that Wallace H. Coulter in Chicago had applied for and received a patent on the particle-counting principle in 1953.
When presented at a German tradeshow in September 1957, the Celloscope counter was examined by Dr. George Brecher, the first author of one of the NIH evaluations of the Coulter counter. In a letter to Coulter, Brecher reported about what he thought was a close functional copy of the Coulter counter, yet with simpler electronics and an integrated sample stand, creating a both smaller and less costly instrument for use in clinical applications.
When the Celloscope was introduced to the market in the early 1960s, Coulter Electronics Inc filed a lawsuit against AB Lars Ljungberg & Co for alleged infringement of the American patent. After long negotiations, the companies agreed that AB Lars Ljungberg & Co would compensate Coulter for the sales made in the USA and in the European countries where he held the patent, and that it was free to sell its analyzer in other regions.
Method principle
Before the introduction of the first automated cell counters, hematologists relied on manual cell counts under the microscope. The Celloscope method for automated counting of blood cells was described in an article by Öhlin in 1958. In the described method, cells in a saline (conductive) solution pass through a capillary whose length and diameter correspond to the size of blood cells. At the same time, an electric current passes through the capillary, and each cell gives rise to an electric pulse through the increase in resistance that it causes in the electric circuit. The number of pulses is recorded and corresponds to the number of cells in a certain volume. Diluting the blood sample enough that the distance between cells passing through the capillary is greater than the dimensions of the cells and the capillary ensures that each cell is counted individually. As cells are counted in an absolute volume of the suspension, the number of cells per mm3 of whole blood can be calculated using the dilution factor.
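The back-calculation in that last step is simple proportionality. A minimal sketch (the pulse count, metered volume, and dilution below are invented illustrative values, not Celloscope specifications):

```python
def cells_per_mm3(pulses: int, metered_volume_mm3: float, dilution: float) -> float:
    """Back-calculate whole-blood cell concentration from a diluted-sample count.

    pulses             -- electrical pulses recorded (one per cell)
    metered_volume_mm3 -- volume of diluted suspension drawn through the capillary
    dilution           -- dilution factor, e.g. 80_000 for a 1/80 000 dilution
    """
    return pulses * dilution / metered_volume_mm3

# 31 250 pulses from 0.5 mL (500 mm^3) of a 1/80 000 dilution:
print(cells_per_mm3(31_250, 500, 80_000))  # 5 000 000 cells per mm^3 of whole blood
```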
The described automated Celloscope method improved accuracy compared with manual examination by microscopy while decreasing the manual work for the operator, allowing 50 000 cells to be counted in about 45 seconds.
The Celloscope counter was also equipped with a discriminator, or electrical threshold, which allows only pulses above a certain size to be counted, enabling different blood cells to be distinguished. For example, with the threshold set to 3 μm, all cells are counted; a re-count at a 4 μm threshold then allows the number of cells between 3 and 4 μm in size to be calculated from the difference between the two counts.
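That subtraction, with invented counts for concreteness:

```python
# Pulses above each size threshold (illustrative values, not measured data)
counts_above = {3: 52_000, 4: 20_500}  # threshold in micrometres -> pulse count

cells_between_3_and_4_um = counts_above[3] - counts_above[4]
print(cells_between_3_and_4_um)  # 31 500 cells fall between the two thresholds
```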
Diluting the blood sample 1/80 000 in a physiological saline solution allows enumeration of the erythrocytes, as the number of leukocytes does not affect the result more than by about 1/1 000 000. For the leukocyte count, cells in the same sample are hemolyzed with saponin or cetrimide so that only the nuclei of the leukocytes are counted. For platelet count, a smaller capillary diameter is used.
To identify cell morphologies and variants that the counter cannot detect, microscopy remains an essential complement to the automated cell count method.
Successor cell counters
The impedance method used in the Celloscope analyzer has been further developed to allow counting of leukocyte subgroups as well. In addition to the cell counts, modern hematology analyzers can also report parameters related to cell size and hemoglobin concentration, as well as a range of calculated parameters, for a complete blood count (CBC). These analyzers were initially intended for hospital laboratories, as they required skilled staff and a high sample load to justify their relatively high cost. However, with the increasing need for decentralized healthcare, demand for simpler analyzers emerged and prompted the development of benchtop cell counters that could be used in a near-patient clinical setting with a minimum of training. In 1969, Erik Öhlin founded Swelab Instrument AB (today, Boule Medical AB), and later the Swelab AutoCounter AC-series was launched to meet the needs of smaller clinical laboratories.
The cell counters of that time used LED displays for result review. In 1982, Medonic AB, another Swedish company focused on hematology, was founded. The founders, Ingemar Berndtsson and Abraham Bottema, both had long experience in hematology, clinical chemistry, and blood-banking engineering. In 1985, Medonic AB launched the Cellanalyzer CA 480 system, its first internally developed cell counter with a built-in display that also showed the cell histograms. When computers began to be incorporated into the analyzers, other brands, such as the Swelab analyzers, also gained displays.
Both targeting the smaller clinical laboratories, Swelab Instrument AB and Medonic AB were competitors in the decentralized hematology testing market. In the late 1990s, both companies were acquired by Boule Diagnostics AB. Boule has kept the parallel brands, and the analyzers are still manufactured at its facilities in Stockholm, Sweden, and supplied under the Swelab and Medonic trademarks for the decentralized hematology testing market.
When Coulter was acquired by Beckman, former Coulter employees Dr. Harold R Crews, Andrew C Swanson, and Donald Grantham founded Clinical Diagnostic Solutions, Inc. (CDS) in 1997, focusing on the development and production of generic reagents and control material. In 2004, CDS was acquired by Boule. Through this acquisition, Boule gained the ability to develop and produce both the instruments and the consumables of a complete hematology system.
References
External links
• Boule web site
Scientific instruments
Laboratory equipment | Celloscope automated cell counter | Technology,Engineering | 1,469 |
55,287,698 | https://en.wikipedia.org/wiki/Bagel%20machine | A bagel machine is a machine that automatically produces bagels. It rolls, presses, and shapes dough into uniform rings.
History
In 1950, Daniel Thompson, son of Mickey Thompson, began designing an automatic bagel-making machine at the age of 29. His father had spent much of his life trying to build a working automatic bagel-making machine to advance his baking business, and Daniel was following in his footsteps.
In 1958, Daniel Thompson started construction of the first successful automatic bagel-making machine in his garage in Cheviot Hills, Los Angeles, calling it “The Thompson Bagel Machine”. He later patented the design, and in 1961 he and his wife established the "Thompson Bagel Machine Manufacturing Corporation". In 1963, five years after he started construction, he leased the perfected design to Murray Lender, who owned a family-run bagel-making business.
The first automated bagel-making machines were introduced to New Haven, Connecticut; Buffalo, New York; and St. Louis, Missouri in 1963, producing between 200 and 400 bagels per hour; the largest machines could produce as many as 5,000 bagels in an hour. Thanks to Daniel Thompson's machine, bagels quickly became extremely popular and abundant in supermarkets and restaurants, and their price dropped significantly as they came to be mass-produced around the world.
Impact
Before Thompson's automatic bagel-making machine was invented, making bagels was a slow process. Very few bakers actually made them. After the automatic bagel machine was invented, many more bagels were produced and the bagel became much more common. The automatic bagel-making machine popularized the bagel, leading to it being featured in many supermarkets and restaurants around the world. Today, far more bagels are being produced than ever before.
Use and features
The bagel machine mass-produces bagels automatically by rolling, shaping, and pressing the dough into ring shapes.
References
Bagels
Food preparation
Machines | Bagel machine | Physics,Technology,Engineering | 435 |
20,436,774 | https://en.wikipedia.org/wiki/Sensitization%20%28immunology%29 | In immunology, the term sensitization is used for the following concepts:
Immunization by inducing an adaptive response in the immune system. In this sense, the term sensitization is more often used for the induction of allergic responses.
To bind antibodies to cells such as erythrocytes in advance of performing an immunological test such as a complement-fixation test or a Coombs test. In this preparation, the antibodies are bound to the cells via their Fab regions.
To bind antibodies or soluble antigens chemically or by adsorption to appropriate biological entities such as erythrocytes or particles made of gelatin or latex for passive aggregation tests.
Those particles themselves are biologically inactive except for serving as antigens against the primary antibodies or as carriers of the antigens. When antibodies are used in the preparation, they are bound to the erythrocytes or particles via their Fab regions. Thus the step that follows requires secondary antibodies against those primary antibodies; that is, the secondary antibodies must have binding specificity for the primary antibodies, including their Fc regions.
References
Immunology | Sensitization (immunology) | Biology | 230 |
73,396,228 | https://en.wikipedia.org/wiki/Computing%20in%20Science%20%26%20Engineering | Computing in Science & Engineering (CiSE) is a bimonthly technical magazine published by the IEEE Computer Society. It was founded in 1999 from the merger of two publications: Computational Science & Engineering (CS&E) and Computers in Physics (CIP), the first published by IEEE and the second by the American Institute of Physics (AIP). The founding editor-in-chief was George Cybenko, known for proving one of the first versions of the universal approximation theorem of neural networks.
The magazine is interdisciplinary and covers topics such as numerical simulation, modeling, and data analysis and visualization. CiSE aims to provide its readers with practical information on the latest developments in computational methods and their applications in science and engineering. Computing in Science & Engineering publishes peer-reviewed technical articles, special issues, editorials, and departments (regular columns).
Notable articles
One of the most notable articles published in CiSE is "Matplotlib: A 2D Graphics Environment" by the late John D. Hunter. It has more than 22 thousand full-text views and more than 17 thousand citations in IEEE Xplore, and more than 27 thousand citations in Google Scholar (checked August 14, 2023). A very popular department article is "What is the Blockchain?" by editorial board member Massimo DiPierro. Other notable articles include "Python for Scientific Computing" by Travis Oliphant, which has more than 15 thousand views in Xplore, and "The NumPy Array: A Structure for Efficient Numerical Computation" by Stefan van der Walt et al., with nearly 7 thousand citations and 12 thousand views in Xplore.
The winner of the CiSE 2021 Best Paper Award was "Jupyter: Thinking and Storytelling With Code and Data," by Brian E. Granger and Fernando Pérez.
Notable editors
Among the editors emeritus, who served close to twenty years on the editorial board, is Jack Dongarra, Distinguished Professor of Computer Science at the University of Tennessee and recipient of the IEEE Computer Society 2020 Computer Pioneer Award and the 2021 ACM Alan Turing Award, among many other accolades. Cleve Moler, chairman and cofounder of MathWorks, was area editor for software and a member of the editorial board from 1999. The precursor magazine, IEEE Computational Science & Engineering (CS&E), was founded by Ahmed Sameh, known for his contributions to parallel algorithms in numerical linear algebra, who remained on the CiSE board for several years. Dianne O'Leary, emeritus professor of computer science at the University of Maryland, was editor of the Your Homework Assignment column for several years starting in 2003. She compiled and expanded her columns into a book, "Scientific Computing with Case Studies," published by SIAM in 2009.
References
Academic journals of the United States
Electrical and electronic engineering journals
English-language journals | Computing in Science & Engineering | Engineering | 581 |
9,019,594 | https://en.wikipedia.org/wiki/Pantsing | Pantsing, also known as depantsing, debagging, dacking, flagging, sharking, and scanting, is the act of pulling down a person's trousers and sometimes underpants, usually against their wishes, and typically as a practical joke or a form of bullying, but in other instances as a sexual fetish.
Pantsing is a more common prank and occurs mainly in schools. Some U.S. colleges before World War II were the scenes of large-scale "depantsing" scraps between freshman and sophomore males, often involving more than 2,000 participants. It is also an initiation rite in fraternities and seminaries. It was cited in 1971 by Gail Sheehy as a form of assault against grade school girls, which did not commonly get reported, although it might include improper touching and indecent exposure by the perpetrators. The United States legal system has prosecuted it as a form of sexual harassment of children.
Alternative names
In Britain, especially historically at Oxford and Cambridge Universities, the act is known as debagging (derived from Oxford bags, a loose-fitting baggy form of trousers). In Northern England the dialect renders the word "dekekking" or "dekecking", where "keks" is a local word for underwear.
A corresponding term in Australia (aside from pantsing) is dacking, which originated from DAKS Simpson, a clothing brand whose name became a generic term for pants and underwear. The term double-dacking is used when both the pants and underwear are pulled down. In Scotland the practice is often known by terms derived from the word breeks, meaning 'trousers'. In New Zealand the act is known as giving someone a down-trou (though this can have a more specific meaning, relating to loser-shaming in pool playing and other competitive games); Ireland and the north and south-west of England have their own local terms.
An alternative term is sharking, which usually implies a sexual assault on a stranger rather than a prank or bullying between peers, and is sometimes applied more broadly to the pulling down of blouses and other top clothing.
Another prank, in which the victim's underpants are yanked upward rather than downward, is called a wedgie.
Bullying
Pantsing can be used as a form of bullying and is technically the crime of simple assault. The practice has been viewed as a form of ritual emasculation. In 2007, British Secretary of State for Education and Skills Alan Johnson, in a speech to the National Association of Schoolmasters Union of Women Teachers, criticized such bullying and faulted YouTube for hosting a video (since removed) of a teacher being pantsed, saying that such bullying "is causing some [teachers] to consider leaving the profession because of the defamation and humiliation they are forced to suffer" and that "Without the online approval which appeals to the innate insecurities of the bully, such sinister activities would have much less attraction."
Juanita Ross Epp is highly critical of teachers who regard pupils pantsing one another as normal behavior, saying that pantsing makes pupils feel intimidated and uncomfortable and that "normal is not the same as right".
See also
Kanchō
References
Assault
Harassment and bullying
Practical jokes
Sexual harassment
Terminology of the University of Cambridge
Terminology of the University of Oxford
Sexual violence
Sexual fetishism | Pantsing | Biology | 681 |
1,945,510 | https://en.wikipedia.org/wiki/Ansco%20Panda | The Ansco Panda was a simple child's box camera made by the Ansco camera corporation of Binghamton, New York in the 1940s. It closely resembles the Kodak Baby Brownie and was designed to compete directly with it. The camera features a black plastic body with cream accents around the lenses, a cream-colored wind knob, and a TLR-style viewing lens above the taking lens. Images are composed via a waist-level viewfinder and taken by means of a traditional Ansco red shutter trigger button depressed by the right index finger.
The camera produces 12 square photographs on a single roll of 620 format film (Ansco No. 20 film). The lens has a 60 mm focal length and a fixed f/16 aperture. Focus is also fixed, with objects from about 6 feet to infinity in focus.
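Those numbers are consistent with a fixed-focus design set near the hyperfocal distance. A back-of-the-envelope check (the circle-of-confusion value is an assumption for a 6×6 cm negative, not a published Ansco figure):

```python
f_mm = 60.0   # focal length
N = 16.0      # f-number of the fixed aperture
c_mm = 0.06   # assumed circle of confusion for the large 620 negative

# Hyperfocal distance: focusing here keeps everything from H/2 to infinity acceptably sharp
H_mm = f_mm**2 / (N * c_mm) + f_mm
near_limit_ft = (H_mm / 2) / 304.8  # 304.8 mm per foot

print(round(H_mm / 304.8, 1), "ft hyperfocal,", round(near_limit_ft, 1), "ft near limit")
# about 12.5 ft hyperfocal and 6.2 ft near limit, matching the stated 6-feet-to-infinity range
```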
References
Cameras | Ansco Panda | Technology | 169 |
354,102 | https://en.wikipedia.org/wiki/TI-92%20series | The TI-92 series is a line of graphing calculators produced by Texas Instruments. They include: the TI-92 (1995), the TI-92 II (1996), the TI-92 Plus (1998, 1999) and the Voyage 200 (2002). The design of these relatively large calculators includes a QWERTY keyboard. Because of this keyboard, the series was given the status of a "computer" rather than "calculator" by American testing facilities and cannot be used on tests such as the SAT or AP exams, while the similar TI-89 can be.
TI-92
The TI-92 was originally released in 1995, and was the first symbolic calculator made by Texas Instruments. It came with a computer algebra system (CAS) based on Derive, interactive geometry based on Cabri II, and was one of the first calculators to offer 3D graphing. The TI-92 was not allowed on most standardized tests, due mostly to its QWERTY keyboard, and its larger size was rather cumbersome compared with other graphing calculators. In response to these concerns, Texas Instruments introduced the TI-89, which is functionally similar to the original TI-92 but features Flash ROM and 188 KB of RAM in a smaller design without the QWERTY keyboard. The TI-92 was then replaced by the TI-92 Plus, essentially a TI-89 with the larger QWERTY-keyboard design of the TI-92. Eventually, TI released the Voyage 200, a smaller, lighter version of the TI-92 Plus with more Flash ROM.
The TI-92 is no longer sold through TI or its dealers, and is very hard to come by in stores.
TI-92 II
The TI-92 II was released in 1996, and was the first successor to the TI-92.
The TI-92 II was available both as a stand-alone product and as a user-installable II module that could be added to original TI-92 units to gain most of the feature improvements. The TI-92 II module was introduced early in 1996 and added a choice of five user languages (English, French, German, Italian and Spanish) as well as an additional 128 KB of user memory. Along with the TI-92, the TI-92 II was replaced in 1999 by the TI-92 Plus, which added Flash ROM and more RAM.
TI-92 Plus
The TI-92 Plus (or TI-92+) was released in 1998, slightly after the creation of the almost identical (in terms of software) TI-89, while physically looking exactly like its predecessor, the TI-92 (which lacked flash memory). Besides increased memory over its predecessor, the TI-92 Plus also featured a sharper "black" screen, which had first appeared on the TI-89 and which improves readability.
The TI-92 Plus was available both as a stand-alone product, and as a user-installable Plus module which could be added to original TI-92 and TI-92 II units to gain most of the feature improvements, most notably Flash Memory. A stand-alone TI-92 Plus calculator was functionally similar to the HW2 TI-89, while a module-upgraded TI-92 was functionally similar to the HW1 TI-89. Both versions could run the same releases of operating system software.
As of 2002, the TI-92 Plus was succeeded by the Voyage 200 and is no longer sold through TI or its dealers.
The TI-92 Plus is now available in an online emulator, featuring a list of frequently used commands.
Voyage 200
Voyage 200 (also V200 and Voyage 200 PLT) was released in 2002, being the replacement for the TI-92 Plus, with its only hardware upgrade over that calculator being an increase in the amount of flash memory available (2.7 megabytes for the Voyage 200 vs. 702 kilobytes for the TI-92 Plus). It also features a somewhat smaller and more rounded case design.
Like its predecessor, Voyage 200 is an advanced calculator that supports plotting multiple functions on the same graph, parametric, polar, 3D, and differential equation graphing as well as sequence representations. Its symbolic calculation system is based on a trimmed version of the calculation software Derive. In addition to its algebra and calculus capabilities, the Voyage 200 is packaged with list, spreadsheet, and data processing applications and can perform curve fitting to a number of standard functions and other statistical analysis operations. The calculator can also run most programs written for the TI-89 and TI-92 as well as programs specifically written for it. A large number of applications, ranging from games to interactive periodic tables can be found online.
The V200 is easily mistaken for a PDA or a small computer because of its large enclosure and its full QWERTY keyboard — a feature which disqualifies the calculator for use in many tests and examinations, including the American ACT and SAT. The TI-89 Titanium offers exactly the same functionality in a smaller format that is also legal on the SAT test, but not the ACT test.
Features
Technical specifications
See also
Comparison of Texas Instruments graphing calculators
References
External links
Official documentation: features of the Voyage 200.
Graphing calculators
Texas Instruments programmable calculators
Computer algebra systems
68k-based mobile devices
Products introduced in 1995
Products introduced in 1998
Products introduced in 2002 | TI-92 series | Mathematics | 1,116 |
75,471,835 | https://en.wikipedia.org/wiki/KPD%200005%2B5106 | KPD 0005+5106 is a helium-rich white dwarf located 1,350 light-years from Earth. As a "pre-white dwarf", it is believed to still be in the helium-burning phase, just before nuclear fusion finally stops. It is one of the hottest known white dwarfs, with a temperature of 200,000 K.
Possible companion object
KPD 0005+5106 has been observed to emit high-energy X-rays that regularly increase and decrease in luminosity every 4 hours and 42 minutes. This indicates that the star possibly has a companion orbiting it. Simulations show that a Jupiter-mass object could fill its Roche lobe in such an orbit and is the more likely companion for KPD 0005+5106. The white dwarf pulls material from its companion into a disk around itself before the material slams into the star's north and south poles. The concentration of material at the poles creates two bright spots emitting high-energy X-rays.
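For a sense of how tight such an orbit is, Kepler's third law converts the 4 h 42 min period into a separation. A rough sketch (the 0.6-solar-mass white dwarf is an assumed typical value, not a measured one, and the companion's mass is neglected):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
M_wd = 0.6 * M_sun   # assumed white dwarf mass

P = (4 * 3600) + (42 * 60)  # orbital period: 4 h 42 min, in seconds
a = (G * M_wd * P**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"{a / 6.957e8:.2f} solar radii")  # ~1.2 solar radii: an extremely close orbit
```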
References
White dwarfs
Cassiopeia (constellation)
Astronomical objects discovered in 2008 | KPD 0005+5106 | Astronomy | 216 |
28,872,395 | https://en.wikipedia.org/wiki/Methoxetamine | Methoxetamine (MXE) is a dissociative hallucinogen that has been sold as a designer drug. Unlike many dissociatives such as ketamine and phencyclidine (PCP), which were developed as pharmaceutical drugs for use as general anesthetics, it was designed specifically to increase the antidepressant effects of ketamine.
MXE is an arylcyclohexylamine. It acts mainly as an NMDA receptor antagonist, similarly to other arylcyclohexylamines like ketamine and PCP.
Recreational use
Effects
MXE is reported to have effects similar to ketamine's. It was often believed to possess opioid properties due to its structural similarity to 3-HO-PCP, but this assumption is not supported by binding data, which show that the compound has insignificant affinity for the μ-opioid receptor. Recreational use of MXE has been associated with hospitalizations from high and/or combined consumption in the US and UK. Acute reversible cerebellar toxicity has been documented in three cases of hospital admission due to MXE overdose, lasting between one and four days after exposure.
MXE was designed in part in an attempt to avoid the urotoxicity associated with ketamine abuse; it was thought the compound's increased potency and reduced dose would limit the accumulation of urotoxic metabolites in the bladder. Like ketamine, MXE has been found to produce bladder inflammation and fibrosis after high dose chronic administration in mice, although the dosages used were quite large. Reports of urotoxicity in humans have yet to appear in the medical literature.
Pharmacology
Pharmacodynamics
MXE acts mainly as a selective and high-affinity NMDA receptor antagonist, specifically of the dizocilpine (MK-801) site (Ki = 257 nM). It produces ketamine-like effects. In addition to antagonism of the NMDA receptor, MXE has been found to act as a serotonin reuptake inhibitor (Ki = 479 nM; IC50 = 2,400 nM). Conversely, it shows little or no effect on the reuptake of dopamine and norepinephrine (Ki and IC50 > 10,000 nM). Nonetheless, MXE has been found to activate dopaminergic neurotransmission, including in the mesolimbic reward pathway. This is a characteristic that it shares with other NMDA receptor antagonists, including ketamine, PCP, and dizocilpine (MK-801). Animal studies suggest MXE may have rapidly-acting antidepressant effects similar to those of ketamine. A study that assessed binding of MXE at 56 sites including neurotransmitter receptors and transporters found that MXE had Ki values of >10,000 nM for all sites except the dizocilpine site of the NMDA receptor and the serotonin transporter (SERT).
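The gap between the reported Ki (479 nM) and IC50 (2,400 nM) for serotonin reuptake inhibition is what the Cheng–Prusoff relation predicts when the assay's substrate concentration is appreciable relative to its Km. A sketch, with a substrate/Km ratio chosen purely to reproduce the reported pair of values (it is not taken from the study):

```python
def ki_from_ic50(ic50_nM: float, s_over_km: float) -> float:
    """Cheng-Prusoff relation for competitive inhibition: Ki = IC50 / (1 + [S]/Km)."""
    return ic50_nM / (1 + s_over_km)

# An assumed [S]/Km of 4 reproduces the reported numbers (illustration only)
print(ki_from_ic50(2_400, 4))  # 480.0 nM, close to the reported Ki of 479 nM
```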
Pharmacokinetics
MXE has a longer duration of action than that of ketamine.
Chemistry
MXE is an arylcyclohexylamine and a derivative of eticyclidine (PCE). It can also be thought of as the β-keto derivative of 3-methoxyeticyclidine (3-MeO-PCE), or the N-ethyl homologue of methoxmetamine (MXM) and methoxpropamine (MXPr). It is closely related structurally to ketamine, and more distantly to PCP.
MXE hydrochloride is soluble in ethanol up to 10 mg/ml at 25 °C.
Detection in body fluids
A forensic standard of MXE is available, and the compound has been posted on the Forendex website of potential drugs of abuse.
History
The qualitative effects of MXE were first described online in May 2010, and the compound became commercially available on a small scale in September 2010. By November, the use and sale of MXE had increased enough for it to be formally identified by the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA). By July 2011, the EMCDDA had identified 58 websites selling the compound at a cost of 145–195 euros for 10 grams.
Society and culture
Media coverage
Mixmag reported in January 2012 that people in the dance music and clubbing community had given MXE the slang name 'roflcoptr'. Vice commented that the phrase would likely only be used by "the same politicians, parents and journalists" who called mephedrone 'meow meow'. After the UK Home Office called it mexxy in press releases, the media adopted that name.
A literature review was published in March 2012 which looked at scientific literature and information on the web. It concluded that "the online availability of information on novel psychoactive drugs, such as MXE, may constitute a pressing public health challenge. Better international collaboration levels and novel forms of intervention are necessary to tackle this fast-growing phenomenon."
Legal status
Brazil
MXE became classified as a narcotic in Brazil in February 2014.
Canada
As of January 2010 MXE is a controlled substance in Canada.
China
As of October 2015 MXE is a controlled substance in China.
European Union
On 16 June 2014, the European Commission proposed that MXE be banned across the European Union, subjecting those in violation to criminal sanctions. This is following the procedure for risk-assessment and control of new psychoactive substances set up by the council: Decision 2005/387/JHA.
Finland
Scheduled in "government decree on narcotic substances, preparations and plants" and is hence illegal.
Israel
MXE became classified as an illegal narcotic in Israel in May 2012.
Japan
MXE became a controlled substance in Japan from 1 July 2012, by amendment to the Pharmaceutical Affairs Law.
Poland
MXE is a controlled substance (group II-P) making it illegal to produce, sell or possess in The Republic of Poland as of 1 July 2015.
Russia
MXE has been a controlled substance in Russia since October 2011.
Sweden
MXE became classified as a narcotic in Sweden in late February 2012.
Switzerland
MXE has been illegal in Switzerland since December 2011.
United Kingdom
Prior to March 2012, MXE was not controlled by the UK's Misuse of Drugs Act. In March 2012, the Home Office referred MXE to the Advisory Council on the Misuse of Drugs (ACMD) for possible temporary control under the powers given in the Police Reform and Social Responsibility Act 2011. The ACMD gave its advice on 23 March, with the chair commenting that "the evidence shows that the use of methoxetamine can cause harm to users and the ACMD advises that it should be subject to a temporary class drug order." In April 2012, MXE was placed under temporary class drug control, which prohibited its import and sale for 12 months.
Theresa May commented in her reply to the ACMD that "the next step in this process is for the ACMD to undertake a full assessment of MXE for consideration for its permanent control under the 1971 Act." She goes on to say that she hopes the ACMD will do this as a part of the review of ketamine, "including its analogues" and that this review will be completed "within the 12 months from the making of the current order".
On 18 October 2012 the ACMD released a report about MXE, saying that the "harms of methoxetamine are commensurate with Class B of the Misuse of Drugs Act (1971)", despite the fact that the act does not classify drugs based on harm. The report went on to suggest that all analogues of MXE should also become class B drugs and suggested a catch-all clause covering both existing and unresearched arylcyclohexamines.
MXE ceased to be covered by the temporary prohibition on 26 February 2013, when it became classified as a Class B drug.
United Nations
MXE was made a schedule II drug in November 2016.
United States
On June 6, 2022, the U.S. Drug Enforcement Administration published a final rule placing MXE in Schedule I of the Controlled Substances Act.
References
External links
Erowid.org – Methoxetamine Information
Arylcyclohexylamines
Designer drugs
Dissociative drugs
Euphoriants
Ketones
NMDA receptor antagonists
3-Methoxyphenyl compounds
Serotonin reuptake inhibitors | Methoxetamine | Chemistry | 1,782 |
56,141,758 | https://en.wikipedia.org/wiki/Sabine%20Brunswicker | Sabine Brunswicker is a Full Professor for Digital Innovation at Purdue University, West Lafayette, United States, and the Founder and Director of the Research Center for Open Digital Innovation (RCODI). She is a computational social scientist with a particular focus on open digital innovation who engages with an interdisciplinary group of researchers to predict individual and collective outcomes in open digital innovation. She has written numerous research papers and book chapters on open innovation and is an internationally recognized authority in the field. She chaired the World Economic Forum workshop session titled "Open innovation as a driver of business and economic transformation" in 2014. She is known for pioneering Purdue IronHacks, an iterative hacking initiative (www.ironhacks.com) that encourages experiential learning at Purdue University.
Education and career
Brunswicker holds a Bachelor of Science in Engineering and Management Sciences from Technische Universität Darmstadt, Darmstadt, Germany, a Master of Commerce from the University of New South Wales, Australia, a Master of Science in Engineering and Management Sciences from the University of Technology, Darmstadt, Germany, and a PhD in Engineering Sciences with highest honors from the University of Stuttgart, Germany. In 2012, she won the Best Dissertation Award from the publisher John Wiley & Sons and the International Society for Professional Innovation Management (ISPIM) for her doctoral dissertation.
She is currently a full professor and director of Research Center for Open Digital Innovation (RCODI) at Purdue University. She is also an adjunct professor of Digital Innovation in the School of Information Systems at Queensland University of Technology, Brisbane Australia. Until 2016, she was Visiting Professor for Digital Innovation at ESADE Business School. Prior to joining Purdue she was Head of Open Innovation at the Fraunhofer Institute for Industrial Engineering in Stuttgart, Germany.
She has won numerous awards throughout her professional career, including the runner-up Emerging Scholar Award at the World Open Innovation Conference (ESADE Business School, Spain), the John P. Lisack Early-Career Engagement Award from Purdue Polytechnic Institute, and recognition as a top researcher of 2012 from the Fraunhofer Society for her accomplishments in the area of open innovation.
Selected publications (book chapters)
Kremser, W., Pentland, B., & Brunswicker, S. (2019). The Continuous Transformation of Interdependence in Networks of Routines. In Book Series: Research in Sociology of Organizations. Emerald Insight, invited publication.
Brunswicker, S., Majchrzak, A., Almirall, E., & Tee, R. (2016). Co-creating value from open data: from incentivizing developers to inducing co-creation in open data ecosystems. In S. Nambisan (Ed.): Open Innovation and Innovation Networks (Vol. 1). World Scientific Publishing.
Brunswicker, S. (2016). Managing open innovation in small and medium-sized firms in the tourism sector. In W. Egger, I. Gula, & D. Walcher (Eds.), Open tourism: Open innovation, crowdsourcing, and collaborative consumption challenging the tourism industry. Berlin: Springer.
Bagherzadeh, M., & Brunswicker, S. (2016). Governance of Knowledge Flows in Open Exploration: The Role of Behavioral Control. In Das, T.K. (Ed.), Decision Making in Behavioral Strategy (DMBS), Information Age Publishing (IAP).
Brunswicker, S., & Johnson, J. (2015). From governmental open data toward governmental open innovation (GOI). In D. Archibugi & A. Filippetti (Eds.), The handbook of global science, technology, and innovation (1 ed., pp. 504–524): New Jersey: John Wiley & Sons, Ltd. 2
Brunswicker, S., & van de Vrande, V. (2014). Exploring open innovation in small and medium-sized enterprises. In H. Chesbrough, W. Vanhaverbeke, & J. West (Eds.), New frontiers in open innovation (1 ed., pp. 135–156). Oxford, United Kingdom: Oxford University Press.
Selected publications (referred journals)
Brunswicker, S., & Chesbrough, H. (2018). The Adoption of Open Innovation in Large Firms: Practices, Measures, and Risks. Research Technology Management.
Brunswicker, S., Bilgram, V., & Fueller, J. (2017). Taming wicked civic challenges with an innovative crowd. Business Horizons, 60(2).
Bogers, M., Zobel, A.-K., Afuah, A., Almirall, E., Brunswicker, S., Dahlander, L., ... Magnussen, M. (2017). The Open Innovation Landscape: Established Perspectives and Emerging Themes Across Different Levels of Analysis. Industry & Innovation, 24(1), 8–40.
Brunswicker, S., Matei, S. A., Zentner, M., Zentner, L., & Klimeck, G. (2017). Creating impact in the digital space: digital practice dependency in communities of digital scientific innovations. Scientometrics, 110(1), 417–426.
Brunswicker, S., & Vanhaverbeke, W. (2015). Open innovation in small and medium-sized enterprises (SMEs): External knowledge sourcing strategies and internal organizational facilitators. Journal of Small Business Management, 53(4), 1241-1263.
Brunswicker, S., Bertino, E., & Matei, S. (2015). Big data for open digital innovation – A research roadmap. Big Data Research, 2(2), 53–58.
Chesbrough, H., & Brunswicker, S. (2014). A Fad or a Phenomenon? The Adoption of Open Innovation Practices in Large Firms. Research Technology Management, 57(2), 16–25.
Koch, G., Füller, J., & Brunswicker, S. (2011). Online crowdsourcing in the public sector: How to design open government platforms. Online Communities and Social Computing, 6778, 203-212. doi:10.1007/978-3-642-21796-8_22
Brunswicker, S., & Hutschek, U. (2010). Crossing horizons: Leveraging cross-industry innovation search in the front-end of the innovation process. International Journal of Innovation Management, 14(04), 683-702.
Research grants
08/2016 to 07/2017 Balancing the Grid through Energy Monitoring Systems: Information Visualization for Collective Awareness; Deans Graduate Assistant Award for Outstanding Research Proposals; $20,000; Principal Investigator
08/2016 to 10/2016 Biomedical Big Data Hacking for Civic Health Awareness; NIH grant; $2,000; Co-Principal Investigator (with Bethany McGowan as Principal Investigator)
07/2015 to 06/2017 Creating Impact from Governmental Open Data (OD): Innovation Process Transparency in OD Contest Design; NSF grant; Science of Science and Innovation Policy (SciSIP); 24-month grant; $238,641.29; Principal Investigator (with Ann Majchrzak, USC, as Co-Principal Investigator)
06/2015 – 05/2017 Red Hat® Doctoral Researcher on Open Innovation Communities; Donation received from Red Hat Inc., Raleigh, North Carolina $100,000; Principal Investigator
04/2015 to 12/2016 Managing Open Innovation in Large Firms: Case Study Analysis; Sponsored Research Project; Sponsor: Accenture High Performance Institute, Chicago; $63,531.32; Principal Investigator
01/2015 to 08/2015 Conceptualization of the Social and Innovation Opportunities of Data Analysis; NSF grant; CIF21 DIBBs; 7-month grant; $99,718; Co-Principal Investigator (with Mike Zentner, Principal Investigator, Purdue University)
07/2014 to 11/2015 Global Open Innovation Executive Survey 2015; Sponsored Research Grant; Sponsor: University of California, Berkeley, Garwood Center for Corporate Innovation; $25,000; Principal Investigator
03/2015 – Present Open Innovation Community Research; Donation received from Landcare Research, Gerald Street, Lincoln, New Zealand 7608; $10,000 Principal Investigator (with Jeremiah Johnson and Ann Majchrzak)
2014 Research on CyberInfrastructures and Behavioral analytics, Purdue Internal Funding through nanoHUB.org NCN supported RCODI discovery efforts of a doctoral student with $18,201. Principal Investigator
2014-2015 Exploratory research in the social sciences; $50,000; Executive Office of the Vice President (EVPR); Co-Principal Investigator (with Sorin Matei, Communications, and Gerhard Klimeck, ECE); $22,496.
2014 Open Strategies; Internal Funding from PCRD (Purdue Center for Regional Development); $10,066; Principal Investigator
References
Living people
21st-century German social scientists
University of New South Wales alumni
University of Stuttgart alumni
Purdue University faculty
Academic staff of Queensland University of Technology
Year of birth missing (living people)
Technische Universität Darmstadt alumni
Information systems researchers | Sabine Brunswicker | Technology | 1,922 |
29,102,236 | https://en.wikipedia.org/wiki/Biodilution | Biodilution, sometimes referred to as bloom dilution, is the decrease in concentration of an element or pollutant with an increase in trophic level. This effect is primarily observed during algal blooms whereby an increase in algal biomass reduces the concentration of pollutants in organisms higher up in the food chain, like zooplankton or daphnia.
The primary elements and pollutants of concern are heavy metals such as mercury, cadmium, and lead. These toxins have been shown to bioaccumulate up a food web. In some cases, metals, such as mercury, can biomagnify. This is a major concern since methylmercury, the most toxic mercury species, can be found in high concentrations in human-consumed fish and other aquatic organisms.
Numerous studies have found lower mercury concentrations in zooplankton from eutrophic (nutrient-rich and highly productive) aquatic environments as compared to oligotrophic (low-nutrient) ones. Nutrient enrichment (mainly phosphorus and nitrogen) reduces the input of mercury, and other heavy metals, into aquatic food webs through this biodilution effect. Primary producers, such as phytoplankton, take up these heavy metals and accumulate them in their cells. The larger the population of phytoplankton, the less concentrated these pollutants will be in their cells. Once consumed by primary consumers, such as zooplankton, these phytoplankton-bound pollutants are incorporated into the consumer's cells. Higher phytoplankton biomass means a lower concentration of pollutants accumulated by the zooplankton, and so on up the food web. This effect causes an overall dilution of the original concentration up the food web. That is, in a high-bloom condition, the concentration of a pollutant will be lower in the zooplankton than in the phytoplankton.
Although most biodilution studies have focused on freshwater environments, biodilution has been shown to occur in the marine environment as well. In the Northwater Polynya, located in Baffin Bay, cadmium, lead, and nickel concentrations were found to correlate negatively with trophic level. Cadmium and lead are both non-essential metals that compete with calcium within an organism, which is detrimental to organism growth.
Most studies measure bioaccumulation and biodilution using the δ15N isotope of nitrogen. The δ15N isotopic signature is enriched up the food web: a predator will have a higher δ15N than its prey. This trend allows the trophic position of an organism to be derived. Coupled with the concentration of a specific pollutant, such as mercury, the relationship between concentration and trophic position can be assessed.
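A minimal sketch of that analysis in Python (the measurements are invented; the trophic-level formula assumes a primary-consumer baseline and the commonly used enrichment of about 3.4‰ per trophic level):

```python
import numpy as np

# Made-up paired measurements for several organisms in one food web
d15N = np.array([5.0, 8.4, 11.8, 15.2])      # per mil, enriched up the food web
hg_ppm = np.array([0.20, 0.11, 0.06, 0.03])  # pollutant concentration

# Trophic level from d15N, relative to a primary-consumer baseline (TL = 2)
baseline_d15N = 5.0
TL = 2 + (d15N - baseline_d15N) / 3.4

# Slope of log10(concentration) vs trophic level:
# negative slope -> biodilution; positive slope -> biomagnification
slope, intercept = np.polyfit(TL, np.log10(hg_ppm), 1)
print(f"slope = {slope:.2f}")  # negative here, indicating biodilution
```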
While most heavy metals bioaccumulate, under certain conditions, heavy metals and organic pollutants have the potential to biodilute, making a higher organism less exposed to the toxin.
References
Ecology terminology
Pollutants | Biodilution | Biology | 632 |
18,632,796 | https://en.wikipedia.org/wiki/Hacking%20at%20Random | Hacking at Random was an outdoor hacker conference that took place in the Netherlands from August 13 to August 16, 2009. It had an attendance of 2300 people.
It was situated on a large campsite called the Paasheuvel, near the small town of Vierhouten in the Netherlands.
The conference was one of a sequence of Dutch hacker events that began with the Galactic Hacker Party in 1989, followed by Hacking at the End of the Universe in 1993, Hacking In Progress in 1997, Hackers At Large in 2001, and What the Hack in 2005, and was succeeded by Observe. Hack. Make. in 2013, Still Hacking Anyway in 2017, and May Contain Hackers in 2022. A pre-event announcement by Hackaday contributor "Eliot" stated that it was brought by the same people as What the Hack 2005.
Like the previous Dutch hacker cons, this event thrived on its volunteers and called everyone, including the paying visitors, a volunteer. Everyone was expected to do their part in making the event a success.
With over 170 talks and 3 large lecture halls, this edition was by far the largest in the series of quadrennial Dutch events.
Special side tents offering an off-the-beaten-track program added to the open atmosphere, which was mainly driven by mixing technology, art, and social aspects. A custom camp currency (soon counterfeited using 3D printers), illuminated flying objects at night, and lock-picking contests during the day were accompanied by techno DJs generating basslines from raw network-modulation data.
References
External links
Official website
Free-software events
Hacker conventions
Nunspeet | Hacking at Random | Technology | 322 |
61,503,405 | https://en.wikipedia.org/wiki/Watergen | Watergen Inc. (formerly Water-Gen) is an Israel-based global company that develops atmospheric water generator (AWG) systems. Its systems generate water from air at 250 Wh per liter.
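To put the 250 Wh per liter figure in perspective, a quick sketch of daily energy demand (the household water consumption below is an illustrative assumption, not a Watergen figure):

```python
energy_per_liter_kwh = 0.250  # 250 Wh per liter, per the figure above
liters_per_day = 20           # assumed drinking and cooking water for a small household

daily_kwh = energy_per_liter_kwh * liters_per_day
print(daily_kwh)  # 5.0 kWh per day, roughly the daily output of a ~1 kW solar array
```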
History
Watergen was founded in 2009 by entrepreneur and former military commander Arye Kohavi and a team of engineers with the goal of providing freely accessible water to troops around the world.
Following the acquisition of Watergen by billionaire Michael Mirilashvili in 2016, the company turned its attention to addressing water scarcity and responding to the needs of people in the aftermath of natural catastrophes.
Since then, Watergen has created a series of products appropriate for a variety of applications, ranging from remote rural locations to commercial office complexes and private homes. Its systems have been used around the world by armies as well as the public and private sectors in the United States, Latin America, India, Vietnam, Uzbekistan, and the African continent.
The company's headquarters are in Petah Tikva. It has a subsidiary in the United States.
In May 2020, the company installed a water-from-air device at the Al-Rantisi Hospital in Gaza. This initiative, a result of a collaboration with the Palestinian power company Mayet Al Ahel, aimed to provide clean and safe off-grid drinking water for the pediatric hospital's staff and patients. The project, led by Mirilashvili, intended to address water scarcity in the Gaza Strip.
References
External links
Official Website
Israeli companies established in 2009
Israeli inventions
Water industry | Watergen | Environmental_science | 313 |
31,537 | https://en.wikipedia.org/wiki/Transuranium%20element | The transuranium (or transuranic) elements are the chemical elements with atomic number greater than 92, which is the atomic number of uranium. All of them are radioactively unstable and decay into other elements. Except for neptunium and plutonium, which have been found in trace amounts in nature, none occur naturally on Earth and they are synthetic.
Overview
Of the elements with atomic numbers 1 to 92, most can be found in nature, having stable isotopes (such as oxygen) or very long-lived radioisotopes (such as uranium), or existing as common decay products of the decay of uranium and thorium (such as radon). The exceptions are technetium, promethium, astatine, and francium; all four occur in nature, but only in very minor branches of the uranium and thorium decay chains, and thus all save francium were first discovered by synthesis in the laboratory rather than in nature.
All elements with higher atomic numbers were first discovered in the laboratory, with neptunium and plutonium later also discovered in nature. They are all radioactive, with half-lives much shorter than the age of the Earth, so any primordial (i.e. present at the Earth's formation) atoms of these elements have long since decayed. Trace amounts of neptunium and plutonium form in some uranium-rich rock, and small amounts are produced during atmospheric tests of nuclear weapons. These two elements are generated by neutron capture in uranium ore with subsequent beta decays (e.g. ²³⁸U + n → ²³⁹U → ²³⁹Np → ²³⁹Pu).
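A sketch of the timescales in that capture-and-decay chain, using the standard half-lives (about 23.45 minutes for ²³⁹U, 2.356 days for ²³⁹Np, and 24,110 years for ²³⁹Pu, which is treated as stable here):

```python
import numpy as np

LN2 = np.log(2)
# Decay constants (per hour) from the standard half-lives
lam_U  = LN2 / (23.45 / 60)   # U-239:  23.45 minutes
lam_Np = LN2 / (2.356 * 24)   # Np-239: 2.356 days
# Pu-239 (half-life 24,110 years) is treated as stable on this timescale

def chain(t_hours, n0=1.0):
    """Two-step Bateman solution: amounts of U-239, Np-239, Pu-239 at time t."""
    u = n0 * np.exp(-lam_U * t_hours)
    np_ = n0 * lam_U / (lam_Np - lam_U) * (np.exp(-lam_U * t_hours) - np.exp(-lam_Np * t_hours))
    pu = n0 - u - np_  # conservation, neglecting Pu-239's own decay
    return u, np_, pu

for t in [1, 24, 24 * 14]:  # one hour, one day, two weeks after neutron capture
    u, np239, pu = chain(t)
    print(f"t={t:>4} h: U-239={u:.3f}  Np-239={np239:.3f}  Pu-239={pu:.3f}")
# After two weeks the captured atom has almost entirely become Pu-239.
```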
All elements beyond plutonium are entirely synthetic; they are created in nuclear reactors or particle accelerators. The half-lives of these elements show a general trend of decreasing as atomic numbers increase. There are exceptions, however, including several isotopes of curium and dubnium. Some heavier elements in this series, around atomic numbers 110–114, are thought to break the trend and demonstrate increased nuclear stability, comprising the theoretical island of stability.
Transuranic elements are difficult and expensive to produce, and their prices increase rapidly with atomic number. As of 2008, the cost of weapons-grade plutonium was around $4,000/gram, and californium exceeded $60,000,000/gram. Einsteinium is the heaviest element that has been produced in macroscopic quantities.
Transuranic elements that have not been discovered, or have been discovered but are not yet officially named, use IUPAC's systematic element names. The naming of transuranic elements may be a source of controversy.
Discoveries
So far, essentially all transuranium elements have been discovered at four laboratories: Lawrence Berkeley National Laboratory (LBNL) in the United States (elements 93–101, 106, and joint credit for 103–105), the Joint Institute for Nuclear Research (JINR) in Russia (elements 102 and 114–118, and joint credit for 103–105), the GSI Helmholtz Centre for Heavy Ion Research in Germany (elements 107–112), and RIKEN in Japan (element 113).
The Radiation Laboratory (now LBNL) at University of California, Berkeley, led principally by Edwin McMillan, Glenn Seaborg, and Albert Ghiorso, during 1945-1974:
93. neptunium, Np, named after the planet Neptune, as it follows uranium and Neptune follows Uranus in the planetary sequence (1940).
94. plutonium, Pu, named after Pluto, following the same naming rule as it follows neptunium and Pluto follows Neptune in the Solar System (1940).
95. americium, Am, named after the Americas, the continent where it was first produced, because it is an analog of europium (1944).
96. curium, Cm, named after Pierre and Marie Curie, scientists who separated out the first radioactive elements (1944), as its lighter analog gadolinium was named after Johan Gadolin.
97. berkelium, Bk, named after Berkeley, where the University of California, Berkeley is located (1949).
98. californium, Cf, named after California, where the university is located (1950).
99. einsteinium, Es, named after Albert Einstein (1952).
100. fermium, Fm, named after Enrico Fermi, the physicist who produced the first controlled chain reaction (1952).
101. mendelevium, Md, named after Russian chemist Dmitri Mendeleev, credited for being the primary creator of the periodic table of the chemical elements (1955).
102. nobelium, No, named after Alfred Nobel (1958). The element was originally claimed by a team at the Nobel Institute in Sweden (1957) – though it later became apparent that the Swedish team had not discovered the element, the LBNL team decided to adopt their name nobelium. This discovery was also claimed by JINR, which doubted the LBNL claim, and named the element joliotium (Jl) after Frédéric Joliot-Curie (1965). IUPAC concluded that the JINR had been the first to convincingly synthesize the element (1965), but retained the name nobelium as deeply entrenched in the literature.
103. lawrencium, Lr, named after Ernest Lawrence, a physicist best known for development of the cyclotron, and the person for whom Lawrence Livermore National Laboratory and LBNL (which hosted the creation of these transuranium elements) are named (1961). This discovery was also claimed by the JINR (1965), which doubted the LBNL claim and proposed the name rutherfordium (Rf) after Ernest Rutherford. IUPAC concluded that credit should be shared, retaining the name lawrencium as entrenched in the literature.
104. rutherfordium, Rf, named after Ernest Rutherford, who was responsible for the concept of the atomic nucleus (1969). This discovery was also claimed by JINR, led principally by Georgy Flyorov: they named the element kurchatovium (Ku), after Igor Kurchatov. IUPAC concluded that credit should be shared, and adopted the LBNL name rutherfordium.
105. dubnium, Db, an element that is named after Dubna, where JINR is located. Originally named hahnium (Ha) in honor of Otto Hahn by the Berkeley group (1970). This discovery was also claimed by JINR, which named it nielsbohrium (Ns) after Niels Bohr. IUPAC concluded that credit should be shared, and renamed the element dubnium to honour the JINR team.
106. seaborgium, Sg, named after Glenn T. Seaborg. This name caused controversy because Seaborg was still alive, but it eventually became accepted by international chemists (1974). This discovery was also claimed by JINR. IUPAC concluded that the Berkeley team had been the first to convincingly synthesize the element.
The Gesellschaft für Schwerionenforschung (Society for Heavy Ion Research) in Darmstadt, Hessen, Germany, led principally by Gottfried Münzenberg, Peter Armbruster, and Sigurd Hofmann, during 1980-2000:
107. bohrium, Bh, named after Danish physicist Niels Bohr, important in the elucidation of the structure of the atom (1981). This discovery was also claimed by JINR. IUPAC concluded that the GSI had been the first to convincingly synthesise the element. The GSI team had originally proposed nielsbohrium (Ns) to resolve the naming dispute on element 105, but this was changed by IUPAC as there was no precedent for using a scientist's first name in an element name.
108. hassium, Hs, named after the Latin form of the name of Hessen, the German Bundesland where this work was performed (1984). This discovery was also claimed by JINR. IUPAC concluded that the GSI had been the first to convincingly synthesize the element, while acknowledging the pioneering work at JINR.
109. meitnerium, Mt, named after Lise Meitner, an Austrian physicist who was one of the earliest scientists to study nuclear fission (1982).
110. darmstadtium, Ds, named after Darmstadt, Germany, the city in which this work was performed (1994). This discovery was also claimed by JINR, which proposed the name becquerelium after Henri Becquerel, and by LBNL, which proposed the name hahnium to resolve the dispute on element 105 (despite having protested the reusing of established names for different elements). IUPAC concluded that GSI had been the first to convincingly synthesize the element.
111. roentgenium, Rg, named after Wilhelm Röntgen, discoverer of X-rays (1994).
112. copernicium, Cn, named after astronomer Nicolaus Copernicus (1996).
RIKEN in Wakō, Saitama, Japan, led principally by Kōsuke Morita:
113. nihonium, Nh, named after Japan (Nihon in Japanese) where the element was discovered (2004). This discovery was also claimed by JINR. IUPAC concluded that RIKEN had been the first to convincingly synthesize the element.
JINR in Dubna, Russia, led principally by Yuri Oganessian, in collaboration with several other labs including Lawrence Livermore National Laboratory (LLNL), since 2000:
114. flerovium, Fl, named after Soviet physicist Georgy Flyorov, founder of JINR (1999).
115. moscovium, Mc, named after Moscow Oblast, where the element was discovered (2004).
116. livermorium, Lv, named after Lawrence Livermore National Laboratory, a collaborator with JINR in the discovery (2000).
117. tennessine, Ts, after Tennessee, where the berkelium target needed for the synthesis of the element was manufactured (2010).
118. oganesson, Og, after Yuri Oganessian, who led the JINR team in its discovery of elements 114 to 118 (2002).
Superheavy elements
Superheavy elements (also known as superheavies or superheavy atoms, commonly abbreviated SHE) usually refer to the transactinide elements, beginning with rutherfordium (atomic number 104). (Lawrencium, the first 6d element, is sometimes but not always included as well.) They have only been made artificially and currently serve no practical purpose, because their short half-lives, ranging from a few hours down to just milliseconds, cause them to decay very quickly and also make them extremely hard to study.
Superheavies have all been created since the latter half of the 20th century and are continually being created during the 21st century as technology advances. They are created through the bombardment of elements in a particle accelerator, in quantities on the atomic scale, and no method of mass creation has been found.
Applications
Transuranic elements may be used to synthesize superheavy elements. Elements of the island of stability have potentially important military applications, including the development of compact nuclear weapons. The potential everyday applications are vast; americium is used in devices such as smoke detectors and spectrometers.
See also
Bose–Einstein condensate (also known as superatom)
Minor actinide
Deep geological repository, a place to deposit transuranic waste
References
Further reading
Eric Scerri, A Very Short Introduction to the Periodic Table, Oxford University Press, Oxford, 2011.
The Superheavy Elements
Annotated bibliography for the transuranic elements from the Alsos Digital Library for Nuclear Issues.
Transuranium elements
Super Heavy Elements network official website (network of the European integrated infrastructure initiative EURONS)
Darmstadtium and beyond
Christian Schnier, Joachim Feuerborn, Bong-Jun Lee: Traces of transuranium elements in terrestrial minerals? (online, PDF file, 493 kB)
Christian Schnier, Joachim Feuerborn, Bong-Jun Lee: The search for super heavy elements (SHE) in terrestrial minerals using XRF with high energy synchrotron radiation. (online, PDF file, 446 kB)
Nuclear physics
Sets of chemical elements | Transuranium element | Physics | 2,568 |
58,105,226 | https://en.wikipedia.org/wiki/Cavg |
Cavg is the average concentration of a drug in the central circulation during a dosing interval in steady state. It is calculated by

$$C_\text{avg} = \frac{\mathrm{AUC}_\tau}{\tau}$$

where $\mathrm{AUC}_\tau$ is the area under the concentration–time curve over one dosing interval and $\tau$ is the dosing interval.
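In practice, the AUC over a dosing interval is often estimated from sampled plasma concentrations using the trapezoidal rule. A minimal sketch in Python (the sample times and concentrations below are hypothetical):

```python
def c_avg(times, concs, tau):
    """Average steady-state concentration: trapezoidal AUC over the interval, divided by tau."""
    auc = sum((times[i + 1] - times[i]) * (concs[i] + concs[i + 1]) / 2
              for i in range(len(times) - 1))
    return auc / tau

# Hypothetical samples over a 12 h dosing interval (times in h, concentrations in mg/L):
t = [0, 1, 2, 4, 8, 12]
c = [1.0, 4.2, 3.6, 2.5, 1.6, 1.1]
print(c_avg(t, c, tau=12))  # ~2.18 mg/L
```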
See also
Area under the curve (pharmacokinetics)
Cmax (pharmacology)
References
Pharmacokinetic metrics | Cavg | Chemistry | 83 |
68,724,006 | https://en.wikipedia.org/wiki/IPhone%2013%20Pro | The iPhone 13 Pro and iPhone 13 Pro Max are smartphones developed and marketed by Apple Inc. They were the flagship smartphones in the fifteenth generation of the iPhone, succeeding the iPhone 12 Pro and iPhone 12 Pro Max respectively. The devices were unveiled alongside the iPhone 13 and iPhone 13 Mini at an Apple Special Event at Apple Park in Cupertino, California on September 14, 2021, and became available ten days later, on September 24. They were discontinued on September 7, 2022, alongside the iPhone 11 and iPhone 12 Mini, following the announcement of the iPhone 14 and iPhone 14 Pro.
Major upgrades over its predecessor include improved battery life, improved cameras and computational photography, rack focus for video in a new "Cinematic Mode" at 1080p 30 fps, Apple ProRes video recording, a new A15 Bionic system on a chip, and a variable 10–120 Hz display, marketed as ProMotion.
History
Before announcement
Development of the successors to the iPhone 12 Pro models aimed to shrink the notch by 20%, made possible by relocating the front-firing speaker to the upper edge of the TrueDepth sensor housing, and to raise the display refresh rate to up to 120 Hz for smoother motion. Early rumors suggested that the color options for the iPhone 13 Pro models would include Sunset Gold (a new gold option), Rosé (a rename of Gold), Pearl (a rename of Silver), and Matte Black. However, no Sunset Gold option was unveiled; Apple instead introduced the Sierra Blue color option for the iPhone 13 Pro and iPhone 13 Pro Max at the September event.
After announcement
The iPhone 13 Pro and iPhone 13 Pro Max were officially announced alongside the ninth-generation iPad, 6th generation iPad Mini, Apple Watch Series 7, iPhone 13, and iPhone 13 Mini by a virtual press event filmed and recorded at Apple Park in Cupertino, California on September 14, 2021. Pre-orders began on September 17 at 5:00 AM PST. Pricing starts at US$999 for the iPhone 13 Pro and US$1099 for the iPhone 13 Pro Max, the same as their respective previous generations.
On September 7, 2022, Apple removed the iPhone 13 Pro and iPhone 13 Pro Max as well as the iPhone 11 and iPhone 12 Mini from their official website following the release of the iPhone 14, iPhone 14 Plus, iPhone 14 Pro and iPhone 14 Pro Max.
In March 2023, Apple began selling refurbished iPhone 13 Pro models on their official website.
Design
The iPhone 13 Pro and iPhone 13 Pro Max's design is mostly unchanged from their respective predecessors. However, the rear camera module now covers a larger area due to the larger lenses. The Face ID and camera module on the front display, or "notch", is now 20% smaller than in previous generations.
The back of the iPhone 13 Pro has a matte glass finish, and the front is protected by Gorilla Glass.
The iPhone 13 Pro and 13 Pro Max are available in five colors: Silver, Graphite, Gold, Sierra Blue, and Alpine Green. Sierra Blue is a new color replacing Pacific Blue.
On March 8, 2022, at Apple's Special Event "Peek Performance", Apple revealed a new Alpine Green color option, which became available on March 18, 2022.
Specifications
Hardware
The iPhone 13 Pro and Pro Max use an Apple-designed A15 Bionic processor featuring a 16-core neural engine, 6-core CPU (with 2 performance cores and 4 efficiency cores), and 5-core GPU. The A15 Bionic also contains a new image processor.
The iPhone 13 Pro achieved an AnTuTu benchmark score of 846,433, reflecting smooth graphics performance.
More 5G bands are available to support more carriers, especially outside the US.
Display
The iPhone 13 Pro has a 6.06-inch (154 mm) OLED display (marketed as 6.1-inch) with a resolution of 2532 × 1170 pixels (2.9 megapixels) at 460 PPI, while the iPhone 13 Pro Max has a 6.68-inch (170 mm) OLED display (marketed as 6.7-inch) with a resolution of 2778 × 1284 pixels (3.5 megapixels) at 458 PPI. Both models have the Super Retina XDR OLED display with improved typical brightness up to 1,000 nits from 800 nits, max brightness up to 1,200 nits, and a variable 10–120 Hz ProMotion refresh rate, which can drop as low as 10 Hz to preserve battery. The ProMotion name was previously used on the iPad Pro (2nd Generation) and later models.
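The quoted pixel densities follow directly from the resolution and diagonal size; a quick sanity check in Python using the figures above:

```python
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal pixel count divided by diagonal size in inches."""
    return hypot(width_px, height_px) / diagonal_in

print(round(ppi(2532, 1170, 6.06)))  # 460 (iPhone 13 Pro)
print(round(ppi(2778, 1284, 6.68)))  # 458 (iPhone 13 Pro Max)
```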
Batteries
Apple claims up to 1.5 more hours of battery life on the iPhone 13 Pro, and 2.5 more hours on the 13 Pro Max, than their respective predecessors. Rated capacities are 11.97 Wh (3,095 mAh) on the 13 Pro, an increase from the 10.78 Wh (2,815 mAh) battery found in the iPhone 12 Pro, while the 13 Pro Max is rated at 16.75 Wh (4,352 mAh), another increase from the 14.13 Wh (3,687 mAh) battery found in the iPhone 12 Pro Max. Both models can charge with MagSafe up to 15 W, Qi wireless charging up to 7.5 W, and Lightning up to 20-23 W for the Pro model and 20-27 W for the Pro Max model.
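The watt-hour and milliamp-hour ratings above are mutually consistent with a nominal pack voltage of roughly 3.85 V, typical of lithium-ion cells; the voltage here is inferred from the rated figures, not an Apple-published number:

```python
def nominal_voltage(watt_hours: float, milliamp_hours: float) -> float:
    """Infer nominal pack voltage from rated energy and charge."""
    return watt_hours / (milliamp_hours / 1000.0)

print(round(nominal_voltage(11.97, 3095), 2))  # ~3.87 V (13 Pro)
print(round(nominal_voltage(16.75, 4352), 2))  # ~3.85 V (13 Pro Max)
```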
Cameras
The iPhone 13 Pro features four cameras: one front-facing camera for selfie and three rear-facing cameras which includes a telephoto, wide, and ultra-wide camera. The rear-facing cameras all contain larger sensors than the iPhone 12 Pro, allowing for more light-gathering. The wide and ultra-wide also have larger apertures to capture more light and increase low-light performance. The ultra-wide camera also has autofocus for the first time. The 77 mm telephoto has a smaller aperture than the 12 Pro's, but has the advantage of being able to use Night Mode. The larger telephoto also increases the digital zoom capability to 15x.
The cameras use a new computational photography engine, called Smart HDR 4. Smart HDR 4 processes recognized faces in photos separately using local adjustments. Users can also choose from a range of photographic styles during capture, including rich contrast, vibrant, warm, and cool. Apple clarifies this is different than a filter because it works intelligently with the image processing algorithm during capture to apply local adjustments to an image.
The camera app contains a new mode called Cinematic Mode, which allows users to rack focus between subjects and create a shallow depth of field using software algorithms. It is supported on wide, telephoto, and front-facing cameras in 1080p at 30 fps. Apple also added in iOS 15.1 the ability to record in Apple ProRes 4K at 30 fps and 1080p at 60 fps for models with at least 256 GB of storage, however base models with 128 GB of storage will be limited to ProRes recording at 1080p at 30 fps.
The camera features a macro mode that can focus as close as 2 centimeters from a subject. It utilizes the autofocus from the ultra-wide camera and is automatically enabled when close enough to a subject.
Software
iPhone 13 Pro and iPhone 13 Pro Max originally shipped with iOS 15. They received the iOS 16 update, which was released on September 12, 2022, and iOS 17, which was released on September 18, 2023. The Qi2 wireless charging standard was added to the iPhone 13 Pro and iPhone 13 Pro Max with the update to iOS 17.2. They are also compatible with iOS 18, released in late 2024.
Reception
The iPhone 13 Pro and iPhone 13 Pro Max were praised by reviewers and journalists for its marked improvement in battery life, improved set of cameras, and the addition of ProMotion to the iPhone. The devices have repeatedly been said to have "the best camera in a smartphone."
See also
Comparison of smartphones
History of the iPhone
List of iPhone models
Timeline of iPhone models
References
External links
– official site
Mobile phones introduced in 2021
Products and services discontinued in 2022
Mobile phones with 4K video recording
Mobile phones with multiple rear cameras
Discontinued flagship smartphones | IPhone 13 Pro | Technology | 1,727 |
45,049,970 | https://en.wikipedia.org/wiki/GLACIER%20%28refrigerator%29 | GLACIER (General Laboratory Active Cryogenic ISS Experiment Refrigerator) was designed and developed by University of Alabama at Birmingham (UAB) Center for Biophysical Sciences and Engineering (CBSE) for NASA Cold Stowage. Glacier was originally designed for use on board the Space Shuttle, but is now used for storing scientific samples on ISS in the EXpedite the PRocessing of Experiments to Space Station (EXPRESS) rack, and transporting samples to/from orbit via the SpaceX Dragon or Cygnus spacecraft. GLACIER is a double middeck locker equivalent payload designed to provide thermal control between +4 °C and -160 °C.
Development
In 2002 NASA began development of several spaceflight cold stowage systems to work in conjunction with the large (ISS Rack sized) ESA MELFI and Cryosystem freezers. One of these called for a system capable of rapidly freezing bagged, irregularly shaped science samples to below -160 °C at rates as fast as 1 °C/min for a 100 ml sample, of maintaining a complement of frozen samples without electrical power for several hours, and of fitting a compact double middeck locker format to enable transfer between the ISS and the Space Shuttle Orbiter cabin for transport to and from orbit. The combination of these goals presented several significant technical challenges, and prompted NASA to implement a two-phase development approach. In the first phase, two competing designs were matured through Preliminary Design Review (PDR) and completed functional demonstration of key freezer components. In the second phase, one of the designs was then developed through to the completed flight freezers. NASA awarded a contract to UAB CBSE to build the GLACIER freezer system in 2005. The first GLACIER freezers flew on STS-126 in 2008.
Description
GLACIER can use air or water to reject heat depending on the temperatures required for the scientific samples.
GLACIER can maintain temperatures from +4 to -95 °C using only air cooling, and can cool to -160 °C when connected to the water supply.
GLACIER's pump creates a low vacuum within the cool box in order to improve electrical and cooling efficiency.
GLACIER can accommodate up to 36 lb of sample mass.
Additional Cold Storage
GLACIER is one of multiple units available for storage on the ISS and/or transportation to and from the ISS.
Minus Eighty Degree Laboratory Freezer for ISS (MELFI): +4 °C to -80 °C
MERLIN (Microgravity Experiment Research Locker/Incubator): +48 °C to -20 °C
Polar (Research Refrigerator for ISS): +4 °C to -95 °C
See also
Scientific research on the ISS
International Space Station
SpaceX Dragon
References
Cryogenics
University of Alabama at Birmingham
Cooling technology | GLACIER (refrigerator) | Physics | 537 |
6,317,457 | https://en.wikipedia.org/wiki/258%20%28number%29 | 258 (two hundred [and] fifty-eight) is the natural number following 257 and preceding 259.
In mathematics
258 is:
a sphenic number
a nontotient
the sum of four consecutive prime numbers because 258 = 59 + 61 + 67 + 71
6³ + 6² + 6
an Ulam number
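Several of the properties above can be checked directly; a short sketch using SymPy (a sphenic number is a product of three distinct primes, and prime(17) through prime(20) are 59, 61, 67, 71):

```python
from sympy import factorint, prime

n = 258
factors = factorint(n)                                # {2: 1, 3: 1, 43: 1}
is_sphenic = len(factors) == 3 and all(e == 1 for e in factors.values())
print(is_sphenic)                                     # True: 258 = 2 * 3 * 43
print(sum(prime(k) for k in range(17, 21)) == n)      # True: 59 + 61 + 67 + 71
print(6**3 + 6**2 + 6 == n)                           # True
```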
References
Integers | 258 (number) | Mathematics | 66 |
574,775 | https://en.wikipedia.org/wiki/Abstraction%20layer | In computing, an abstraction layer or abstraction level is a way of hiding the working details of a subsystem. Examples of software models that use layers of abstraction include the OSI model for network protocols, OpenGL, and other graphics libraries, which allow the separation of concerns to facilitate interoperability and platform independence.
In computer science, an abstraction layer is a generalization of a conceptual model or algorithm, away from any specific implementation. These generalizations arise from broad similarities that are best encapsulated by models that express similarities present in various specific implementations. The simplification provided by a good abstraction layer allows for easy reuse by distilling a useful concept or design pattern so that situations, where it may be accurately applied, can be quickly recognized. Just composing lower-level elements into a construct doesn't count as an abstraction layer unless it shields users from its underlying complexity.
A layer is considered to be on top of another if it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Frequently abstraction layers can be composed into a hierarchy of abstraction levels. The OSI model comprises seven abstraction layers. Each layer of the model encapsulates and addresses a different part of the needs of digital communications, thereby reducing the complexity of the associated engineering solutions.
A famous aphorism of David Wheeler is, "All problems in computer science can be solved by another level of indirection." This is often deliberately misquoted with "abstraction" substituted for "indirection." It is also sometimes misattributed to Butler Lampson. Kevlin Henney's corollary to this is, "...except for the problem of too many layers of indirection."
Computer architecture
In a computer architecture, a computer system is usually represented as consisting of several abstraction levels such as:
software
programmable logic
hardware
Programmable logic is often considered part of the hardware, while the logical definitions are also sometimes seen as part of a device's software or firmware. Firmware may include only low-level software, but can also include all software, including an operating system and applications. The software layers can be further divided into hardware abstraction layers, physical and logical device drivers, repositories such as filesystems, operating system kernels, middleware, applications, and others. A distinction can also be made between low-level programming languages such as VHDL, machine language, and assembly language, and higher levels such as compiled languages, interpreters, and scripting languages.
Input and output
In the Unix operating system, most types of input and output operations are considered to be streams of bytes read from a device or written to a device. This stream of bytes model is used for file I/O, socket I/O, and terminal I/O in order to provide device independence. In order to read and write to a device at the application level, the program calls a function to open the device, which may be a real device such as a terminal or a virtual device such as a network port or a file in a file system. The device's physical characteristics are mediated by the operating system which in turn presents an abstract interface that allows the programmer to read and write bytes from/to the device. The operating system then performs the actual transformation needed to read and write the stream of bytes to the device.
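A minimal sketch of this byte-stream abstraction in Python: the same read interface covers a file on disk, an in-memory buffer, and a network socket (the host below is hypothetical, and the socket example requires network connectivity):

```python
import io
import socket

def read_some(stream, n: int = 64) -> bytes:
    """Device-independent code: any object exposing .read() will do."""
    return stream.read(n)

with open("example.bin", "wb") as f:          # create a small file to read back
    f.write(b"bytes on disk")
with open("example.bin", "rb") as f:
    print(read_some(f))

print(read_some(io.BytesIO(b"bytes in memory")))

sock = socket.create_connection(("example.org", 80))       # hypothetical host
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
print(read_some(sock.makefile("rb")))                      # wrap socket as a stream
sock.close()
```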
Graphics
Most graphics libraries such as OpenGL provide an abstract graphical device model as an interface. The library is responsible for translating the commands provided by the programmer into the specific device commands needed to draw the graphical elements and objects. The specific device commands for a plotter are different from the device commands for a CRT monitor, but the graphics library hides the implementation and device-dependent details by providing an abstract interface which provides a set of primitives that are generally useful for drawing graphical objects.
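As a sketch of the idea, the following defines an abstract drawing interface with two device-specific backends. The class and method names are illustrative, not the API of any real graphics library, and the plotter backend merely imitates HPGL-style pen-up/pen-down commands:

```python
from abc import ABC, abstractmethod

class Canvas(ABC):
    """Abstract graphical device: callers issue primitives, not device commands."""
    @abstractmethod
    def draw_line(self, x0: int, y0: int, x1: int, y1: int) -> None: ...

class CRTCanvas(Canvas):
    def draw_line(self, x0, y0, x1, y1):
        print(f"rasterize line ({x0},{y0})-({x1},{y1}) into the framebuffer")

class PlotterCanvas(Canvas):
    def draw_line(self, x0, y0, x1, y1):
        print(f"PU{x0},{y0};PD{x1},{y1};")   # pen up, move, pen down, draw

def draw_triangle(canvas: Canvas) -> None:
    """Device-independent drawing code: works with any backend."""
    canvas.draw_line(0, 0, 10, 0)
    canvas.draw_line(10, 0, 5, 8)
    canvas.draw_line(5, 8, 0, 0)

draw_triangle(CRTCanvas())
draw_triangle(PlotterCanvas())
```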
See also
Application programming interface (API)
Application binary interface (ABI)
Compiler, a tool for abstraction between source code and machine code
Hardware abstraction
Information hiding
Layer (object-oriented design)
Namespace violation
Protection ring
Operating system, an abstraction layer between a program and computer hardware
Software engineering
References
Computer architecture
Abstraction | Abstraction layer | Technology,Engineering | 845 |
53,697,574 | https://en.wikipedia.org/wiki/Teresa%20W.%20Haynes | Teresa W. Haynes (born 1953) is an American professor of mathematics and statistics at East Tennessee State University known for her research in graph theory and particularly on dominating sets.
Education and career
Haynes earned three degrees from Eastern Kentucky University: a B.S. in mathematics and education in 1975, M.A. in mathematics and education in 1978, and M.S. in mathematical sciences in 1984. She completed her Ph.D. in computer science in 1988 from the University of Central Florida. Her dissertation was On --Insensitive Domination and was supervised by Robert C. Brigham.
Haynes worked as a mathematics teacher from 1975 to 1978 and as a telephone engineer from 1978 to 1981. She became a mathematics and computer science instructor at Pikeville College in 1981, and moved to Prestonsburg Community College in 1983.
After completing her doctorate in 1988, she became an assistant professor at East Tennessee State, and she was promoted to full professor there in 1999.
Books
Haynes is the author of two books on dominating sets in graph theory.
References
External links
1953 births
Living people
American women mathematicians
Mathematicians from Tennessee
Eastern Kentucky University alumni
University of Central Florida alumni
University of Pikeville faculty
East Tennessee State University faculty
Graph theorists
21st-century American women | Teresa W. Haynes | Mathematics | 247 |
454,486 | https://en.wikipedia.org/wiki/Windows%209x | Windows 9x is a generic term referring to a line of discontinued Microsoft Windows operating systems from 1995 to 2000, which were based on the Windows 95 kernel and its underlying foundation of MS-DOS, both of which were updated in subsequent versions. The first version in the 9x series was Windows 95, which was succeeded by Windows 98 and then Windows Me, which was the third and last version of Windows on the 9x line, until the series was superseded by Windows XP.
Windows 9x is predominantly known for its use in home desktops. In 1998, Windows made up 82% of operating system market share.
The internal release number for versions of Windows 9x is 4.x. The internal versions for Windows 95, 98, and Me are 4.0, 4.1, and 4.9, respectively. Previous MS-DOS-based versions of Windows used version numbers of 3.2 or lower. Windows NT, which was aimed at professional users such as networks and businesses, used a similar but separate version number between 3.1 and 4.0. All versions of Windows from Windows XP onwards are based on the Windows NT codebase.
History
Windows prior to 95
The first independent version of Microsoft Windows, version 1.0, released on November 20, 1985, achieved little popularity. Its name was initially "Interface Manager", but Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to consumers. Windows 1.0 was not a complete operating system, but rather an "operating environment" that extended MS-DOS. Consequently, it shared the inherent flaws and problems of MS-DOS.
The second installment of Microsoft Windows, version 2.0, was released on December 9, 1987, and used the real-mode memory model, which confined it to a maximum of 1 megabyte of memory. In such a configuration, it could run under another multitasking system like DESQview, which used the 286 Protected Mode.
Microsoft Windows scored a significant success with Windows 3.0, released in 1990. In addition to improved capabilities given to native applications, Windows also allowed users to better multitask older MS-DOS-based software compared to Windows/386, thanks to the introduction of virtual memory.
Microsoft developed Windows 3.1, which included several minor improvements to Windows 3.0, primarily consisting of bugfixes and multimedia support. It also excluded support for Real mode, and only ran on an Intel 80286 or better processor. Windows 3.1 was released on April 6, 1992. In November 1993 Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in early 1992.
Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VMS at Digital Equipment Corporation. Microsoft hired him in August 1988 to create a successor to OS/2, but Cutler created a completely new system instead based on his MICA project at Digital. The first version of Windows NT, Windows NT 3.1, would be released on July 27, 1993 and used Windows 3.1's interface.
About a year before the development of Windows 3.1's successor (Windows 95, code-named Chicago) began, Microsoft announced at its 1991 Professional Developers Conference that it would be developing a successor to Windows NT code-named Cairo, which some viewed as succeeding both Windows NT and Windows 3.1's successor under one unified system. Microsoft publicly demonstrated Cairo at the 1993 Professional Developers Conference, complete with a demo system running Cairo for all attendees to use.
Based on the Windows NT kernel, Cairo was a next-generation operating system that was to bring many new technologies into Windows, including a new user interface with an object-based file system (the new user interface would officially debut with Windows 95 nearly 4 years later, while the object-based file system would later be adopted as WinFS during the development of Windows Vista). According to Microsoft's product plan at the time, Cairo was planned for release as late as July 1996.
However, it had become apparent that Cairo was a much more difficult project than Microsoft had anticipated, and the project was subsequently cancelled 5 years into development. A subset of features from Cairo was eventually added to Windows NT 4.0, released on August 24, 1996, albeit without the object file system. Windows NT and Windows 9x would not be truly unified until Windows XP nearly 5 years later, when Microsoft began to merge its consumer and business lines of Windows under a single brand name based on Windows NT.
Windows 95
After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system code-named Chicago. Chicago was designed to support 32-bit preemptive multitasking, a capability already available in OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A new GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped (and indeed after Cairo was cancelled five years into development).
Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance and development time. Additionally it was necessary to carry over design decisions from earlier versions of Windows for reasons of backwards compatibility, even if these design decisions no longer matched a more modern computing environment. These factors immediately began to impact the operating system's efficiency and stability.
Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995.
Microsoft went on to release five different versions of Windows 95:
Windows 95 – original release (RTM)
Windows 95 A – included Windows 95 OSR1 slipstreamed into the installation.
Windows 95 B – (OSR2) included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support.
Windows 95 B USB – (OSR2.1) included basic USB support.
Windows 95 C – (OSR2.5) included all the above features, plus IE 4.0. This was the last 95 version produced.
OSR2, OSR2.1, and OSR2.5 ("OSR" being an initialism for "OEM Service Release") were not released to the general public, rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity).
The first Microsoft Plus! add-on pack was sold for Windows 95.
Windows 98
On June 25, 1998, Microsoft released Windows 98, code-named "Memphis" during development. It included new hardware drivers and better support for the FAT32 file system which allows support for disk partitions larger than the 2 GB maximum accepted by Windows 95. The USB support in Windows 98 was more robust than the basic support provided by the OEM editions of Windows 95. It also introduces the controversial integration of the Internet Explorer 4 web browser into the Windows shell and File Explorer (then known as Windows Explorer at the time).
On June 10, 1999, Microsoft released Windows 98 Second Edition (also known as Windows 98 SE), an interim release whose notable features were the addition of Internet Connection Sharing and improved WDM audio and modem support. Internet Connection Sharing is a form of network address translation, allowing several machines on a LAN (Local Area Network) to share a single Internet connection. It also includes Internet Explorer 5 as opposed to Internet Explorer 4 in the original version. Windows 98 Second Edition also has certain improvements over the original release, and hardware support through device drivers was increased. Many minor problems present in the original release of Windows 98 were also found and fixed. These changes, among others, make it (according to many) the most stable release of the Windows 9x family, to the extent that some commentators used to say that Windows 98's beta version was more stable than Windows 95's final (gamma) version.
Like with Windows 95, Windows 98 received the Microsoft Plus! add-on in the form of Plus! 98.
Windows Me
On September 14, 2000, Microsoft introduced Windows Me (Millennium Edition; also known as Windows ME), which upgraded Windows 98 with enhanced multimedia and Internet features. Code-named "Millennium", it was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Windows XP (then code-named Whistler). It brought some features from the business-oriented Windows 2000 into the Windows 9x series, and introduced the first version of System Restore, which allowed users to revert their system state to a previous "known-good" point in the case of a system failure. Windows Me also introduced the first release of Windows Movie Maker and included Windows Media Player 7. Internet Explorer 5.5 came shipped with Windows Me. Many of the new features from Windows Me were also available as updates for older Windows versions such as Windows 98 via Windows Update. The role of MS-DOS was also greatly reduced compared to previous versions of Windows, with Windows Me no longer allowing real mode DOS to be accessed.
Windows Me initially gained a positive reception upon its release, but was later heavily criticized by users for its instability and unreliability, due to frequent freezes and crashes. Windows Me has been viewed by many as one of the worst operating systems of all time, both critically at the time and in retrospect. PC World was highly critical of Windows Me months after it was released, with their article infamously describing it as the "Mistake Edition" and placing it 4th in their "Worst Tech Products of All Time" feature in 2006. Consequently, many home users who were affected by Windows Me's instability (as well as those who viewed Windows Me negatively) ultimately stuck with the more reliable Windows 98 Second Edition for the remainder of Windows Me's lifecycle until the release of Windows XP in 2001. A small number of Windows Me owners moved over to the business-oriented Windows 2000 Professional during that same period.
The inability of users to easily boot into real mode MS-DOS like in Windows 95 and 98 led users to quickly figure out how to hack their Windows Me installations to provide this missing functionality back into the operating system.
Windows Me never received a dedicated Microsoft Plus! add-on like with Windows 95 and Windows 98.
Decline
The release of Windows 2000 marked a shift in the user experience between the Windows 9x series and the Windows NT series. Windows NT 4.0, while based on the Windows 95 interface, suffered from a lack of support for USB, Plug and Play and DirectX versions after 3.0, preventing its users from playing contemporary games. Windows 2000 on the other hand, while primarily made towards business and server users, featured an updated user interface and better support for both Plug and Play and USB, as well as including built-in support for DirectX 7.0. The release of Windows XP in late 2001 confirmed the change of direction for Microsoft, bringing the consumer and business operating systems together under Windows NT.
After the release of Windows XP, Microsoft stopped selling Windows 9x releases to end users (and later to OEMs) in the early 2000s. By March 2004, it was impossible to purchase any versions of the Windows 9x series.
End of support
Over time, support for the Windows 9x series ended. Windows 95 lost mainstream support on December 31, 2000, and extended support on December 31, 2001 (which also ended support for Windows versions prior to Windows 95 on the same day). Mainstream support for Windows 98 and Windows 98 Second Edition ended on June 30, 2002, and mainstream support for Windows Me ended on December 31, 2003. Microsoft then continued to support the Windows 9x series until July 11, 2006, when extended support ended for Windows 98, Windows 98 Second Edition (SE), and Windows Millennium Edition (Me) – 4 years after extended support for Windows 95 ended on December 31, 2001.
Microsoft DirectX, a set of standard gaming APIs, stopped being updated on Windows 95 at version 8.0a. It also stopped being updated on Windows 98 and Me after the release of Windows Vista in 2006, making DirectX 9.0c the last version of DirectX to support these operating systems.
Support for Microsoft Internet Explorer on all Windows 9x releases have also ended. Windows 95, Windows 98 and Windows Me all lost security patches for Internet Explorer when the respective operating systems reached their end of support date. Internet Explorer 5.5 with Service Pack 2 is the last version of Internet Explorer compatible with Windows 95, while Internet Explorer 6 with Service Pack 1 is the last version compatible with latter releases of Windows 9x (i.e. 98 and Me). While Internet Explorer 6 for Windows XP did receive security patches up until it lost support, this is not the case for IE6 under Windows 98 and Me. Due to its age, Internet Explorer 7, the first major update to Internet Explorer 6 in half a decade, was only available for Windows XP SP2 and Windows Vista.
The Windows Update website continued to be available for Windows 98, Windows 98 SE, and Windows Me after their end of support date; however, during 2011, Microsoft retired the Windows Update v4 website and removed the updates for Windows 98, Windows 98 SE, and Windows Me from its servers.
Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows Me (and XP) would end on July 31, 2019 (and for Windows 7 on January 22, 2020).
Current usage
The growing number of important updates unavailable after the end of support for these operating systems has slowly made Windows 9x even less practical for everyday use. Today, even open source projects such as Mozilla Firefox will not run on Windows 9x without major rework.
RetroZilla is a fork of Gecko 1.8.1 aimed at bringing "improved compatibility on the modern web" for versions of Windows as old as Windows 95 and NT 4.0. The latest version, 2.2, was released in February 2019 and added support for TLS 1.2.
Design
Kernel
Windows 9x is a series of monolithic 16/32-bit operating systems.
Like most operating systems, Windows 9x divides its memory into kernel space and user space. Although Windows 9x features some memory protection, it does not protect the first megabyte of memory from userland applications, for compatibility reasons. This area of memory contains code critical to the functioning of the operating system, and a faulty application that accidentally wrote into it could corrupt important operating system memory, usually resulting in some form of system error and halt. This made the region a persistent source of instability.
User mode
The user-mode parts of Windows 9x consist of three subsystems: the Win16 subsystem, the Win32 subsystem and MS-DOS.
Windows 9x/Me sets aside two 64 KiB memory regions for GDI and heap resources. Running multiple applications, applications with numerous GDI elements, or applications over a long span of time could exhaust these memory areas. If free system resources dropped below 10%, Windows would become unstable and likely crash.
Kernel mode
The kernel mode parts consist of the Virtual Machine Manager (VMM), the Installable File System Manager (IFSHLP), the Configuration Manager, and in Windows 98 and later, the WDM Driver Manager (NTKERN).
As a 32-bit operating system, Windows 9x gives each process a 4 GiB virtual address space, divided into a lower 2 GiB for applications and an upper 2 GiB for the kernel.
Registry
Like Windows NT, Windows 9x stores user-specific and configuration-specific settings in a large information database called the Windows registry. Hardware-specific settings are also stored in the registry, and many device drivers use the registry to load configuration data. Previous versions of Windows used files such as AUTOEXEC.BAT, CONFIG.SYS, WIN.INI, SYSTEM.INI and other files with an .INI extension to maintain configuration settings. As Windows became more complex and incorporated more features, .INI files became too unwieldy for the limitations of the then-current FAT filesystem. Backwards-compatibility with .INI files was maintained until Windows XP succeeded the 9x and NT lines.
Although Microsoft discourages using .INI files in favor of Registry entries, a large number of applications (particularly 16-bit Windows-based applications) still use .INI files. Windows 9x supports .INI files solely for compatibility with those applications and related tools (such as setup programs). The AUTOEXEC.BAT and CONFIG.SYS files also still exist for compatibility with real-mode system components and to allow users to change certain default system settings such as the PATH environment variable.
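The .INI format that Windows 9x retains for compatibility is a simple sectioned key–value syntax; for illustration, Python's standard configparser module reads it directly. The contents below are only an example in the spirit of SYSTEM.INI, not an exhaustive or authoritative listing of its keys:

```python
import configparser

# Example legacy-style settings, modeled loosely on SYSTEM.INI:
ini_text = """
[boot]
shell=Explorer.exe

[386Enh]
PagingDrive=C:
"""

config = configparser.ConfigParser()
config.read_string(ini_text)
print(config.get("boot", "shell"))                          # Explorer.exe
print(config.get("386Enh", "PagingDrive", fallback=None))   # C:
```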
The registry consists of two files: User.dat and System.dat. In Windows Me, Classes.dat was added.
Virtual Machine Manager
The Virtual Machine Manager (VMM) is the 32-bit protected mode kernel at the core of Windows 9x. Its primary responsibility is to create, run, monitor and terminate virtual machines.
The VMM provides services that manage memory, processes, interrupts and protection faults. The VMM works with virtual devices (loadable kernel modules, which consist mostly of 32-bit ring 0 or kernel mode code, but may include other types of code, such as a 16-bit real mode initialisation segment) to allow those virtual devices to intercept interrupts and faults to control the access that an application has to hardware devices and installed software. Both the VMM and virtual device drivers run in a single, 32-bit, flat model address space at privilege level 0 (also called ring 0). The VMM provides multi-threaded, preemptive multitasking. It runs multiple applications simultaneously by sharing CPU (central processing unit) time between the threads in which the applications and virtual machines run.
The VMM is also responsible for creating MS-DOS environments for system processes and Windows applications that still need to run in MS-DOS mode. It is the replacement for WIN386.EXE in Windows 3.x, and the file vmm32.vxd is a compressed archive containing most of the core VxDs, including VMM.vxd itself and ifsmgr.vxd (which facilitates file system access without the need to call the real-mode file system code of the DOS kernel).
Software support
Unicode
Partial support for Unicode can be installed on Windows 9x through the Microsoft Layer for Unicode.
File systems
Windows 9x does not natively support NTFS or HPFS; however, third-party solutions are available for Windows 9x that allow read-only access to NTFS volumes. Early versions of Windows 95 did not support FAT32.
Like Windows for Workgroups 3.11, Windows 9x provides support for 32-bit file access based on IFSHLP.SYS. Unlike Windows 3.x, Windows 9x has support for the VFAT file system, allowing file names with a maximum of 255 characters instead of having 8.3 filenames.
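For illustration, the 8.3 alias that VFAT keeps alongside a long file name can be approximated in a few lines. This is a simplified sketch only; the real algorithm has additional rules for character substitution, collision handling, and hash-based tails:

```python
import re

def short_name(long_name: str, seq: int = 1) -> str:
    """Simplified 8.3 alias in the style of VFAT (the real algorithm has more rules)."""
    name, dot, ext = long_name.rpartition(".")
    if not dot:                            # no extension present
        name, ext = long_name, ""
    clean = re.sub(r"[^A-Z0-9]", "", name.upper())
    base = f"{clean[:6]}~{seq}"            # e.g. "Program Files" -> "PROGRA~1"
    return base + (f".{ext.upper()[:3]}" if ext else "")

print(short_name("Program Files"))         # PROGRA~1
print(short_name("My Document.html"))      # MYDOCU~1.HTM
```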
Event logging and tracing
Windows 9x has no support for event logging and tracing or error reporting that the Windows NT family of operating systems has, although software like Norton CrashGuard can be used to achieve similar capabilities on Windows 9x.
Security
Windows 9x is designed as a single-user system. Thus, the security model is much less effective than the one in Windows NT. One reason for this is the FAT file systems (including FAT12/FAT16/FAT32), which are the only ones that Windows 9x officially supports, though Windows NT also supports FAT12 and FAT16 (but not FAT32, which would not be supported until Windows 2000) and Windows 9x can be extended to read and write NTFS volumes using third-party Installable File System drivers. FAT systems have very limited security; every user that has access to a FAT drive also has access to all files on that drive. The FAT file systems provide neither the access control lists nor the file-system-level encryption of NTFS.
Some operating systems that were available at the same time as Windows 9x are either multi-user or have multiple user accounts with different access privileges, which allows important system files (such as the kernel image) to be immutable under most user accounts. In contrast, while Windows 95 and later operating systems offer the option of having profiles for multiple users, they have no concept of access privileges, making them roughly equivalent to a single-user, single-account operating system; this means that all processes can modify all files on the system that aren't open, in addition to being able to modify the boot sector and perform other low-level hard drive modifications. This enables viruses and other clandestinely installed software to integrate themselves with the operating system in a way that is difficult for ordinary users to detect or undo. The profile support in the Windows 9x family is meant for convenience only; unless some registry keys are modified, the system can be accessed by pressing "Cancel" at login, even if all profiles have a password. Windows 95's default login dialog box also allows new user profiles to be created without having to log in first.
Users and software can render the operating system unable to function by deleting or overwriting important system files from the hard disk. Users and software are also free to change configuration files in such a way that the operating system is unable to boot or properly function. This phenomenon is not exclusive to Windows 9x; many other operating systems are also susceptible to these vulnerabilities, either by viruses, malware or by the user’s consent.
Installation software often replaced and deleted system files without properly checking if the file was still in use or of a newer version. This created a phenomenon often referred to as DLL hell. Windows Me introduced System File Protection and System Restore to handle common problems caused by this issue.
Network sharing
Windows 9x offers share-level access control security for file and printer sharing as well as user-level access control if a Windows NT-based operating system is available on the network. In contrast, Windows NT-based operating systems offer only user-level access control but integrated with the operating system's own user account security mechanism.
Hardware support
Drivers
Device drivers in Windows 9x can be virtual device drivers or (starting with Windows 98) WDM drivers. VxDs usually have the filename extension .vxd or .386, whereas WDM compatible drivers usually use the extension .sys. The 32-bit VxD message server (msgsrv32) is a program that is able to load virtual device drivers (VxDs) at startup and then handle communication with the drivers. Additionally, the message server performs several background functions, including loading the Windows shell (such as Explorer.exe or Progman.exe).
Another type of device driver is the .DRV driver. These drivers are in New Executable format, are loaded in user mode, and are commonly used to control devices such as multimedia devices. To provide access to these devices, a dynamic link library is required (such as MMSYSTEM.DLL).
Windows 9x retains backwards compatibility with many drivers made for Windows 3.x and MS-DOS. Using MS-DOS drivers can limit performance and stability due to their use of conventional memory and need to run in real mode which requires the CPU to switch in and out of protected mode.
Drivers written for Windows 9x are loaded into the same address space as the kernel. This means that drivers can by accident or design overwrite critical sections of the operating system. Doing this can lead to system crashes, freezes and disk corruption. Faulty operating system drivers were a source of instability for the operating system.
Other monolithic and hybrid kernels, like Linux and Windows NT, are also susceptible to malfunctioning drivers impeding the kernel's operation.
Often the software developers of drivers and applications had insufficient experience with creating programs for the 'new' system, thus causing many errors which have been generally described as "system errors" by users, even if the error is not caused by parts of Windows or DOS. Microsoft has repeatedly redesigned the Windows Driver architecture since the release of Windows 95 as a result.
CPU and bus technologies
Windows 9x has no native support for hyper-threading, Data Execution Prevention, symmetric multiprocessing, APIC, or multi-core processors.
Windows 9x has no native support for SATA host bus adapters (and neither do Windows 2000 nor Windows XP, for that matter) or for USB drives (except for Windows Me). There are, however, many SATA-I controllers for which Windows 98/Me drivers exist (Windows 2000 and Windows XP likewise gained SATA support via third-party drivers), and USB mass storage support has been added to Windows 95 OSR2 and Windows 98 through third-party drivers. Hardware driver support for Windows 98/Me began to decline in 2005, most notably with motherboard chipsets and video cards.
Early versions of Windows 95 had no support for USB or AGP acceleration (and Windows 95 RTM lacked infrared support). Windows 95 had preliminary support for ATAPI CD-ROMs, albeit with a buggy ATAPI implementation. Windows 95 prior to OSR2 also had buggy support for processors implementing MMX, as well as for processors based on the P6 microarchitecture.
MS-DOS
Windows 95 was able to reduce the role of MS-DOS in Windows much further than had been done in Windows 3.1x and earlier. According to Microsoft developer Raymond Chen, MS-DOS served two purposes in Windows 95: as the boot loader, and as the 16-bit legacy device driver layer.
When Windows 95 started up, MS-DOS loaded, processed CONFIG.SYS, launched COMMAND.COM, ran AUTOEXEC.BAT and finally ran WIN.COM. The WIN.COM program used MS-DOS to load the virtual machine manager, read SYSTEM.INI, load the virtual device drivers, and then turn off any running copies of EMM386 and switch into protected mode. Once in protected mode, the virtual device drivers (VxDs) transferred all state information from MS-DOS to the 32-bit file system manager, and then shut off MS-DOS. These VxDs allow Windows 9x to interact with hardware resources directly, providing low-level functionality such as 32-bit disk access and memory management. All future file system operations would get routed to the 32-bit file system manager. In Windows Me, WIN.COM was no longer executed during the startup process; instead, IO.SYS proceeded directly to executing VMM32.VXD.
The second role of MS-DOS (as the 16-bit legacy device driver layer) was as a backward compatibility tool for running DOS programs in Windows. Many MS-DOS programs and device drivers interacted with DOS in a low-level way, for example, by patching low-level BIOS interrupts such as int 13h, the low-level disk I/O interrupt. When a program issued an int 21h call to access MS-DOS, the call would go first to the 32-bit file system manager, which would attempt to detect this sort of patching. If it detects that the program has tried to hook into DOS, it will jump back into the 16-bit code to let the hook run. A 16-bit driver called IFSMGR.SYS would previously have been loaded by CONFIG.SYS, the job of which was to hook MS-DOS first before the other drivers and programs got a chance, then jump from 16-bit code back into 32-bit code, when the DOS program had finished, to let the 32-bit file system manager continue its work. According to Windows developer Raymond Chen, "MS-DOS was just an extremely elaborate decoy. Any 16-bit drivers and programs would patch or hook what they thought was the real MS-DOS, but which was in reality just a decoy. If the 32-bit file system manager detected that somebody bought the decoy, it told the decoy to quack."
MS-DOS Virtualization
Windows 9x can run MS-DOS applications within itself using a method called "Virtualization", where an application is run on a Virtual DOS machine.
MS-DOS Mode
Windows 95 and Windows 98 also offer backwards compatibility for DOS applications in the form of being able to boot into a native "DOS Mode" (MS-DOS can be booted without booting Windows and without putting the CPU in protected mode). Through Windows 9x's memory managers and other post-DOS improvements, overall system performance and functionality are improved. Some old applications or games may not run properly in the virtual DOS environment within Windows and require real DOS Mode.
Having a command line mode outside of the GUI also offers the ability to fix certain system errors without entering the GUI. For example, if a virus is active in GUI mode it can often be safely removed in DOS mode, by deleting its files, which are usually locked while infected in Windows. Similarly, corrupted registry files, system files or boot files can be restored from real mode DOS. Windows 95 and Windows 98 can be started from DOS Mode by typing 'WIN' at the command prompt and then hitting "Enter", akin to earlier versions of Windows such as Windows 3.1.
User interface
Users can control a Windows 9x-based system through a command-line interface (or CLI) or a graphical user interface (or GUI). The default mode for Windows is usually the graphical user interface, whereas the CLI is available through MS-DOS windows. The GUI provides a means to control the placement and appearance of individual application windows, and interacts with the window system.
The GDI, which is a part of the Win32 and Win16 subsystems, is also a module that is loaded in user mode, unlike Windows NT where the GDI is loaded in kernel mode. Alpha compositing and therefore transparency effects, such as fade effects in menus, are not supported by the GDI in Windows 9x, unlike with Windows NT releases since Windows 2000.
Windows Explorer is the default user interface for the GUI; however, a variety of additional Windows shell replacements exist. Other GUIs include LiteStep, bbLean and Program Manager.
In popular culture
The sheer popularity of the Windows 9x series led to several web-based projects being created in the 2010s that aimed to replicate the look and feel of Windows 9x (and indeed of an actual operating system as a whole) in a web browser while also invoking nostalgia.
Windows 93 (stylized as "WINDOWS93" in the title) is a web-based parody site created by two French musicians and programmers who go by the names of jankenpopp and Zombectro. Designed to look and feel like an actual operating system, it is also a parody of the Windows 9x series. It features several web applications which reference various internet memes from the late 1990s up to the early 2000s.
EmuOS is another web-based site that aims to replicate the look and feel of Windows 9x as a whole, featuring 3 themes based on all major Windows 9x releases starting from Windows 95 up to Windows Me. It was created by Emupedia, a video game preservation and computer history community-based site, and was designed to play retro games and applications within a web browser. The aforementioned Windows 93 parody site is also featured.
Windows 98 has been recreated in web-based format under the name 98.js (also known as Windows 98 Online). It featured web-based versions of several classic Windows applications.
See also
Comparison of operating systems
Architecture of Windows 9x
MS-DOS 7
References
External links
Computing platforms
9x
Discontinued versions of Microsoft Windows | Windows 9x | Technology | 6,696 |
25,178,379 | https://en.wikipedia.org/wiki/Hydrographer%20of%20the%20Navy | The Hydrographer of the Navy is the principal hydrographical Royal Naval appointment. From 1795 until 2001, the post was responsible for the production of charts for the Royal Navy, and around this post grew the United Kingdom Hydrographic Office (UKHO).
In 2001, the post was disassociated from UKHO, and the Hydrographer of the Navy is now a title bestowed upon the current captain—hydrography and meteorology—on the staff of the Devonport Flotilla at HMNB Devonport.
History
Before the establishment of the post, captains of Royal Navy ships were responsible for the provision of their own charts. In practice this meant that ships often sailed with inadequate information for safe navigation, and that when new areas were surveyed, the data rarely reached all those who needed it. The Admiralty appointed Alexander Dalrymple as hydrographer on 12 August 1795, with a remit to gather and distribute charts to HM Ships. Within a year existing charts had been collated, and the first catalogue published. It was five years before the first chart—of Quiberon Bay in Brittany—was produced by the Hydrographer.
Under Dalrymple's successor, Captain Thomas Hurd, Admiralty charts were sold to the general public, and by 1825, there were 736 charts listed in the catalogue. In 1829, the first Sailing Directions were published, and in 1833, under Rear-Admiral Sir Francis Beaufort—of the eponymous Beaufort scale—the tide tables were first published. Notices to Mariners came out in 1834, allowing for the timely correction of charts already in use. Beaufort was certainly responsible for a step change in output; by the time he left the office in 1855, the Hydrographic Office had a catalogue of nearly 2,000 charts and was producing over 130,000 charts, of which about half were provided to the Royal Navy and half sold.
In 1939, on the outbreak of World War II, the Hydrographic Office moved to Taunton, and the post of hydrographer moved with it. In 2001, a chief executive was appointed to run the United Kingdom Hydrographic Office as a profit-making agency of the British government, and at this time the roles of National Hydrographer and Hydrographer of the Navy were divided. The title of hydrographer devolved to Captain (hydrography and meteorology), a senior officer on the staff of the Commodore of the Devonport Flotilla, and the senior Royal Navy officer within the HM branch. The post has since been renamed Captain (HM Ops), but continues to carry the title Hydrographer of the Navy.
List of hydrographers
1795–1808: Alexander Dalrymple
1808–1823: Captain Thomas Hurd
1823–1829: Rear Admiral Sir Edward Parry
1829–1855: Rear Admiral Sir Francis Beaufort
1855–1863: Rear Admiral John Washington
1863–1874: Vice Admiral Sir George Richards
1874–1884: Captain Sir Frederick Evans
1884–1904: Rear Admiral Sir William Wharton
1904–1909: Rear Admiral Sir Arthur Mostyn Field
1909–1914: Rear Admiral Sir Herbert Purey-Cust
1914–1919: Rear Admiral Sir John Parry
1919–1924: Vice Admiral Sir Frederick Learmonth
1924–1932: Vice Admiral Sir Percy Douglas
1932–1945: Vice Admiral Sir John Edgell
1945–1950: Rear Admiral Arthur Norris Wyatt
1950–1955: Vice Admiral Sir Archibald Day
1955–1960: Rear Admiral Kenneth Collins
1960–1966: Rear Admiral Sir Edmund Irving
1966–1971: Rear Admiral Steve Ritchie
1971–1975: Rear Admiral Geoffrey Hall
1975–1985: Rear Admiral Sir David Haslam
1985–1990: Rear Admiral Roger Morris
1990–1994: Rear Admiral John Myres
1994–1996: Rear Admiral Sir Nigel Essenhigh
1996–2001: Rear Admiral John Clarke
2001–2003: Captain Mike Barritt
2003–2005: Captain David Lye
2005–2007: Captain Ian Turner
2007–2010: Captain Robert Stewart
2010–2012: Captain Vaughan Nail
2012–2013: Captain Stephen Malcolm
2013–2016: Captain David Robertson
2016–2017: Captain Matt Syrett
2017–2019: Captain Gary Hesling
2019–2021: Captain Derek Rae
2021–2023: Commander Mathew J Warren
2023–present: Rear Admiral Angus Essenhigh
Notes
References
External links
Hydrography
National hydrographic offices
Royal Navy appointments | Hydrographer of the Navy | Environmental_science | 858 |
1,988,772 | https://en.wikipedia.org/wiki/Tatung%20Company | Tatung Company is a multinational corporation established in 1918 and headquartered in Zhongshan, Taipei, Taiwan.
Tatung Company holds three business groups, which together include eight business units: Industrial Appliance BU, Motor BU, Wire & Cable BU, Solar BU, Smart Meter BU, System Integration BU, Appliance BU, and Advanced Electronics BU. As a conglomerate, Tatung's investees are involved in major industries such as optoelectronics, energy, system integration, industrial systems, branded retail channels, and asset development.
History
Xie Zhi Business Enterprise, the forerunner of Tatung Company, was established in 1918 by Shang-Zhi Lin. It was involved in high-profile construction projects, including the Tamsui River embankment project and the Executive Yuan building.
In 1939, Tatung Iron Works was established as the company ventured into iron and steel manufacturing. Following the arrival of the ROC administration in 1945, Tatung Iron Works was renamed to Tatung Steel and Machinery Manufacturing Company. The company began mass production of electrical motors and appliances 10 years later in 1949.
In 1962 the company became publicly listed on the Taiwan Stock Exchange, and was renamed to Tatung Company in 1968. A year later, Tatung began production of color TVs, and adopted the "Tatung Boy" mascot, which became a Taiwanese cultural symbol.
Timeline
1970
Revenues exceeded NT$2.2 billion, making Tatung Taiwan's foremost private company.
1972
W. S. Lin, the grandson of Shang-Zhi Lin, was appointed as president of Tatung. Shortly thereafter he was implicated in a case of embezzlement at Tatung which would take more than ten years to litigate.
1977
Participated in the Ten Major Construction Projects with the construction of a slag treatment facility for China Steel and provision for Chiang Kai-shek International Airport's power control station
2000
Chunghwa Picture Tubes was listed on the OTC market
2001
Global Management Division set up at the company's headquarters
Chunghwa Picture Tubes was listed on the Taiwan Stock Exchange
2003
Mass production of LCD and PDP TVs
Lin Cheng-yuan resigns as head of Chunghwa Picture Tubes.
2004
Set up a new subsidiary Toes Opto-Mechatronics Company
Established SeQual Technologies Co. to produce the oxygen generator for clinical therapy and home health care uses
Established Tatung Compressors (Zhongshan) Co. in China
2005
Consolidated Tatung's Desktop PC Business Unit with Elitegroup Computer Systems (ECS), making Tatung the largest shareholder of ECS
Established Tatung Wire & Cable Technology (Wujiang) Co. in China
Death of T.S. Lin
2006
Mr. W. S. Lin was elected as chairman and president of Tatung
Green Energy Technology started trading on the emerging stock market
2007
Forward Electronics invested in Apollo Solar Energy Co. to expand its scope into the market of solar cell module
2008
Green Energy Technology was listed on the Taiwan Stock Exchange on 25 January
Established Tatung Electric Technology (Vietnam) Co. for the manufacturing and sales of wires and cables
2009
Tatung University launched a WiMAX network, the first wireless broadband network of its kind built on a university campus
To help the victims of Typhoon Morakot, Tatung initiated a Special Service Project in which 1,000 technicians and 70 service trucks were mobilized in and around the affected areas to help repair damaged home appliances. The employees of Tatung Group, together with the staff of Tatung University and Tatung High School, also donated one day's earnings, totaling NT$10 million, to those in need.
2010
New Energy Business Unit set up a crystal growth center to manufacture multicrystalline silicon bricks, a milestone for HQ's involvement in the crystal-related business
Luxury condominium, "Tatung Noble Residences (Phase II of Tatung Tomorrow World)", the 2nd project in Nangang by Shan Chih Asset Development, was under construction
2011
Mrs. W.Y. Lin was appointed President of Tatung
2012
Lithium iron phosphate cathode material by Tatung Fine Chemicals successfully entered the Japanese energy-storage market
2015
Lin Cheng-yuan, indicted in the US on charges of price fixing.
2020
Lin family loses control of the Tatung board of directors. Lin Wen-yuan (no relation) appointed as chairman of the new board.
2021
Lu Ming-kuang, Lin Wen-yuan's successor as chairman, resigns the position.
Affiliations
Tatung F.C. (until 2022)
Tatung University
Tatung Institute of Commerce and Technology
Tatung High School
Sponsorship
Tatung sponsored Wolverhampton Wanderers Football Club, an English Football League side, from 1982 to 1985.
See also
List of companies of Taiwan
References
1918 establishments in Taiwan
Computer hardware companies
Computer systems companies
Electronics companies of Taiwan
Electronics companies established in 1918
Heating, ventilation, and air conditioning companies
Home appliance manufacturers of Taiwan
Manufacturing companies based in Taipei
Taiwanese brands
Videotelephony | Tatung Company | Technology | 1,008 |
73,540,660 | https://en.wikipedia.org/wiki/Male%20heir | A male heir (sometimes heirs male)—usually describing the first-born son (primogeniture) or oldest surviving son of a family—has traditionally been the recipient of the residue of the estate, titles, wealth and responsibilities of his father in a patrilineal system. This system may vary by region but has ancient, perhaps prehistoric, origins, and appears in the Code of Hammurabi: "Since daughters marry strangers and thereby cut themselves off from their family, only sons inherit the paternal estate. It is they who perpetuate the family name, and preserve the ancestral property."
Absence or inadequacy of a male heir has thus been periodically problematic, resulting in succession crises, corporate upheaval, and the occasional war. The presence or absence of a male heir may alter the decision-making patterns of fathers.
See also
Heir and spare
Son preference
Birth order
Order of succession
Line of hereditary succession
Heir apparent
Estate planning
Historical inheritance systems
Partible inheritance
Patrilineality
Patronymic
Human Y-chromosome DNA haplogroup
Salic law
References
Heirs to the throne
Patriarchy
Inheritance
Legal history
Real property law
Succession
Hereditary monarchy
Order of succession
Kinship and descent
Sibling | Male heir | Biology | 242 |
27,367,408 | https://en.wikipedia.org/wiki/Tibor%20Erdey-Gr%C3%BAz | Tibor Erdey-Grúz (27 October 1902 – 16 August 1976) was a Hungarian chemist and politician, who served as Minister of Higher Education between 1952 and 1953 and after that as Minister of Education from 1953 to 1956.
References
Magyar Életrajzi Lexikon
1902 births
1976 deaths
People from Budapest
Members of the Hungarian Working People's Party
Ministers of education of Hungary
Members of the National Assembly of Hungary (1953–1958)
20th-century Hungarian chemists
Electrochemists
Members of the Hungarian Academy of Sciences
Members of the German Academy of Sciences at Berlin | Tibor Erdey-Grúz | Chemistry | 116 |
28,707,577 | https://en.wikipedia.org/wiki/Motorola%20APCOR | The Motorola APCOR (Advanced Portable Coronary Observation Radio) was a 12-watt paramedic telemetry radio produced by Motorola during the 1970s and 1980s. It could transmit voice and EKG simultaneously and was battery operated. At least three versions of the Motorola APCOR were produced: the first was similar to the Biophone; the second was significantly smaller and white; the third was all white.
There was also a 1 watt version used in conjunction with a vehicle mounted repeater. Additionally many units were optionally equipped with a keypad on the right side of the carry handle. This was used to send DTMF tones to the repeater to turn it on and change channels. The feature was known as "steering".
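DTMF signalling of this kind pairs one low-frequency and one high-frequency tone per key. As a rough illustration only, the sketch below synthesises a standard DTMF tone pair in Python using the usual telephony frequencies; it shows generic DTMF generation, not the APCOR's actual signalling firmware.

import math

# Generic DTMF tone-pair synthesis (textbook telephony values, not Motorola firmware).
ROW = {'1': 697, '2': 697, '3': 697,
       '4': 770, '5': 770, '6': 770,
       '7': 852, '8': 852, '9': 852,
       '*': 941, '0': 941, '#': 941}
COL = {'1': 1209, '4': 1209, '7': 1209, '*': 1209,
       '2': 1336, '5': 1336, '8': 1336, '0': 1336,
       '3': 1477, '6': 1477, '9': 1477, '#': 1477}

def dtmf_samples(key, rate=8000, duration=0.1):
    """Return the summed low/high tones for `key`, sampled at `rate` Hz."""
    f_lo, f_hi = ROW[key], COL[key]
    return [0.5 * (math.sin(2 * math.pi * f_lo * t / rate) +
                   math.sin(2 * math.pi * f_hi * t / rate))
            for t in range(int(rate * duration))]

tone = dtmf_samples('5')   # digit 5 = 770 Hz + 1336 Hz
print(len(tone), "samples")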
The Motorola APCOR was discontinued in the 1980s and was soon phased out in favor of cellular phones that could transmit EKG and voice. Motorola mistakenly thought that the LA County Fire Department's code was for the radios to be orange, and created an orange radio for its first version. The radio had a Motorola MX 300 embedded inside it and had 10 MED channels. It was slightly ahead of the Biophone in portability, but both had the same specifications. The APCOR became very popular during the late 1970s and 1980s; it was widely adopted by fire departments and emergency medical services agencies across the United States and had a significant impact on pre-hospital medical care.
Gallery
See also
Biophone
References
https://web.archive.org/web/20111005020443/http://forums.qrz.com/archive/index.php/t-19508.html
https://books.google.com/books?id=7dlhz8It-LYC&dq=motorola+apcor&pg=PA4
http://weblink.ci.plainview.tx.us/WebLink8/DocView.aspx?id=8515&page=2&dbid=0
http://arlingtonfirejournal.blogspot.com/2005/03/first-medics.html
External links
http://www.usmra.com/guardian/history_of_paramedics.htm
https://web.archive.org/web/20110711082756/http://www.general-devices.com/files/learning_pdf/From_BioCom_to_Bluetooth.pdf
https://archive.today/20130414193331/http://www.emsmuseum.org/virtual-museum/equipment/articles/398790-1974-Paramedic-UHF-Telemetry-Radio-Development
Medical equipment
Telephony equipment
Telemetry
Apcor | Motorola APCOR | Biology | 590 |
40,624,233 | https://en.wikipedia.org/wiki/Postmenopausal%20confusion | Postmenopausal confusion, also commonly referred to as postmenopausal brain fog, is a group of symptoms of menopause in which women report problems with cognition at a higher frequency during postmenopause than before.
Multiple studies on cognitive performance following menopause have reported noticeable declines in greater than 60% of the patients. The common issues presented included impairments in reaction time and attention, difficulty recalling numbers or words, and forgetting reasons for involvement in certain behaviors. Association between subjective cognitive complaints and objective measures of performance show a significant impact on health-related quality of life for postmenopausal women.
Treatment primarily involves symptom management through non-pharmacological treatment strategies. This includes involvement in physical activity and following medically supervised diets, especially those that contain phytoestrogens or resveratrol. Pharmacological interventions in treating postmenopausal confusion are currently being researched. Hormone replacement therapy (HRT) is currently not indicated for the treatment of postmenopausal confusion due to inefficacy. The use of HRT for approved indications has identified no significant negative effect on postmenopausal cognition.
Although much of the literature references women, all people who undergo menopause, including those who do not self-identify as women, may experience symptoms of postmenopausal confusion.
History
Research on menopause as a whole declined with the end of the Women's Health Initiative (WHI) studies, but research on the treatment of symptoms associated with menopause—especially the treatment of cognitive decline—continues. The Study of Women's Health Across the Nation (SWAN), first started in 1996, continues to publish progress reports which include cognitive symptoms associated with menopausal transition, including those in postmenopause. In a recent progress report, SWAN indicated, "Approximately 60% of midlife women report problems with memory during the [menopause transition], yet studies of measured cognitive performance during the transition are rare."
Although there are many relationships between hormone levels in postmenopause and cognitive function, the previously favored HRT therapies (estrogen therapies) have been shown to be ineffective in specifically treating postmenopausal confusion. The use of hormone replacement therapies, once considered detrimental to cognition in postmenopausal women, has now been shown to have no negative effect when used properly for approved indications. There are no conclusive studies to support any pharmacological agents, but several potential drug candidates are still being explored.
Presentation
Menopause is a natural decline in the ovarian function of women who reach the age between 45 and 54 years. "About 25 million women pass through menopause worldwide each year, and it has been estimated that, by the year 2030, the world population of menopausal and postmenopausal women will be 1.2 billion, with 47 million new entrants each year."
Postmenopause begins immediately following menopause (one year after the final menstrual cycle). Postmenopausal confusion is often manifested through the following cognitive symptoms: memory problems, forgetfulness, and poor concentration. Confusion which is otherwise unexplained and coincides with the onset of postmenopause may be postmenopausal confusion.
Causes
Risk factors
Hypertension
A 2019 literature review identified hypertension and history of pre-eclampsia as significant risk factors for the accelerated decline of cognitive function in women during midlife. Although the mechanism remains unclear, neuroimaging studies included in the review found that those with hypertension have evident structural changes in their brains; specifically, gray matter brain volume decreased and white matter hyperintensity volume increased.
Atherosclerosis and comorbidities
Atherosclerosis and comorbidities such as hyperlipidemia and diabetes mellitus have long been considered risk factors for cognitive decline because they have the propensity to cause the formation of amyloid plaques (aggregates of misfolded, deleterious proteins) in the brain.
Insomnia
Many postmenopausal women report insomnia. Studies have shown "associations between poor sleep quality and cognitive decline" in postmenopausal women as those with insufficient sleep, or with difficulty falling or staying asleep, reported decreased cognitive performance including "verbal memory, attention, and general cognition."
Depression
There is evidence linking depression and cognitive decline in postmenopausal women. Research suggests that increased cortisol levels from depressive episodes may affect the hippocampus, the area of the brain responsible for episodic memory. Studies have also shown a correlation between depression and decreased cognitive performance including "processing speed, verbal memory, and working memory" in postmenopausal women.
Hot flashes
There are studies indicating a correlation between frequency of hot flashes in postmenopausal women and a deficit in verbal memory performance. It is suggested that faster blood flow in the brain or higher cortisol levels from hot flashes may cause changes in the brain and affect information processing and memory.
Surgical menopause
A 2019 systematic review and meta-analysis identified surgical menopause, especially when performed at or before the age of 45, as a substantial risk factor for cognitive decline and dementia.
Cardiac procedures
Cardiac procedures such as invasive cerebral and coronary angiography, coronary artery bypass graft surgery (CABG), surgical aortic valve replacement, and transcatheter aortic valve replacement (TAVR) have been found to increase the risk of cognitive decline in females, as these procedures have been found to increase the incidence of brain lesions.
Mechanism
The mechanism of postmenopausal confusion is poorly understood due to simultaneous aging-related physiological changes, as well as differential diagnoses presenting with similar symptoms. Research remains ongoing.
Treatment
Overview
There are pharmacological and non-pharmacological considerations in improving the symptoms of postmenopausal confusion. Currently, no pharmacological agents are indicated to treat postmenopausal confusion, but research remains ongoing. Non-pharmacological strategies to manage postmenopausal confusion symptoms are utilized, with focus on diet and exercise.
Pharmacological
Hormone therapy
Hormone therapy, also known as estrogen therapy, was previously a common treatment for postmenopausal confusion. However, more recent research indicates that hormone therapy is not an effective treatment for postmenopausal cognitive symptoms. A 2008 Cochrane review of 16 trials concluded that there is a body of evidence that suggests that hormone replacement therapy is unable to prevent cognitive decline or maintain cognitive function in healthy postmenopausal women when given over a short or long period of time. Conversely, studies have also suggested that the use of hormone replacement therapy are unlikely to have negative cognitive effects when used for their approved indications.
Previous research suggested that increases in blood flow to the hippocampus and temporal lobe occurred from hormone therapy, improving postmenopausal confusion symptoms. More recent research no longer supports this and is inconclusive as to the true effects of estrogen on hippocampal volume: results range from improved cognition and maintained hippocampal volume when hormone therapy is administered during menopause to no obvious benefit.
Research focusing on adiponectin (ADPN) has yielded positive results in the development of possible treatments for postmenopausal confusion. A study has shown an association between higher levels of ADPN and increased cognitive performance in postmenopausal women. However, an ADPN receptor agonist has yet to be discovered.
Psychostimulant therapy
There is ongoing research regarding the efficacy of psychostimulant drugs such as lisdexamphetamine (Vyvanse) and atomoxetine (Strattera) in treating postmenopausal and menopausal confusion.
Non-pharmacological
Diet
Individuals play an important role in maintaining their cognitive health. One way to achieve this is by the promotion of healthy nutrition. In particular, the Mediterranean diet, defined as being low in saturated fat and high in vegetable oils, showed improvement in aspects of cognitive function. This diet consists of low intake of sweets and eggs, moderate intakes of meat and fish, dairy products and red wine, and high intake of leafy green vegetables, pulses/legumes and nuts, fruits, cereal, and cold pressed extra virgin olive oil. Further analysis concluded that the Mediterranean diet supplemented by olive oil resulted in better cognition and memory as compared to the Mediterranean diet plus mixed nuts combination.
Supplementation
Soy isoflavones (SIF), a type of phytoestrogen found in soybeans, fruits and nuts, have been shown to improve cognitive outcomes in women who have been postmenopausal for less than 10 years. This suggests that the initiation of SIF may have a critical window of opportunity when begun at a younger age in postmenopausal women. In addition to improved cognitive functions and visual memory, no evidence of harm from SIF supplementation was revealed within the dose ranges tested in multiple trials.
Analyses of multiple randomized controlled trials have brought attention to black cohosh and red clover (which contain phytoestrogens) and their potential as efficacious treatments of menopausal symptoms. Black cohosh did not reveal any evidence of harm, but the lack of good evidence means its safety cannot be firmly concluded. Overall, the results suggested that neither botanical treatment provided any cognitive benefits.
Resveratrol, another bioactive compound derived from plants, has also shown to improve cognitive performance in postmenopausal women. There are ongoing trials studying the cognitive benefits of resveratrol in early versus late postmenopausal women.
Chronic ginkgo biloba supplementation has been shown to improve "mental flexibility" in "older and more cognitively impaired" postmenopausal women. However, a combined ginkgo biloba and ginger supplementation had no effect on memory or cognitive performance in postmenopausal women.
Dehydroepiandrosterone (DHEA) supplementation may improve cognition in women with postmenopausal confusion but does not benefit those without cognitive impairment. More long-term studies are required to study the efficacy of DHEA and its role in cognition and postmenopausal women.
Exercise
Regular physical exercise may prevent symptoms of postmenopausal confusion. Studies have shown an association between exercise and "lower rates of cognitive decline" in postmenopausal women. On the other hand, an inactive lifestyle has been strongly associated with "higher rates of cognitive decline" in postmenopausal women.
Mind-body therapy
Studies have shown benefits of mind-body therapies in women with postmenopausal symptoms including cognitive impairment. Mindfulness, hypnosis, and yoga may help decrease symptoms of insomnia, depression, or hot flashes in postmenopausal women which leads to better cognitive performance.
See also
Chronic fatigue syndrome
Estrogen and neurodegenerative diseases
Menopause in the workplace
References
Further reading
Hormones
Menopause
Neuroscience
Midwifery
Women's mental health
Aging-associated diseases | Postmenopausal confusion | Biology | 2,305 |
3,536,499 | https://en.wikipedia.org/wiki/E4M | Encryption for the Masses (E4M) is a free disk encryption software for Windows NT and Windows 9x families of operating systems. E4M is discontinued; it is no longer maintained. Its author, former criminal cartel boss Paul Le Roux, joined Shaun Hollingworth (the author of Scramdisk) to produce the commercial encryption product DriveCrypt for the security company SecurStar.
The popular source-available freeware program TrueCrypt is based on E4M's source code. However, TrueCrypt uses a different container format than E4M, which makes it impossible to use one of these programs to access an encrypted volume created by the other.
Allegation of stolen source code
Shortly after TrueCrypt version 1.0 was released in February 2004, the TrueCrypt Team reported receiving emails from Wilfried Hafner, manager of SecurStar, claiming that Paul Le Roux had stolen the source code of E4M from SecurStar as an employee. According to the TrueCrypt Team, the emails stated that Le Roux illegally distributed E4M, and authored an illegal license permitting anyone to base derivative work on E4M and distribute it freely, which Hafner alleges Le Roux did not have any right to do, claiming that all versions of E4M always belonged only to SecurStar. For a time, this led the TrueCrypt Team to stop developing and distributing TrueCrypt.
See also
On-the-fly encryption (OTFE)
Disk encryption
Disk encryption software
Comparison of disk encryption software
References
External links
Archived version of official website
Cryptographic software
Disk encryption
Freeware | E4M | Mathematics | 352 |
42,268,134 | https://en.wikipedia.org/wiki/Calcium%20looping | Calcium looping (CaL), or the regenerative calcium cycle (RCC), is a second-generation carbon capture technology. It is the most developed form of carbonate looping, where a metal (M) is reversibly reacted between its carbonate form (MCO3) and its oxide form (MO) to separate carbon dioxide from other gases coming from either power generation or an industrial plant. In the calcium looping process, the two species are calcium carbonate (CaCO3) and calcium oxide (CaO). The captured carbon dioxide can then be transported to a storage site, used in enhanced oil recovery or used as a chemical feedstock. Calcium oxide is often referred to as the sorbent.
Calcium looping is being developed as it is a more efficient, less toxic alternative to current post-combustion capture processes such as amine scrubbing. It also has interesting potential for integration with the cement industry.
Basic concept
CaCO3 ←→ CaO + CO2 ΔH = +178 kJ/mol
There are two main steps in CaL:
Calcination: Solid calcium carbonate is fed into a calciner, where it is heated to 850-950 °C to cause it to thermally decompose into gaseous carbon dioxide and solid calcium oxide (CaO). The almost-pure stream of CO2 is then removed and purified so that it is suitable for storage or use. This is the 'forward' reaction in the equation above.
Carbonation: The solid CaO is removed from the calciner and fed into the carbonator. It is cooled to approximately 650 °C and is brought into contact with a flue gas containing a low to medium concentration of CO2. The CaO and CO2 react to form CaCO3, thus reducing the CO2 concentration in the flue gas to a level suitable for emission to the atmosphere. This is the 'backward' reaction in the equation above.
Note that carbonation is calcination in reverse.
Whilst the process can be theoretically performed an infinite number of times, the calcium oxide sorbent degrades as it is cycled. For this reason, it is necessary to remove (purge) some of the sorbent from the system and replace it with fresh sorbent (often in the carbonate form). The size of the purge stream compared with the amount of sorbent going round the cycle affects the process considerably.
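The impact of this sorbent decay on the purge requirement can be illustrated numerically. The sketch below uses the semi-empirical decay expression of Grasa and Abanades, with the fitted constants commonly quoted for natural limestones; the values are illustrative rather than plant measurements.

def carrying_capacity(n_cycles, k=0.52, x_r=0.075):
    # Semi-empirical decay law (Grasa & Abanades, 2006):
    #   X_N = 1 / (1/(1 - X_r) + k*N) + X_r
    # where X_N is the fraction of CaO carbonated in the fast stage after
    # N cycles, and k, X_r are fitted constants (illustrative values here).
    return 1.0 / (1.0 / (1.0 - x_r) + k * n_cycles) + x_r

for n in (1, 10, 50, 200):
    print(f"cycle {n:>3}: carrying capacity = {carrying_capacity(n):.2f}")

The rapid fall of the carrying capacity toward a small residual value is what sets the size of the purge and fresh make-up streams.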
Background
In the Ca-looping process, a CaO-based sorbent, typically derived from limestone, reacts via the reversible reaction described in the equation above and is repeatedly cycled between two vessels.
The forward, endothermic step is called calcination while the backward, exothermic step is carbonation.
A typical Ca-looping process for post-combustion CO2 capture is shown in Figure 1, followed by a more detailed description.
Flue gas containing CO2 is fed to the first vessel (the carbonator), where carbonation occurs. The CaCO3 formed is passed to another vessel (the calciner). Calcination occurs at this stage, and the regenerated CaO is quickly passed back to the carbonator, leaving a pure CO2 stream behind. As this cycle continues, CaO sorbent is constantly replaced by fresh (reactive) sorbent. The highly concentrated CO2 from the calciner is suitable for sequestration, and the spent CaO has potential uses elsewhere, most notably in the cement industry. The heat necessary for calcination can be provided by oxy-combustion of coal, as described below.
Oxy-combustion of coal: Pure oxygen rather than air is used for combustion, eliminating the large amount of nitrogen in the flue-gas stream. After particulate matter is removed, flue gas consists only of water vapor and CO2, plus smaller amounts of other pollutants. After compression of the flue gas to remove water vapor and additional removal of air pollutants, a nearly pure CO2 stream suitable for storage is produced.
The carbonator's operating temperature of 650-700 °C is chosen as a compromise between higher equilibrium (maximum) capture at lower temperatures due to the exothermic nature of the carbonation step, and a decreased reaction rate. Similarly, the temperature of >850 °C in the calciner strikes a balance between increased rate of calcination at higher temperatures and reduced rate of degradation of CaO sorbent at lower temperatures.
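The compromise behind these two operating temperatures can be seen from the equilibrium CO2 partial pressure over the CaO/CaCO3 system. The sketch below integrates the van't Hoff relation using the reaction enthalpy quoted above (+178 kJ/mol, treated as temperature-independent) and the common textbook anchor that the equilibrium pressure reaches about 1 atm near 895 °C; both are simplifying assumptions, so the numbers are order-of-magnitude estimates only.

import math

R = 8.314                    # gas constant, J/(mol K)
DH = 178e3                   # reaction enthalpy from this article, J/mol
T_REF, P_REF = 1168.0, 1.0   # assumed anchor: ~1 atm CO2 near 895 C

def p_eq_atm(t_kelvin):
    # van't Hoff with constant enthalpy: ln(P/P_ref) = -(dH/R)(1/T - 1/T_ref)
    return P_REF * math.exp(-(DH / R) * (1.0 / t_kelvin - 1.0 / T_REF))

for t_c in (650, 700, 850, 950):
    print(f"{t_c} C: equilibrium CO2 pressure ~ {p_eq_atm(t_c + 273.15):.3f} atm")

At 650 °C the estimated equilibrium pressure lies well below the CO2 partial pressure of typical flue gas, so carbonation proceeds, while above roughly 900 °C it exceeds 1 atm, allowing calcination into a nearly pure CO2 atmosphere.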
Process description
CaL is usually designed using a dual fluidised bed system to ensure sufficient contact between the gas streams and the sorbent. The calciner and carbonator are fluidised beds with associated process equipment for separating the gases and solids attached (such as cyclones). Calcination is an endothermic process and as such requires the application of heat to the calciner. The opposite reaction, carbonation, is exothermic and heat must be removed. Since the exothermic reaction happens at about 650 °C and the endothermic reaction at 850-950 °C, the heat from the carbonator cannot be directly used to heat the calciner.
The fluidisation of the solid bed in the carbonator is achieved by passing the flue gas through the bed. In the calciner, some of the recovered CO2 is recycled through the system. Some oxygen may also be passed through the reactor if fuel is being burned in the calciner to provide energy.
Provision of energy to the calciner
Heat can be provided for the endothermic calcination step either directly or indirectly.
Direct provision of heat involves the combustion of fuel in the calciner itself (fluidised bed combustion). This is generally assumed to be done under oxy-fuel conditions; i.e. oxygen rather than air is used to burn the fuel to prevent dilution of the CO2 with nitrogen. The provision of oxygen for the combustion uses much electricity and is associated with high investment costs. Other air separation processes are being developed.
The penalties of calcium looping may be reduced by providing the heat for the calcination indirectly. This can be done in one of the following ways:
Combustion of fuel in an external chamber and conduction of energy in to the vessel
Combustion of fuel in an external chamber and use of a heat transfer medium.
Indirect methods are generally less efficient but do not require the provision of oxygen for combustion within the calciner to prevent dilution. The flue gas from the combustion of fuel in the indirect method could be mixed with the flue gas from the process that the CaL plant is attached to and passed through the carbonator to capture the CO2.
One efficient way of transferring heat into the calciner is by means of heat pipes. Indirectly heated calcium looping (IHCaL) using heat pipes has high potential to decarbonize the lime and cement industry, and deploying this technology with refuse-derived fuels would allow net-negative CO2 emissions to be achieved.
Recovery of energy from the carbonator
Although the heat from the carbonator is not at a high enough temperature to be used in the calciner, the high temperatures involved (>600 °C) mean that a relatively efficient Rankine cycle for generating electricity can be operated.
Note that the waste heat from the market-leading amine scrubbing CO2 capture process is emitted at a maximum of 150 °C. The low temperature of this heat means that it contains much less exergy and can generate much less electricity through a Rankine or organic Rankine cycle.
This electricity generation is one of the main benefits of CaL over lower-temperature post-combustion capture processes as the electricity is an extra revenue stream (or reduces costs).
Sorbent degradation
It has been shown that the activity of the sorbent reduces quite markedly in laboratory, bench-scale and pilot plant tests. This degradation has been attributed to three main mechanisms, as shown below.
Attrition
Calcium oxide is friable, that is, quite brittle. In fluidised beds, the calcium oxide particles can break apart upon collision with the other particles in the fluidised bed or the vessel containing it. The problem seems to be greater in pilot plant tests than at a bench scale.
Sulfation
Sulfation is a relatively slow reaction (several hours) compared with carbonation (<10 minutes); thus it is more likely that SO2 will come into contact with CaCO3 than CaO. However, both reactions are possible, and are shown below.
Indirect sulfation: CaO + SO2 + 1/2 O2 → CaSO4
Direct sulfation: CaCO3 + SO2 + 1/2 O2 → CaSO4 + CO2
Because calcium sulfate has a greater molar volume than either CaO or CaCO3, a sulfated layer will form on the outside of the particle, which can prevent the uptake of CO2 by the CaO further inside the particle. Furthermore, the temperature at which calcium sulfate dissociates to CaO and SO2 is relatively high, precluding sulfation's reversibility at the conditions present in CaL.
Technical implications
Calcium looping technology offers several technical advantages over amine scrubbing for carbon capture. Firstly, both carbonator and calciner can use fluidized bed technology, due to the good gas-solid contacting and uniform bed temperature. Fluidized bed technology has already been demonstrated at large scale: large (460MWe) atmospheric and pressurized systems exist, and there is not a need for intensive scaling up as there is for the solvent scrubbing towers used in amine scrubbing.
Also, the calcium looping process is energy efficient. The heat required for the endothermic calcination of CaCO3 and the heat required to raise the temperature of fresh limestone from ambient temperature can be provided by in-situ oxy-fired combustion of fuel in the calciner. Although additional energy is required to separate O2 from N2, the majority of the energy input can be recovered because the carbonator reaction is exothermic and CO2 from the calciner can be used to power a steam cycle. A solid purge heat exchanger can also be utilized to recover energy from the deactivated CaO and coal ashes from the calciner. As a result, a relatively small efficiency penalty is imposed on the power process, where the efficiency penalty refers to the power losses for CO2 compression, air separation and steam generation. It is estimated at 6–8 percentage points, compared to 9.5–12.5 points for post-combustion amine capture.
The main shortcoming of Ca-looping technology is the decreased reactivity of CaO through multiple calcination-carbonation cycles. This can be attributed to sintering and the permanent closure of small pores during carbonation.
Closure of small pores
The carbonation step is characterized by a fast initial reaction rate abruptly followed by a slow reaction rate (Figure 2). The carrying capacity of the sorbent is defined as the number of moles of CO2 reacted in the period of fast reaction rate with respect to that of the reaction stoichiometry for complete conversion of CaO to CaCO3. As seen in Figure 2, while mass after calcination remains constant, the mass change upon carbonation—the carrying capacity—decreases with a large number of cycles. In calcination, porous CaO (molar volume ≈ 16.9 cm3/mol) is formed in place of CaCO3 (≈ 36.9 cm3/mol). On the other hand, in carbonation, the CaCO3 formed on the surface of a CaO particle occupies a larger molar volume. As a result, once a layer of carbonate has formed on the surface (including on the large internal surface of porous CaO), it impedes further CO2 capture. This product layer grows over the pores and seals them off, forcing carbonation to follow a slower, diffusion-dependent mechanism.
Sintering
CaO is also prone to sintering, or change in pore shape, shrinkage and grain growth during heating. Ionic compounds such as CaO mostly sinter because of volume diffusion or lattice diffusion mechanics. As described by sintering theory, vacancies generated by temperature sensitive defects direct void sites from smaller to larger ones, explaining the observed growth of large pores and the shrinkage of small pores in cycled limestone. It was found that sintering of CaO increases at higher temperatures and longer calcination durations, whereas carbonation time has minimal effect on particle sintering. A sharp increase in sintering of particles is observed at temperatures above 1173 K, causing a reduction in reactive surface area and a corresponding decrease in reactivity.
Solutions: Several options to reduce sorbent deactivation are currently being researched. An ideal sorbent would be mechanically strong, maintain its reactive surface through repeated cycles, and be reasonably inexpensive. Using thermally pre-activated particles or reactivating spent sorbents through hydration are two promising options. Thermally pre-activated particles have been found to retain activity for up to a thousand cycles. Similarly, particles reactivated by hydration show improved long-term conversions (after ~20 cycles).
Disposal of waste sorbent
Properties of waste sorbent
After cycling several times and being removed from the calcium loop, the waste sorbent will have attrited, sulfated and become mixed with the ash from any fuel used. The amount of ash in the waste sorbent will depend on the fraction of the sorbent being removed and the ash and calorific content of the fuel. The size fraction of the sorbent is dependent on the original size fraction but also the number of cycles used and the type of limestone used.
Disposal routes
Proposed disposal routes of waste sorbent include:
Landfill;
Disposal at sea;
Use in cement manufacture;
Use in flue gas desulfurisation (FGD).
The lifecycle CO2 emissions for power generation with CaL and the first three disposal techniques have been calculated. Before disposal of the CaO, coal power with CaL has a similar level of lifecycle emissions to amine scrubbing, but once the CO2-absorbing properties of the disposed CaO are taken into account, CaL becomes significantly less polluting. Ocean disposal was found to be the best, but current laws relating to dumping waste at sea prevent this. Next best was use in cement manufacture, reducing emissions over an unabated coal plant by 93%.
Use in lime and cement manufacture
The manufacture of lime and cement is responsible for approximately 8% of the world's CO2 emissions. Around 65% of this CO2 comes from the calcination of calcium carbonate as shown earlier in this article, and the rest from fuel combustion. By replacing some or all of the calcium carbonate entering the plant with waste calcium oxide, the CO2 caused from calcination can be avoided, as well as some of the CO2 from fossil fuel combustion.
This calcium oxide could be sourced from other point sources of CO2 such as power stations, but most effort has been focussed on integrating calcium looping with Portland cement manufacture. By replacing the calciner in the cement plant with a calcium looping plant, it should be possible to capture 90% or more of the CO2 relatively inexpensively. There are alternative set-ups such as placing the calcium looping plant in the preheater section so as to make the plant as efficient as possible or to indirectly heat the calciner for increased energy efficiency.
Some work has been undertaken into whether calcium looping affects the quality of the Portland cement produced, but results so far seem to suggest that the production of strength-giving phases such as alite are similar for calcium looped and non-calcium looped cement.
Direct Separation Technology
Calix Ltd has developed a new type of kiln that enables the CO2 from the calcination process to be driven off as a pure stream. Calix achieves this by calcining finely ground CaCO3 continuously down vertical reactor tubes. The reactor tubes are heated from the outside using electricity or fuel, ensuring the CO2 stream is pure and not contaminated with air or combustion products.
This technology has been successfully piloted in Europe by a cooperative industry group with support from the European Union as the Low Emission Intensity Lime And Cement (LEILAC1) reactor project. The study report concluded that the technology could capture CO2 from full-scale lime and cement kilns at €14 to €24/t. Transport and storage costs are not included in this estimate and will be dependent upon the infrastructure available near the cement or lime plant.
A FEED study is underway for a larger commercial demonstration kiln proposed for the Heidelberg Cement plant in Hannover (LEILAC2). This commercial demonstration kiln is designed to capture 100 ktpa of CO2. Leilac-2 passed its Financial Investment Decision (FID) in March 2022, and its Front End Engineering Design (FEED) Study Summary was completed and published on 13 October 2023, leading to a new and improved design and revised timeline. The next milestone is procurement of long lead items, currently underway (2023).
This type of kiln is also being studied as a potential method to decarbonise shipping through both looping and single-use processes. The single-use process would involve sowing CaCO3 over the ocean, thereby permanently capturing additional carbon from the ocean as the CaCO3 reacts to form Ca(HCO3)2, and reversing ocean acidification.
Economic implications
Calcium looping has several economic advantages.
Cost per metric ton for CO2 captured
Firstly, Ca-looping offers a greater cost advantage than conventional amine-scrubbing technologies. The cost per metric ton of CO2 captured through Ca-looping is ~$23.70, whereas that for CO2 captured through amine scrubbing is about $35–$96. This can be attributed to the high availability and low cost of the CaO sorbent (derived from limestone) as compared to MEA. Also, Ca-looping imposes a lower energy penalty than amine scrubbing, resulting in lower energy costs. The amine scrubbing process is energy intensive, with approximately 67% of the operating costs going into steam requirements for solvent regeneration. A more detailed comparison of Ca-looping and amine scrubbing is shown below.
Cost of CO2 emissions avoided through Ca-looping
In addition, the cost of CO2 emissions avoided through Ca-looping is lower than the cost of emissions avoided via an oxyfuel combustion process (~US$23.8/t). This can be explained by the fact that, despite the capital costs incurred in constructing the carbonator for Ca-looping, CO2 will not only be captured from the oxy-fired combustion, but also from the main combustor (before the carbonator). The oxygen required in the calciners is only 1/3 that required for an oxyfuel process, lowering air separation unit capital costs and operating costs.
Sensitivity Analysis: Figure 3 shows how varying 8 separate parameters affects the cost per metric ton of CO2 captured through Ca-looping. It is evident that the dominant variables that affect cost are related to sorbent use, the Ca/C ratio and the CaO deactivation ratio. This is because the large sorbent quantities required dominate the economics of the capture process. Low costs of CO2 avoided for the indirectly heated Ca-looping process have been reported for integrated concepts in lime production.
These variables should therefore be taken into account to achieve further cost reductions in the Ca-looping process. The cost of limestone is largely driven by market forces, and is outside the control of the plant. Currently, carbonators require a Ca/C ratio of 4 for effective CO2 capture. However, if the Ca/C ratio or CaO deactivation is reduced (i.e. the sorbent can be made to work more efficiently), the reduction in material consumption and waste can lower feedstock demand and operating costs.
Cement production
Finally, favorable economics can be achieved by using the purged material from the calcium looping cycle in cement production. The raw feed for cement production includes ~ 85 wt% limestone with the remaining material consisting of clay and additives (e.g. SiO2, Al2O3 etc.). The first step in the process involves calcinating limestone to produce CaO, which is then mixed with other materials in a kiln to produce clinker.
Using purged material from a Ca-looping system would reduce the raw material costs for cement production. Waste CaO and ash can be used in place of CaCO3 (the main constituent cement feed). The ash could also fulfill the aluminosilicate requirements otherwise supplied by additives. Since over 60% of the energy used in cement production goes into heat input for the precalciner, this integration with Ca-looping and the consequent reduced need for a calcination step, could lead to substantial energy savings (EU, 2001). However, there are problems with using the waste CaO in cement manufacture. If the technology is applied on a large scale, the purge rate of CaO should be optimized to minimize waste.
Political and environmental implications
To fully gauge the viability of calcium looping as a capture process, it is necessary to consider the political, environmental, and health effects of the process as well.
Political implications
Though many recent scientific reports (e.g., the seven-wedge stabilization plan by Pacala and Socolow) convey an urgent need to deploy CCS, this urgency has not spread to the political establishment, mainly due to the high costs and energy penalty of CCS. The economics of calcium looping are integral to its political viability. One economic and political advantage is the ability for Ca-looping to be retrofitted onto existing power plants, rather than requiring new plants to be built. The IEA sees power plants as an important target for carbon capture, and has set the goal of having all fossil-fuel-based power plants deploy CCS systems by 2040. However, power plants are expensive to build, and long-lived. Retrofitting of post-combustion capture systems, such as Ca-looping, seems to be the only politically and economically viable way to achieve the IEA's goal.
A further political advantage is the potential synergy between calcium looping and cement production. An IEA report concludes that to meet emission reduction goals, there should be 450 CCS projects in India and China by 2050. However, this could be politically difficult, especially with these nations' numerous other development goals. After all, for a politician to commit money to CCS might be less advantageous than to commit it to job schemes or agricultural subsidies. Here, the integration of calcium looping with the prosperous and (particularly with infrastructure expansion in the developing world) vital cement industry might prove compelling to the political establishment.
This potential synergy with the cement industry also provides environmental benefits by simultaneously reducing the waste output of the looping process and decarbonizing cement production. Cement manufacture is energy and resource intensive, consuming 1.5 tonnes of material per tonne of cement produced. In the developing world, economic growth will drive infrastructure growth, increasing cement demand. Deploying a waste product for cement production could therefore have a large, positive environmental impact.
Environmental implications
The starting material for calcium looping is limestone, which is environmentally benign and widely available, accounting for over 10% (by volume) of all sedimentary rock. Limestone is already mined and cheaply obtainable. The mining process has no major known adverse environmental effects, beyond the unavoidable intrusiveness of any mining operation. However, as the following calculation shows, despite integration with the cement industry, waste from Ca-looping can still be a problem.
From the environmental and health standpoint, Ca-looping compares favorably with amine scrubbing. Amine scrubbing is known to generate air pollutants, including amines and ammonia, which can react to form carcinogenic nitrosamines. Calcium looping, on the other hand, does not produce harmful pollutants. In addition, not only does it capture CO2, but it also removes the pollutant SO2 from the flue gas. This is both an advantage and disadvantage, as the air quality improves, but the captured SO2 has a detrimental effect on the cement that is generated from the calcium looping wastes.
Advantages and drawbacks
Advantages of the process
Calcium looping is considered a promising solution for reducing the energy penalty of CO2 capture, and it offers several advantages. Firstly, the method has been shown to yield a low efficiency penalty (5–8 percentage points), whereas other mature CO2 capture systems yield higher penalties (8–12.5 points). Moreover, the method is well suited to a wide range of flue gases. Calcium looping is applicable to new builds and retrofits of existing power stations or other stationary industrial CO2 sources because it can be implemented using large-scale circulating fluidized beds, while other methods such as amine scrubbing would require vastly scaled-up solvent scrubbing towers. In addition, the crushed limestone used as the sorbent is a natural product that is well distributed all over the world, non-hazardous and inexpensive. Many cement manufacturers or power plants located close to limestone sources could conceivably employ calcium looping for CO2 capture, and the waste sorbent can be used in cement manufacture.
Drawbacks
Apart from these advantages, there are several disadvantages to take into consideration. A plant integrating Ca-looping might require a high construction investment because of the high thermal power of the post-combustion calcium loop. The sorbent capacity decreases significantly with each carbonation-calcination cycle, so the calcium-looping unit requires a constant make-up flow of limestone. To increase the long-term reactivity of the sorbent, or to reactivate it, methods such as thermal pretreatment, chemical doping and the production of artificial sorbents are under investigation. The method relies on fluidized bed reactors, which introduce some uncertainties into the process; in particular, attrition of the limestone can be a problem during repeated cycling.
Benefits of calcium looping compared with other post-combustion capture processes
Calcium looping compares favorably with several post-combustion capture technologies. Amine scrubbing is the capture technology closest to being market-ready, and calcium looping has several marked benefits over it. When modeled on a 580 MW coal-fired power plant, calcium looping experienced not only a smaller efficiency penalty (6.7–7.9 percentage points, compared to 9.5 for monoethanolamine and 9 for chilled ammonia) but also a less complex retrofitting process. Both technologies would require the plant to be retrofitted for adoption, but the calcium looping retrofitting process would result in twice the net power output of the scrubbing technology. Furthermore, this advantage can be compounded by introducing technology such as cryogenic O2 separation systems. This increases the efficiency of the calcium looping technology by raising the energy density by 57.4%, making the already low energy penalties even less of an issue.
Calcium looping already has an energy advantage over amine scrubbing, but the main problem is that amine scrubbing is the more market-ready technology. However, the accompanying infrastructure for amine scrubbing includes large solvent scrubbing towers, the likes of which have never been used on an industrial scale. The accompanying infrastructure for calcium looping capture technologies consists of circulating fluidized beds, which have already been implemented on an industrial scale. Although the individual technologies differ in terms of current technological viability, the fact that the infrastructure needed to properly implement an amine scrubbing system has yet to be developed keeps calcium looping competitive from a viability standpoint.
Sample evaluation
Assumptions
For a Ca-looping cycle installed on a 500 MW power plant, the purge rate is 12.6 kg CaO/s.
For the cement production process, 0.65 kg CaO is required/ kg cement produced.
U.S. electric generation capacity (only fossil fuels): Natural gas = 415 GW, Coal= 318 GW & Petroleum = 51 GW
Cement consumption in U.S. = 110,470 × 10³ metric tons = 1.10470 × 10⁸ metric tons = 1.10470 × 10¹¹ kg.
Calculations
For a single Ca-looping cycle installed on a 500 MW power plant:
Amount of CaO from purge annually = 12.6 kg CaO/s × 365 days/year × 24 hr/day × 3600 s/hr = 3.97 × 10⁸ kg CaO/year
Cement that can be obtained from purge annually = 3.97 × 10⁸ kg CaO/year × 1 kg cement/0.65 kg CaO = 6.11 × 10⁸ kg cement/year
Net electricity generation in US: (415 + 318 + 51) GW = 784 GW = 7.84 × 10¹¹ W
Number of 500 MW power plants: 7.84 × 10¹¹ W / 5.00 × 10⁸ W = 1568 power plants
Amount of cement that can be produced from Ca-looping waste: 1568 × 6.11 × 10⁸ kg cement/year = 9.58 × 10¹¹ kg cement/year
Production from Ca-looping waste as percent of total annual cement consumption = [(9.58 × 10¹¹ kg) / (1.10470 × 10¹¹ kg)] × 100 ≈ 870%
Therefore, the amount of cement that could be produced from the Ca-looping waste of all fossil-fuel-based electric power plants in the US would be far greater than net consumption. To make Ca-looping more viable, waste must be minimized (i.e. sorbent degradation reduced) to ideally about 1/10th of current levels.
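For reference, the arithmetic above can be checked directly from the stated assumptions; the short script below uses only the figures given in this section.

SECONDS_PER_YEAR = 365 * 24 * 3600

purge_rate = 12.6          # kg CaO/s per 500 MW plant (assumption above)
cao_per_cement = 0.65      # kg CaO per kg cement (assumption above)
fossil_capacity = 784e9    # W: 415 + 318 + 51 GW
plant_size = 500e6         # W
us_cement_use = 1.1047e11  # kg cement/year

cao_per_plant = purge_rate * SECONDS_PER_YEAR      # ~3.97e8 kg CaO/year
cement_per_plant = cao_per_plant / cao_per_cement  # ~6.11e8 kg cement/year
n_plants = fossil_capacity / plant_size            # 1568 plants
total_cement = n_plants * cement_per_plant         # ~9.58e11 kg cement/year

# Prints ~867%, i.e. the ~870% quoted above after rounding.
print(f"Share of US cement consumption: {100 * total_cement / us_cement_use:.0f}%")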
References
Carbon capture and storage
Chemical engineering
Chemical processes | Calcium looping | Chemistry,Engineering | 6,125 |
1,166,647 | https://en.wikipedia.org/wiki/Magnetic%20reconnection | Magnetic reconnection is a physical process occurring in electrically conducting plasmas, in which the magnetic topology is rearranged and magnetic energy is converted to kinetic energy, thermal energy, and particle acceleration. Magnetic reconnection involves plasma flows at a substantial fraction of the Alfvén wave speed, which is the fundamental speed for mechanical information flow in a magnetized plasma.
The concept of magnetic reconnection was developed in parallel by researchers working in solar physics and in the interaction between the solar wind and magnetized planets. This reflects the bidirectional nature of reconnection, which can either disconnect formerly connected magnetic fields or connect formerly disconnected magnetic fields, depending on the circumstances.
Ron Giovanelli is credited with the first publication invoking magnetic energy release as a potential mechanism for particle acceleration in solar flares. Giovanelli proposed in 1946 that solar flares stem from the energy obtained by charged particles influenced by induced electric fields within close proximity of sunspots. In the years 1947-1948, he published more papers further developing the reconnection model of solar flares. In these works, he proposed that the mechanism occurs at points of neutrality (weak or null magnetic field) within structured magnetic fields.
James Dungey is credited with first use of the term “magnetic reconnection” in his 1950 PhD thesis, to explain the coupling of mass, energy and momentum from the solar wind into Earth's magnetosphere. The concept was published for the first time in a seminal paper in 1961. Dungey coined the term "reconnection" because he envisaged field lines and plasma moving together in an inflow toward a magnetic neutral point (2D) or line (3D), breaking apart and then rejoining again but with different magnetic field lines and plasma, in an outflow away from the magnetic neutral point or line.
In the meantime, the first theoretical framework of magnetic reconnection was established by Peter Sweet and Eugene Parker at a conference in 1956. Sweet pointed out that by pushing two plasmas with oppositely directed magnetic fields together, resistive diffusion is able to occur on a length scale much shorter than a typical equilibrium length scale. Parker was in attendance at this conference and developed scaling relations for this model during his return travel.
Fundamental principles
Magnetic reconnection is a breakdown of "ideal-magnetohydrodynamics" and so of "Alfvén's theorem" (also called the "frozen-in flux theorem") which applies to large-scale regions of a highly-conducting magnetoplasma, for which the Magnetic Reynolds Number is very large: this makes the convective term in the induction equation dominate in such regions. The frozen-in flux theorem states that in such regions the field moves with the plasma velocity (the mean of the ion and electron velocities, weighted by their mass). The reconnection breakdown of this theorem occurs in regions of large magnetic shear (by Ampère's law these are current sheets) which are regions of small width where the Magnetic Reynolds Number can become small enough to make the diffusion term in the induction equation dominate, meaning that the field diffuses through the plasma from regions of high field to regions of low field. In reconnection, the inflow and outflow regions both obey Alfvén's theorem and the diffusion region is a very small region at the centre of the current sheet where field lines diffuse together, merge and reconfigure such that they are transferred from the topology of the inflow regions (i.e., along the current sheet) to that of the outflow regions (i.e., threading the current sheet). The rate of this magnetic flux transfer is the electric field associated with both the inflow and the outflow and is called the "reconnection rate".
The equivalence of magnetic shear and current can be seen from one of Maxwell's equations, the Ampère–Maxwell law:

$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$
In a plasma (ionized gas), for all but exceptionally high frequency phenomena, the second term on the right-hand side of this equation, the displacement current, is negligible compared to the effect of the free current and this equation reduces to Ampère's law for free charges. The displacement current is neglected in both the Parker-Sweet and Petschek theoretical treatments of reconnection, discussed below, and in the derivation of ideal MHD and Alfvén's theorem which is applied in those theories everywhere outside the small diffusion region.
The resistivity of the current layer allows magnetic flux from either side to diffuse through the current layer, cancelling out flux from the other side of the boundary. However, the small spatial scale of the current sheet makes the Magnetic Reynolds Number small and so this alone can make the diffusion term dominate in the induction equation without the resistivity being enhanced. When the diffusing field lines from the two sides of the boundary touch, they form the separatrices and so have both the topology of the inflow region (i.e. along the current sheet) and the outflow region (i.e., threading the current sheet). In magnetic reconnection the field lines evolve from the inflow topology through the separatrices topology to the outflow topology. When this happens, the plasma is pulled out by the magnetic tension force acting on the reconfigured field lines, which ejects it along the current sheet. The resulting drop in pressure pulls more plasma and magnetic flux into the central region, yielding a self-sustaining process. The importance of Dungey's concept of a localized breakdown of ideal-MHD is that the outflow along the current sheet prevents the build-up in plasma pressure that would otherwise choke off the inflow. In Parker-Sweet reconnection the outflow is only along a thin layer at the centre of the current sheet and this limits the reconnection rate that can be achieved to low values. On the other hand, in Petschek reconnection the outflow region is much broader, being between shock fronts (now thought to be Alfvén waves) that stand in the inflow: this allows much faster escape of the plasma frozen-in on reconnected field lines and the reconnection rate can be much higher.
Dungey coined the term "reconnection" because he initially envisaged field lines of the inflow topology breaking and then joining together again in the outflow topology. However, this means that magnetic monopoles would exist, albeit for a very limited period, which would violate Maxwell's equation that the divergence of the field is zero. By considering the evolution through the separatrix topology, however, the need to invoke magnetic monopoles is avoided. Global numerical MHD models of the magnetosphere, which use the equations of ideal MHD, still simulate magnetic reconnection even though it is a breakdown of ideal MHD. The reason is close to Dungey's original thoughts: at each time step of the numerical model the equations of ideal MHD are solved at each grid point of the simulation to evaluate the new field and plasma conditions. The magnetic field lines then have to be re-traced. The tracing algorithm makes errors at thin current sheets and joins up field lines so that they thread the current sheet where previously they ran along it. This is often called "numerical resistivity" and the simulations have predictive value because the error propagates according to a diffusion equation.
A current problem in plasma physics is that observed reconnection happens much faster than predicted by MHD in high Lundquist number plasmas (i.e. fast magnetic reconnection). Solar flares, for example, proceed 13–14 orders of magnitude faster than a naive calculation would suggest, and several orders of magnitude faster than current theoretical models that include turbulence and kinetic effects. One possible mechanism to explain the discrepancy is that the electromagnetic turbulence in the boundary layer is sufficiently strong to scatter electrons, raising the plasma's local resistivity. This would allow the magnetic flux to diffuse faster.
Properties
Physical interpretation
The qualitative description of the reconnection process is such that magnetic field lines from different magnetic domains (defined by the field line connectivity) are spliced to one another, changing their patterns of connectivity with respect to the sources. It is a violation of an approximate conservation law in plasma physics, called Alfvén's theorem (also called the "frozen-in flux theorem") and can concentrate mechanical or magnetic energy in both space and time. Solar flares, the largest explosions in the Solar System, may involve the reconnection of large systems of magnetic flux on the Sun, releasing, in minutes, energy that has been stored in the magnetic field over a period of hours to days. Magnetic reconnection in Earth's magnetosphere is one of the mechanisms responsible for the aurora, and it is important to the science of controlled nuclear fusion because it is one mechanism preventing magnetic confinement of the fusion fuel.
In an electrically conductive plasma, magnetic field lines are grouped into 'domains'— bundles of field lines that connect from a particular place to another particular place, and that are topologically distinct from other field lines nearby. This topology is approximately preserved even when the magnetic field itself is strongly distorted by the presence of variable currents or motion of magnetic sources, because effects that might otherwise change the magnetic topology instead induce eddy currents in the plasma; the eddy currents have the effect of canceling out the topological change.
Types of reconnection
In two dimensions, the most common type of magnetic reconnection is separator reconnection, in which four separate magnetic domains exchange magnetic field lines. Domains in a magnetic plasma are separated by separatrix surfaces: curved surfaces in space that divide different bundles of flux. Field lines on one side of the separatrix all terminate at a particular magnetic pole, while field lines on the other side all terminate at a different pole of similar sign. Since each field line generally begins at a north magnetic pole and ends at a south magnetic pole, the most general way of dividing simple flux systems involves four domains separated by two separatrices: one separatrix surface divides the flux into two bundles, each of which shares a south pole, and the other separatrix surface divides the flux into two bundles, each of which shares a north pole. The intersection of the separatrices forms a separator, a single line that is at the boundary of the four separate domains. In separator reconnection, field lines enter the separator from two of the domains, and are spliced one to the other, exiting the separator in the other two domains (see the first figure).
In three dimensions, the geometry of the field lines becomes more complicated than in the two-dimensional case and it is possible for reconnection to occur in regions where a separator does not exist but where the field-line connectivity changes across steep gradients. These regions are known as quasi-separatrix layers (QSLs), and have been observed in theoretical configurations and solar flares.
Theoretical descriptions
Slow reconnection: Sweet–Parker model
The Sweet–Parker model describes time-independent magnetic reconnection in the resistive MHD framework when the reconnecting magnetic fields are antiparallel (oppositely directed) and effects related to viscosity and compressibility are unimportant. The initial velocity is simply an $\mathbf{E} \times \mathbf{B}$ velocity, so

$E_y = v_{\text{in}} B_{\text{in}}$

where $E_y$ is the out-of-plane electric field, $v_{\text{in}}$ is the characteristic inflow velocity, and $B_{\text{in}}$ is the characteristic upstream magnetic field strength. By neglecting displacement current, the low-frequency Ampère's law, $\nabla \times \mathbf{B} = \mu_0 \mathbf{J}$, gives the relation

$J_y \sim \frac{B_{\text{in}}}{\mu_0 \delta}$

where $\delta$ is the current sheet half-thickness. This relation uses that the magnetic field reverses over a distance of $\sim 2\delta$. By matching the ideal electric field outside of the layer with the resistive electric field inside the layer (using Ohm's law), we find that

$v_{\text{in}} = \frac{E_y}{B_{\text{in}}} \sim \frac{\eta}{\delta}$

where $\eta$ is the magnetic diffusivity. When the inflow density is comparable to the outflow density, conservation of mass yields the relationship

$v_{\text{in}} L \sim v_{\text{out}} \delta$

where $L$ is the half-length of the current sheet and $v_{\text{out}}$ is the outflow velocity. The left and right hand sides of the above relation represent the mass flux into the layer and out of the layer, respectively. Equating the upstream magnetic pressure with the downstream dynamic pressure gives

$\frac{B_{\text{in}}^2}{2\mu_0} \sim \frac{\rho v_{\text{out}}^2}{2}$

where $\rho$ is the mass density of the plasma. Solving for the outflow velocity then gives

$v_{\text{out}} \sim \frac{B_{\text{in}}}{\sqrt{\mu_0 \rho}} \equiv v_A$

where $v_A$ is the Alfvén velocity. With the above relations, the dimensionless reconnection rate $R$ can then be written in two forms, the first in terms of $(\eta, \delta, v_A)$ using the result earlier derived from Ohm's law, the second in terms of $(\delta, L)$ from the conservation of mass as

$R = \frac{v_{\text{in}}}{v_A} \sim \frac{\eta}{\delta v_A} \sim \frac{\delta}{L}$

Since the dimensionless Lundquist number $S$ is given by

$S \equiv \frac{L v_A}{\eta}$

the two different expressions of $R$ are multiplied by each other and then square-rooted, giving a simple relation between the reconnection rate and the Lundquist number

$R \sim \left(\frac{\eta}{\delta v_A} \cdot \frac{\delta}{L}\right)^{1/2} = S^{-1/2}$
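As a rough order-of-magnitude illustration (the parameter values are typical textbook figures for the solar corona, not taken from this article): with $S \sim 10^{12}$, the Sweet–Parker rate is $R \sim S^{-1/2} \sim 10^{-6}$. For a coronal length scale $L \sim 10^{7}\,\text{m}$ and Alfvén speed $v_A \sim 10^{6}\,\text{m/s}$, the Alfvén time is $\tau_A = L/v_A \sim 10\,\text{s}$, so the Sweet–Parker reconnection time is $\tau \sim \tau_A / R \sim 10^{7}\,\text{s}$, i.e. months, whereas flares release their energy in minutes to hours. This is the quantitative sense in which the model is "too slow".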
Sweet–Parker reconnection allows for reconnection rates much faster than global diffusion, but is not able to explain the fast reconnection rates observed in solar flares, the Earth's magnetosphere, and laboratory plasmas. Additionally, Sweet–Parker reconnection neglects three-dimensional effects, collisionless physics, time-dependent effects, viscosity, compressibility, and downstream pressure. Numerical simulations of two-dimensional magnetic reconnection typically show agreement with this model. Results from the Magnetic Reconnection Experiment (MRX) of collisional reconnection show agreement with a generalized Sweet–Parker model which incorporates compressibility, downstream pressure and anomalous resistivity.
Fast reconnection: Petschek model
The fundamental reason that Petschek reconnection is faster than Parker-Sweet is that it broadens the outflow region and thereby removes some of the limitation caused by the build up in plasma pressure. The inflow velocity, and thus the reconnection rate, can only be very small if the outflow region is narrow. In 1964, Harry Petschek proposed a mechanism where the inflow and outflow regions are separated by stationary slow mode shocks that stand in the inflows. The aspect ratio of the diffusion region is then of order unity and the maximum reconnection rate becomes

$\frac{v_{\text{in}}}{v_A} \sim \frac{\pi}{8 \ln S}$
This expression allows for fast reconnection and is almost independent of the Lundquist number. Theory and numerical simulations show that most of the actions of the shocks that were proposed by Petschek can be carried out by Alfvén waves and in particular rotational discontinuities (RDs). In cases of asymmetric plasma densities on the two sides of the current sheet (as at Earth's dayside magnetopause) the Alfvén wave that propagates into the inflow on the higher-density side (in the case of the magnetopause, the denser magnetosheath) has a lower propagation speed, and so the field rotation becomes increasingly concentrated at that RD as the field line propagates away from the reconnection site: hence the magnetopause current sheet becomes increasingly concentrated in the outer, slower, RD.
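Continuing the illustrative numbers used above for the Sweet–Parker estimate: with $S \sim 10^{12}$, $\ln S \approx 28$, so the Petschek rate is $\pi/(8 \ln S) \approx 0.014$, roughly four orders of magnitude faster than the corresponding Sweet–Parker value of $\sim 10^{-6}$.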
Simulations of resistive MHD reconnection with uniform resistivity showed the development of elongated current sheets in agreement with the Sweet–Parker model rather than the Petschek model. When a localized anomalously large resistivity is used, however, Petschek reconnection can be realized in resistive MHD simulations. Because the use of an anomalous resistivity is only appropriate when the particle mean free path is large compared to the reconnection layer, it is likely that other collisionless effects become important before Petschek reconnection can be realized.
Anomalous resistivity and Bohm diffusion
In the Sweet–Parker model, the common assumption is that the magnetic diffusivity is constant. This can be estimated using the equation of motion for an electron with mass $m_e$ and electric charge $e$:

$m_e \frac{d\mathbf{v}}{dt} = -e\mathbf{E} - m_e \nu \mathbf{v}$

where $\nu$ is the collision frequency. Since in the steady state $d\mathbf{v}/dt = 0$, the above equation along with the definition of electric current, $\mathbf{J} = -e n \mathbf{v}$, where $n$ is the electron number density, yields

$\eta = \frac{m_e \nu}{\mu_0 n e^2}$

Nevertheless, if the drift velocity of electrons exceeds the thermal velocity of the plasma, a steady state cannot be achieved and the magnetic diffusivity should be much larger than what is given in the above. This is called anomalous resistivity, $\eta_{\text{anom}}$, which can enhance the reconnection rate in the Sweet–Parker model by a factor of $\sqrt{\eta_{\text{anom}}/\eta}$.
Another proposed mechanism is known as the Bohm diffusion across the magnetic field. This replaces the Ohmic resistivity with a Bohm-type diffusivity (of order $v_A^2/\omega_{ci}$, where $\omega_{ci}$ is the ion cyclotron frequency); however, its effect, similar to the anomalous resistivity, is still too small compared with the observations.
Stochastic reconnection
In stochastic reconnection, the magnetic field has a small-scale random component arising from turbulence. For the turbulent flow in the reconnection region, a model for magnetohydrodynamic turbulence should be used, such as the model developed by Goldreich and Sridhar in 1995. This stochastic model is independent of small-scale physics such as resistive effects and depends only on turbulent effects. Roughly speaking, in the stochastic model, turbulence brings initially distant magnetic field lines to small separations where they can reconnect locally (Sweet-Parker type reconnection) and separate again due to turbulent super-linear diffusion (Richardson diffusion). For a current sheet of length $L$, the upper limit for the reconnection velocity is given by

$v \sim v_{\text{turb}} \min\left[\left(\frac{L}{l}\right)^{1/2}, \left(\frac{l}{L}\right)^{1/2}\right]$

where $v_{\text{turb}} = v_l^2/v_A$. Here $l$ and $v_l$ are the turbulence injection length scale and velocity, respectively, and $v_A$ is the Alfvén velocity. This model has been successfully tested by numerical simulations.
Non-MHD process: Collisionless reconnection
On length scales shorter than the ion inertial length $c/\omega_{pi}$ (where $\omega_{pi}$ is the ion plasma frequency), ions decouple from electrons and the magnetic field becomes frozen into the electron fluid rather than the bulk plasma. On these scales, the Hall effect becomes important. Two-fluid simulations show the formation of an X-point geometry rather than the double Y-point geometry characteristic of resistive reconnection. The electrons are then accelerated to very high speeds by whistler waves. Because the ions can move through a wider "bottleneck" near the current layer and because the electrons are moving much faster in Hall MHD than in standard MHD, reconnection may proceed more quickly. Two-fluid/collisionless reconnection is particularly important in the Earth's magnetosphere.
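For a sense of scale (an illustrative calculation using standard formulas, not figures from this article), the proton inertial length is

$d_i = \frac{c}{\omega_{pi}}, \qquad \omega_{pi} = \sqrt{\frac{n e^2}{\varepsilon_0 m_i}}$

which evaluates to $d_i \approx 228\,\text{km}/\sqrt{n\,[\text{cm}^{-3}]}$. For magnetotail densities of order $0.1$ to $1\,\text{cm}^{-3}$ this gives a few hundred kilometres, tiny compared with the global scale of the magnetosphere, which is why closely spaced multi-spacecraft measurements are needed to resolve the ion diffusion region.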
Observations
Solar atmosphere
Magnetic reconnection occurs during solar flares, coronal mass ejections, and many other events in the solar atmosphere. The observational evidence for solar flares includes observations of inflows/outflows, downflowing loops, and changes in the magnetic topology. In the past, observations of the solar atmosphere were done using remote imaging; consequently, the magnetic fields were inferred or extrapolated rather than observed directly. However, the first direct observations of solar magnetic reconnection were gathered in 2012 (and released in 2013) by the High Resolution Coronal Imager.
Earth's magnetosphere
Magnetic reconnection events that occur in the Earth's magnetosphere (in the dayside magnetopause and in the magnetotail) were for many years inferred because they uniquely explained many aspects of the large-scale behaviour of the magnetosphere and its dependence on the orientation of the near-Earth Interplanetary magnetic field. Subsequently, spacecraft such as Cluster II and the Magnetospheric Multiscale Mission have made observations of sufficient resolution and in multiple locations to observe the process directly and in-situ. Cluster II is a four-spacecraft mission, with the four spacecraft arranged in a tetrahedron to separate the spatial and temporal changes as the suite flies through space. It has observed numerous reconnection events in which the Earth's magnetic field reconnects with that of the Sun (i.e. the Interplanetary Magnetic Field). These include 'reverse reconnection' that causes sunward convection in the Earth's ionosphere near the polar cusps; 'dayside reconnection', which allows the transmission of particles and energy into the Earth's vicinity; and 'tail reconnection', which causes auroral substorms by injecting particles deep into the magnetosphere and releasing the energy stored in the Earth's magnetotail. The Magnetospheric Multiscale Mission, launched on 13 March 2015, improved the spatial and temporal resolution of the Cluster II results by having a tighter constellation of spacecraft. This led to a better understanding of the behavior of the electrical currents in the electron diffusion region.
On 26 February 2008, THEMIS probes were able to determine the triggering event for the onset of magnetospheric substorms. Two of the five probes, positioned approximately one third the distance to the Moon, measured events suggesting a magnetic reconnection event 96 seconds prior to auroral intensification. Dr. Vassilis Angelopoulos of the University of California, Los Angeles, who is the principal investigator for the THEMIS mission, claimed, "Our data show clearly and for the first time that magnetic reconnection is the trigger."
Laboratory plasma experiments
Magnetic reconnection has also been observed in numerous laboratory experiments. For example, studies on the Large Plasma Device (LAPD) at UCLA have observed and mapped quasi-separatrix layers near the magnetic reconnection region of a two flux rope system, while experiments on the Magnetic Reconnection Experiment (MRX) at the Princeton Plasma Physics Laboratory (PPPL) have confirmed many aspects of magnetic reconnection, including the Sweet–Parker model in regimes where the model is applicable. Analysis of the physics of helicity injection, used to create the initial plasma current in the NSTX spherical tokamak, led Dr. Fatima Ebrahimi to propose a plasma thruster that uses fast magnetic reconnection to accelerate plasma to produce thrust for space propulsion.
Sawtooth oscillations are periodic mixing events occurring in the tokamak plasma core. The Kadomtsev model describes sawtooth oscillations as a consequence of magnetic reconnection due to displacement of the central region with safety factor $q < 1$ caused by the internal $(m = 1, n = 1)$ kink mode.
See also
Current sheet
Solar corona
Magnetic switchback
References
Further reading
Eric Priest, Terry Forbes, Magnetic Reconnection, Cambridge University Press 2000; contents and sample chapter online
Discoveries about magnetic reconnection in space could unlock fusion power, Space.com, 6 February 2008
Nasa MMS-SMART mission, The Magnetospheric Multiscale (MMS) mission, Solving Magnetospheric Acceleration, Reconnection, and Turbulence. Due for launch in 2014.
Cluster spacecraft science results
External links
Magnetism on the Sun
Magnetic Reconnection Experiment (MRX)
Plasma phenomena
Stellar phenomena
Solar phenomena
Articles containing video clips | Magnetic reconnection | Physics | 4,732 |
2,278,435 | https://en.wikipedia.org/wiki/Valnoctamide | Valnoctamide (INN, USAN) has been used in France as a sedative-hypnotic since 1964. It is a structural isomer of valpromide, a valproic acid prodrug; unlike valpromide, however, valnoctamide is not transformed into its homologous acid, valnoctic acid, in vivo.
Indications
In addition to being a sedative, valnoctamide has been investigated for use in epilepsy.
It was studied for neuropathic pain in 2005 by Winkler et al., with good results: it had minimal effects on motor coordination and alertness at effective doses, and appeared to be equally effective as gabapentin.
RH Belmaker, Yuly Bersudsky and Alex Mishory started a clinical trial of valnoctamide for prophylaxis of mania in lieu of the much more teratogenic valproic acid or its salts.
Side effects
The side effects of valnoctamide are mostly minor and include somnolence and the slight motor impairments mentioned above.
Interactions
Valnoctamide is known to increase the serum levels of carbamazepine-10,11-epoxide, the active metabolite of carbamazepine, sometimes to toxic levels, through inhibition of epoxide hydrolase.
Chemistry
Valnoctamide is a racemic compound with four stereoisomers, all of which were shown to be more effective than valproic acid in animal models of epilepsy and one of which ((2S,3S)-valnoctamide) was considered to be a good candidate by Isoherranen et al. for an anticonvulsant in August 2003.
Butabarbital can be hydrolyzed to valnoctamide.
References
Carboxamides
Anticonvulsants
GABA analogues
Mood stabilizers
GABA transaminase inhibitors
Histone deacetylase inhibitors
Prodrugs | Valnoctamide | Chemistry | 404 |
366,007 | https://en.wikipedia.org/wiki/Rank%20%28computer%20programming%29 | In computer programming, rank with no further specifications is usually a synonym for (or refers to) "number of dimensions"; thus, a two-dimensional array has rank two, a three-dimensional array has rank three and so on.
Strictly, no formal definition can be provided which applies to every programming language, since each of them has its own concepts, semantics and terminology; the term may not even be applicable or, to the contrary, applied with a very specific meaning in the context of a given language.
In the case of APL the notion applies to every operand, and dyads ("binary functions") have a left rank and a right rank.
The box below instead shows how rank of a type and rank of an array expression could be defined (in a semi-formal style) for C++ and illustrates a simple way to calculate them at compile time.
#include <type_traits>
#include <cstddef>
/* Rank of a type
* -------------
*
* Let the rank of a type T be the number of its dimensions if
* it is an array; zero otherwise (which is the usual convention)
*/
template <typename T> struct rank
{
static const std::size_t value = 0;
};
template<typename T, std::size_t N>
struct rank<T[N]>
{
static const std::size_t value = 1 + rank<T>::value;
};
template <typename T>
constexpr auto rank_v = rank<T>::value;
/* Rank of an expression
*
* Let the rank of an expression be the rank of its type
*/
template <typename T>
using unqualified_t = std::remove_cv_t<std::remove_reference_t<T>>;
template <typename T>
auto rankof(T&& expr)
{
return rank_v<unqualified_t<T>>;
}
Given the code above the rank of a type T can be calculated at compile time by
rank<T>::value
or the shorter form
rank_v<T>
Calculating the rank of an expression can be done using
rankof(expr)
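As a quick illustration (a hypothetical usage sketch, assuming the definitions in the box above are in scope; the sample array is arbitrary), the type-based forms can be verified at compile time, while rankof, which is not declared constexpr above, is evaluated at run time:

#include <iostream>

int main()
{
    int a[3][4][5]; // a three-dimensional array

    // Compile-time checks using the type-based queries
    static_assert(rank<decltype(a)>::value == 3, "a has rank 3");
    static_assert(rank_v<double> == 0, "non-array types have rank 0");

    // Expression-based query; prints 3
    std::cout << rankof(a) << '\n';
}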
See also
Rank (linear algebra), for a definition of rank as applied to matrices
Rank (J programming language), a concept of the same name in the J programming language
Arrays
Programming language topics | Rank (computer programming) | Engineering | 532 |
488,059 | https://en.wikipedia.org/wiki/Magnetic%20cartridge | A magnetic cartridge, more commonly called a phonograph cartridge or phono cartridge or (colloquially) a pickup, is an electromechanical transducer that is used to play phonograph records on a turntable.
The cartridge contains a removable or permanently mounted stylus, the tip - usually a gemstone, such as diamond or sapphire - of which makes physical contact with the record's groove. In popular usage and in disc jockey jargon, the stylus, and sometimes the entire cartridge, is often called the needle. As the stylus tracks the serrated groove, it vibrates a cantilever on which is mounted a permanent magnet which moves between the magnetic fields of sets of electromagnetic coils in the cartridge (or vice versa: the coils are mounted on the cantilever, and the magnets are in the cartridge). The shifting magnetic fields generate an electrical current in the coils. The electrical signal generated by the cartridge can be amplified and then converted into sound by a loudspeaker.
History
The first commercially successful type of electrical phonograph pickup was introduced in 1925. Although electromagnetic, its resemblance to later magnetic cartridges is remote: it employed a bulky horseshoe magnet and used the same single-use steel needles which had been standard since the first mechanical transfer disc record players appeared in the 1890s. This early type of magnetic pickup dominated the market well into the 1930s, but by the end of that decade it had been superseded by the comparatively lightweight piezoelectric crystal pickup type; however, the use of short-lived disposable metal needles remained standard. During the years immediately following World War II, as old record players with very heavy pickups were replaced, precision-ground and long-lasting stylus tips made of sapphire or the exotic hard metal osmium were increasingly popular. However, records made for home use were still made of the same abrasive shellac compound formulated to rapidly wear down the points of steel needles to fit the groove.
The introduction of the 33 rpm record LP "album" in 1948 and the 45 rpm record "single" in 1949 prompted consumers to upgrade to a new multi-speed record player with the required smaller-tipped "microgroove" stylus. Sapphire and diamond then became the standard stylus tip materials. At first, the new styli came installed in smaller, lighter piezoelectric crystal or ceramic cartridges of the general type found in inexpensive self-contained portable record players throughout the phonographic era. Ceramic cartridges continue to be used in most of the "retro" and compact record players currently being made, in part because they are comparatively robust and resistant to damage from careless handling, but mostly because they are inexpensive. However, during the 1950s, a new generation of small, lightweight, highly compliant magnetic cartridges appeared and quickly found favor among high-fidelity enthusiasts because of their audibly superior performance. The high compliance also reduced record wear. They soon became standard in all but the cheapest component audio systems and are the most common type of pickup cartridge in use today.
Design and construction
The cartridge consists of several components: the stylus, cantilever, magnets, coils and body. The stylus is the part that, when in use, is the interface with the record surface and tracks the modulations in the groove. It is typically made of a small polished diamond or other industrial gemstone. The cantilever supports the stylus and transmits the vibrations from it to the coil/magnet assembly; it is typically made of boron or aluminium (formerly also beryllium), although some manufacturers market models with exotic gemstone cantilevers. Most models of moving magnet cartridges have detachable stylus–cantilever sub-assemblies that allow for their replacement without the need to remove and replace the entire cartridge when the stylus has become worn.
Coupled to the tonearm, the cartridge body's function is to give the moving parts a stationary platform so that they can track the groove with precision.
Types
In high-fidelity systems, crystal and ceramic pickups have been replaced by the magnetic cartridge, using either a moving magnet or a moving coil.
Compared to the crystal and ceramic pickups, the magnetic cartridge usually gives improved playback fidelity and reduced record wear by tracking the groove with lighter pressure. Magnetic cartridges use lower tracking forces and thus reduce the potential for groove damage. They also have a lower output voltage than a crystal or ceramic pickup, in the range of only a few millivolts, thus requiring greater amplification.
Moving Magnet (MM) and Moving Iron (MI) cartridges
In a moving magnet cartridge, the stylus cantilever carries a tiny permanent magnet, which is positioned between two sets of fixed coils (in a stereophonic cartridge), forming a tiny electromagnetic generator. As the magnet vibrates in response to the stylus following the record groove, it induces a tiny current in the coils.
Because the magnet is small and has little mass, and is not coupled mechanically to the generator (as in a ceramic cartridge), a properly adjusted stylus follows the groove more faithfully while requiring less tracking force (the downward pressure on the stylus).
Moving iron and induced magnet types (ADC being a well-known example) have a moving piece of iron or other ferrous alloy coupled to the cantilever (instead of a magnet), while a permanent, bigger magnet is over the coils, providing the necessary magnetic flux.
Moving Coil (MC) cartridges
The MC design is again a tiny electromagnetic generator, but (unlike an MM design) with the magnet and coils reversed: the coils are attached to the cantilever, and move within the field of a permanent magnet. The coils are tiny and made from very fine wire.
Since the number of windings that can be supported in such an armature is small, the output voltage level is correspondingly small. The resulting signal is only a few hundred microvolts, and thus more easily swamped by noise, induced hum, etc. It is therefore more challenging to design a preamplifier with the extremely low noise inputs needed for a moving-coil cartridge, so a "step-up transformer" is sometimes used instead.
However, there are available many "high output" moving coil cartridges that have output levels similar to MM cartridges.
Moving coil cartridges are extremely small precision instruments and are therefore generally expensive, but are frequently preferred by audiophiles due to a subjectively better performance.
Moving Micro Cross (MMC) cartridges
The MMC design was invented and patented by Bang & Olufsen. The MMC cartridge is a variation of the Moving Iron (MI) design. Magnets and coils are stationary while a micro cross moves with the stylus, thereby varying the distances between the arms of the cross and the magnets. It is claimed that the MMC design allows for superior channel separation, since each channel's movements appear on a separate axis.
Moving Magnet vs. Moving Coil
Moving magnet cartridges are more commonly found at the 'lower-end' of the market, while the 'higher-end' tends to be dominated by moving coil designs. The debate as to whether MM or MC designs can ultimately produce the better sound is often heated and subjective. The distinction between the two is often blurred by cost and design considerations: for example, can an MC cartridge that requires an additional step-up stage outperform a well-made MM cartridge that needs simpler front-end amplification?
MC cartridges offer very low inductance and impedance, which means that the effects of capacitance (in the cable that goes from the cartridge to the preamp) are negligible, unlike MM cartridges, which comparatively sport very high inductance and impedance. In the latter, cable capacitance can negatively affect the flatness of frequency response and linearity of phase response. This would account for a potential sonic advantage to MC types.
It is generally believed that MC cartridges sport lower moving masses. However, quality MM cartridges are able to offer as low or lower moving mass than some MC cartridges. For example, the state-of-the-art Technics EPC-100CMK4, a moving magnet design, has an effective tip mass of 0.055 mg. Comparatively, the popular Denon DL-301 moving coil cartridge has an effective tip mass of 0.270 mg.
To discriminate cartridges by engine (MC vs MM) overlooks the fact that the stylus tip shape (conical vs elliptical vs advanced shapes), mounting (bonded vs nude), cantilever material (aluminum vs boron vs beryllium) and cantilever design (solid rod vs rolled tube) have a significant influence in the sound, and this may account for more variation of sound quality than the engine type used.
MM cartridges generally have output of 3-6 mV, compatible with the MM inputs of preamplifiers. MC cartridges come in two varieties, low output (usually < 1.0 mV) and high output (more than 1.5 mV); there are also some with very low output (0.3 mV or less). High output MC cartridges are a concession to compatibility with older preamp MM inputs; low output MC cartridges may generate excessive noise or have insufficient preamp gain to drive amplifiers to their rated output if used on MM inputs. Most solid state preamplifiers have separate high gain, low noise MC inputs to accommodate these. Cartridges with very low output need a separate pre-phono amplification stage before input to an MC or MM preamplifier stage.
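As a rough illustration of what these output levels imply for amplification (illustrative arithmetic; the 300 mV nominal line level is a common engineering figure, not one quoted in this article), the required voltage gain is

$G = 20 \log_{10}\left(\frac{V_{\text{line}}}{V_{\text{cartridge}}}\right)\ \text{dB}$

so a 5 mV MM cartridge needs roughly $20\log_{10}(300/5) \approx 36\,\text{dB}$, while a 0.3 mV MC cartridge needs $20\log_{10}(300/0.3) = 60\,\text{dB}$; the difference of some 20 dB is the extra gain that a dedicated MC input or step-up transformer must supply.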
"London Decca" Cartridges
The Decca phono cartridges were a unique design, with fixed magnets and coils. The stylus shaft was composed of the diamond tip, a short piece of soft iron, and an L-shaped cantilever made of non-magnetic steel. Since the iron was placed very close to the tip (within 1 mm), the motions of the tip could be tracked very accurately. Decca engineers called this "positive scanning". Vertical and lateral compliance was controlled by the shape and thickness of the cantilever. Decca cartridges had a reputation for being very musical; however early versions required more tracking force than competitive designs - making record wear a concern.
References
External links
Cartridge history from the book ‘Hi-Fi All-New 1958 Edition’
Understand Phono Cartridges from S.K. Pramanik, Audio R&D Engineer – Bang & Olufsen – Struer, Denmark, ‘Understand Phono Cartridges’, Audio Vol.63 (March, 1979).
Sensors
Electromagnetic components
Turntables
Audio transducers | Magnetic cartridge | Technology,Engineering | 2,177 |
240,863 | https://en.wikipedia.org/wiki/Psychological%20pricing | Psychological pricing (also price ending or charm pricing) is a pricing and marketing strategy based on the theory that certain prices have a psychological impact. In this pricing method, retail prices are often expressed as just-below numbers: numbers that are just a little less than a round number, e.g. $19.99 or £2.98. There is evidence that consumers tend to perceive just-below prices (also referred to as "odd prices") as being lower than they are, tending to round to the next lowest monetary unit. Thus, prices such as $1.99 may to some degree be associated with spending $1 rather than $2. The theory that drives this is that pricing practices such as this cause greater demand than if consumers were perfectly rational. Psychological pricing is one cause of price points.
Overview
According to a 1997 study published in the Marketing Bulletin, approximately 60% of prices in advertising material ended in the digit 9, 30% ended in the digit 5, 7% ended in the digit 0 and the remaining seven digits combined accounted for only slightly over 3% of prices evaluated. In the UK, before the withdrawal of the halfpenny coin in 1969, prices often ended in 11½d (elevenpence halfpenny: just under a shilling, which was 12d); another example (before 1961) was £1/19/11¾d (one pound, nineteen shillings, and elevenpence three farthings), which is one farthing under £2. This is still seen today in gasoline (petrol) pricing ending in 9⁄10 of the local currency's smallest denomination; for example, in the US the price of a gallon of gasoline almost always ends in US$0.009 (e.g. US$3.599).
In a traditional cash transaction, fractional pricing imposes tangible costs on the vendor (printing fractional prices), the cashier (producing awkward change) and the customer (stowing the change). These factors have become less relevant with the increased use of checks, credit and debit cards, and other forms of currency-free exchange; also, in some jurisdictions the addition of sales tax makes the advertised price irrelevant and the final digit of the real transaction price effectively random.
The psychological pricing theory is based on one or more of the following hypotheses:
Thomas and Morwitz (2005) coined the term left-digit effect and suggested that this bias is caused by the use of an anchoring heuristic in multi-digit comparisons.
Another rationale for just-below pricing is prospect theory. This theory holds that consumers facing uncertainty in decision making base the value of an alternative on gains or losses offered by the alternative relative to some reference point, rather than on final absolute states of wealth or welfare. The theory also incorporates evidence that small deviations from a reference point tend to be over-valued. So, based on prospect theory, pricing something only a few cents under a whole dollar could be beneficial to the seller. This theory works well because of how the reference point is established by the consumer. The reference point for something that is $19.98 would be $20. This leads the just-below price to be seen as involving a gain, thus making it feel like a better deal.
Consumers ignore the least significant digits rather than do the proper rounding. Even though the cents are seen and not totally ignored, they may subconsciously be partially ignored. Keith Coulter, Associate Professor of Marketing at the Graduate School of Management, Clark University, suggests that this effect may be enhanced when the cents are printed smaller (for example, $19⁹⁹).
Fractional prices suggest to consumers that goods are marked at the lowest possible price.
When items are listed in a way that is segregated into price bands (such as an online real estate search), price ending is used to keep an item in a lower band, to be seen by more potential purchasers.
The theory of psychological pricing is controversial. Some studies show that buyers, even young children, have a very sophisticated understanding of true cost and relative value and that, to the limits of the accuracy of the test, they behave rationally. Other researchers claim that this ignores the non-rational nature of the phenomenon and that acceptance of the theory requires belief in a subconscious level of thought processes, a belief that economic models tend to deny or ignore. Results from research using modern scanner data are mixed.
Now that many customers are used to just-below pricing, some restaurants and high-end retailers psychologically-price in even numbers in an attempt to reinforce their brand image of quality and sophistication.
Theories
Kaushik Basu used game theory in 1997 to argue that rational consumers value their own time and effort in calculation. Such consumers process the price from left to right and tend to mentally replace the last two digits of the price with an estimate of the mean "cent component" of all goods in the marketplace. In a sufficiently large marketplace, this implies that any individual seller can charge the largest possible "cent component" (99¢) without significantly affecting the average of cent components and without changing customer behavior. Ruffle and Shtudiner's (2006) laboratory test shows considerable support for Basu's 99-cent pricing equilibrium, particularly when other sellers' prices are observable.
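A small worked illustration of Basu's argument (a simplification for exposition, not taken from the cited papers): suppose consumers read only the dollar digits and mentally substitute the marketplace's average cent component, say c̄, for the cents they ignore. A good priced at $4.c is then perceived as costing 4 + c̄/100 dollars regardless of the seller's actual choice of c, so each seller maximizes revenue by setting c = 99; and when every seller reasons this way, the average c̄ itself becomes 99, making the perception self-confirming. This is the 99-cent equilibrium that Ruffle and Shtudiner's experiment tested.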
The introduction of the euro in 2002, with its various exchange rates, distorted existing nominal price patterns while at the same time retaining real prices. A Europe-wide study (el Sehity, Hoelzl and Kirchler, 2005) investigated consumer price digits before and after the euro introduction for price adjustments. The research showed a clear trend towards psychological pricing after the transition. Further, Benford's Law as a benchmark for the investigation of price digits was successfully introduced into the context of pricing. The importance of this benchmark for detecting irregularities in prices was demonstrated and with it a clear trend towards psychological pricing after the nominal shock of the euro introduction.
Another phenomenon noted by economists is that a price point for a product (such as $4.99) remains stable for a long period of time, with companies slowly reducing the quantity of product in the package until consumers begin to notice. At this time, the price will increase marginally (to $5.05) and then within an exceptionally short time will increase to the next price point ($5.99, for example).
Several studies have shown that when prices are presented to a prospect in descending order (versus ascending order), positive effects for the seller result, mainly a willingness to pay a higher price, higher perceptions of value, and higher probability of purchase. The reason for this is that when presented in the former, the higher price serves as a reference point, and the lower prices are perceived favorably as a result.
In consumer behavior
Thomas and Morwitz (2005) suggested that this bias is a manifestation of the pervasive anchoring heuristic in multi-digit comparisons. (The anchoring heuristic is one of the heuristics identified by Nobel laureate Kahneman and his co-author Tversky.) Judgments of numerical differences are anchored on leftmost digits, causing a bias in relative magnitude judgments. This hypothesis suggests that people perceive the difference between 1.99 and 3.00 to be closer to 2 than to 1 because their judgments are anchored on the leftmost digit.
Stiving and Winer (1997) examined the left-digit effect using scanner panel models. They proposed that 9-ending prices can influence consumer behavior through two distinct processes: image effects and level effects. Image effect suggests that 99-ending prices are associated with images of sales promotions. Level effect captures the magnitude underestimation caused by anchoring on the leftmost digits of prices. Their results suggest that both of these effects account for the influence of 9-ending prices in grocery stores. Manning and Sprott (2009) demonstrated that left-digit anchoring can influence consumer choices using experimental studies.
Choi, Lee, and Ji (2012) examined the interactive effects of 9-ending prices and message framing in advertisements. The researchers found that when pairing nine-ending prices with positive messages, advertisements were much more positively received by consumers. This in turn increased their likelihood of making a purchase decision.
In financial markets
Left-digit effect has also been shown to influence stock-market transactions. Bhattacharya, Holden, and Jacobsen (2011) examined the left-digit effect in stock market transactions. They found that there was excess buying at just-below prices ($1.99) versus round numbers ($2.00) right above them. This discrepancy in buy-sell can lead to significant changes in 24-hour returns that can meaningfully impact markets.
In public policy
Research has also found psychological pricing relevant to the study of politics and public policy. For instance, a study of Danish municipal income taxes found evidence of "odd taxation" as tax rates with a nine-ending were found to be over-represented compared to other ending digits. Further, it was found that citizens' evaluations of public-school districts in a Danish population changed noticeably based on the leftmost digit. In particular, the researchers looked at minuscule changes in average grades that shifted the leftmost digit. Once this value changed, citizens responded more drastically and as such their stance in terms of public policy on the issue changed.
MacKillop et al. (2014) looked at how the left-digit effect affects the relationship between price hikes and smoking cessation. There was a very clearly demonstrated inverse relationship between the price of cigarettes and individual's motivation to smoke. Researchers found that price hikes that impacted the leftmost digit in the price (i.e. $4.99 vs. $5.00) were particularly effective in causing change among adult smokers. These findings can be utilized by public policy researchers and legislators to implement more effective cigarette tax policies.
Regulation
According to Davidovich-Weisberg (2013), in Israel several high-profile regulatory commissions have joined to ban retailers from charging prices ending in 99. These regulatory bodies have claimed that this was an attempt to make prices look less expensive to customers. In addition, due to the phasing out of certain denominations of coins in Israel, these quirky prices also made little practical sense in terms of everyday shopping.
Historical comments
Exactly how psychological pricing came into common use is not clear, though it is known the practice arose during the late 19th century. Scot Morris' 1979 Book of Strange Facts & Useless Information speculated that it originated when Melville E. Stone founded the Chicago Daily News in 1875 and priced it at one cent to compete with the nickel papers of the day; however, Cecil Adams has directly addressed Morris' claims, noting that Stone sold the News in 1876, and also that the News archives indicate that "prices ending in 9 (39 cents, 69 cents, etc.) were rare until well into the 1880s and weren't all that common then. The practice didn't really become widespread until the 1920s, and even then prices as often as not ended in .95, not .99."
Others have suggested that fractional pricing was first adopted as a control on employee theft. For cash transactions with a round price, there is a chance that a dishonest cashier will pocket the bill rather than record the sale. For cash transactions with a just-below price, the cashier must nearly always make change for the customer. This generally means opening the cash register which creates a record of the sale in the register and reduces the risk of the cashier stealing from the store owner.
Since the sale is registered in the process of returning change, according to Bill Bryson, odd pricing came about because charging odd amounts like 49 and 99 cents (or 45 and 95 cents when nickels were more used than pennies) meant the cashier very probably had to open the till for the penny change and thus announce the sale.
In the former Czechoslovakia, people called this pricing "baťovská cena" ("Baťa's price"), referring to Tomáš Baťa, a Czech manufacturer of footwear. He began to widely use this practice in 1920.
Price ending has also been used by retailers to highlight sale or clearance items for administrative purposes. A retailer might end all regular prices in 95 and all sale prices in 50. This makes it easy for a buyer to identify which items are discounted when looking at a report.
In its 2005 United Kingdom general election manifesto, the Official Monster Raving Loony Party proposed the introduction of a 99-pence coin to "save on change".
A recent trend in some monetary systems as inflation gradually reduces the value of money is to eliminate the smallest denomination coin (typically 0.01 of the local currency). The total cost of purchased items is then rounded up or down to, for example, the nearest 0.05. This may have an effect on future just-below pricing, especially at small retail outlets where single-item purchases are more common, encouraging vendors to price with .98 and .99 endings, which are rounded up when .05 is the smallest denomination, while .96 and .97 are rounded down. An example of this practice is in Australia, where 5 cents has been the smallest denomination coin since 1992, but pricing at .98 or .99 on items under several hundred dollars is still almost universally applied (e.g.: $1.99–299.99), while goods on sale are often priced at .94 and its variations. Finland and the Netherlands were the first two countries using the euro currency to eliminate the 1- and 2-cent coins.
See also
Infomercial
Pricing
Price point
Marketing mix
Mental accounting
Microeconomics
Numerical cognition
Take a penny, leave a penny
References
Citations
General and cited references
External links
The Left-Digit Effect in Price Cognition
Heuristics in Numerical Cognition
99-cent pricing hooks shoppers
Nine-ending Price and Consumer Behavior: An Evaluation in a New Context (PDF)
"Get smart for just $9.99"
"Perfect Pricing: The psychology of online prices"
Business intelligence terms
Cognitive biases
Consumer behaviour
Pricing | Psychological pricing | Biology | 2,872 |
60,841,716 | https://en.wikipedia.org/wiki/APBS%20%28software%29 | APBS (previously also Advanced Poisson-Boltzmann Solver) is a free and open-source software for solving the equations of continuum electrostatics intended primarily for the large biomolecular systems. It is available under the BSD license.
PDB2PQR prepares protein structure files from the Protein Data Bank for use with APBS. The preparation steps include, but are not limited to, adding missing heavy atoms to the structures and assigning charges from a number of force fields. The output file format is PQR, from which the software takes its name.
References
External links
Official documentation
APBS, PDB2PQR, and related software - GitHub
Molecular modelling software
Free and open-source software
Free software programmed in Python
Free software programmed in C++
Free software programmed in C | APBS (software) | Chemistry | 173 |
3,452,765 | https://en.wikipedia.org/wiki/ViroPharma | ViroPharma Incorporated was a pharmaceutical company that developed and sold drugs that addressed serious diseases treated by physician specialists and in hospital settings. The company focused on product development activities on viruses and human disease, including those caused by cytomegalovirus (CMV) and hepatitis C virus (HCV) infections. It was purchased by Shire in 2013, with Shire paying around $4.2 billion for the company in a deal that was finalized in January 2014. ViroPharma was a member of the NASDAQ Biotechnology Index and the S&P 600.
The company had strategic relationships with GlaxoSmithKline, Schering-Plough, and Sanofi-Aventis. ViroPharma acquired Lev Pharmaceuticals in a merger in 2008.
History
ViroPharma Incorporated was founded in 1994 by Claude H. Nash (Chief Executive Officer), Mark A. McKinlay (Vice President, Research & Development), Marc S. Collett (Vice President, Discovery Research), Johanna A. Griffin (Vice President, Business Development), and Guy D. Diana (Vice President, Chemistry Research.) None of the founders are still with the company.
In November 2013, Shire plc agreed to acquire ViroPharma for $4.2 billion; the acquisition was completed in January 2014.
Products
Marketed products
Vancocin Pulvules HCl: licensed from Eli Lilly in 2004. Oral Vancocin is an antibiotic for treatment of staphylococcal enterocolitis and antibiotic associated pseudomembranous colitis caused by Clostridioides difficile.
Pipeline
Maribavir is an oral antiviral drug candidate licensed from GlaxoSmithKline in 2003 for the prevention and treatment of human cytomegalovirus disease in hematopoietic stem cell/bone marrow transplant patients. In February 2006, ViroPharma announced that the United States Food and Drug Administration (FDA) had granted the company fast track status for maribavir.
In March 2006, the company announced that a Phase II study with maribavir demonstrated that prophylaxis with maribavir displays strong antiviral activity, as measured by statistically significant reduction in the rate of reactivation of CMV in recipients of hematopoietic stem cell/bone marrow transplants. In an intent-to-treat analysis of the first 100 days after the transplant, the number of subjects who required pre-emptive anti-CMV therapy was statistically significantly reduced (p-value = 0.051 to 0.001) in each of the maribavir groups compared to the placebo group (57% for placebo vs. 15%, 30%, and 15% for maribavir 100 mg twice daily, 400 mg daily, and 400 mg twice daily, respectively).
ViroPharma conducted a Phase III clinical study to evaluate the prophylactic use for the prevention of cytomegalovirus disease in recipients of allogeneic stem cell transplant patients. In February 2009, ViroPharma announced that the Phase III study failed to achieve its goal, showing no significant difference between maribavir and a placebo in reducing the rate of CMV disease.
Failed products
Oral pleconaril was ViroPharma's first compound, licensed from Sanofi in 1995. Pleconaril is active against viruses in the picornavirus family. ViroPharma's first indication was for enteroviral meningitis, but that indication was abandoned when the clinical trials did not demonstrate efficacy.
In 2001, ViroPharma submitted a New Drug Application of pleconaril to the FDA for the common cold. On 2002-03-19, the FDA Antiviral Advisory Committee recommended that the company had failed to show adequate safety, and the FDA subsequently issued a not-approvable letter.
In November 2004, ViroPharma licensed pleconaril to Schering-Plough, who are developing an intranasal formulation for the common cold and asthma exacerbations. (Schering-Plough Development Pipeline). In August 2006, Schering-Plough started a Phase II clinical trial.
References
Further references
The Long Road Ahead for ViroPharma Motley Fool 29 September 2006
External links
viropharma.com
Companies formerly listed on the Nasdaq
Pharmaceutical companies established in 1994
Biotechnology companies established in 1994
Biotechnology companies of the United States
Defunct pharmaceutical companies of the United States
Health care companies based in Pennsylvania
Defunct companies based in Pennsylvania
Companies based in Chester County, Pennsylvania
Life sciences industry
American companies established in 1994
1994 establishments in Pennsylvania
Pharmaceutical companies disestablished in 2014
2014 disestablishments in Pennsylvania
2014 mergers and acquisitions | ViroPharma | Biology | 963 |
56,227,208 | https://en.wikipedia.org/wiki/Haploporus%20latisporus | Haploporus latisporus is a species of poroid white rot crust fungus in the family Polyporaceae. It is found in Central China, where it grows on decomposing pine twigs.
Taxonomy
The type collection of Haploporus latisporus was made in Jigongshan (Henan Province) in August 2005, and described as a new species in 2007. The specific epithet latisporus refers to its wide spores, which, at up to 10 μm, are the broadest in genus Haploporus.
Description
Fruit bodies of Haploporus latisporus occur in small crust-like patches that are difficult to separate from the underlying substrate. Individually, they measure up to long and wide, and up to 1 mm thick at the centre. The hymenophore, or pore surface, is white to cream coloured, darkening slightly in dry conditions. The round to angular pores number around two to three per millimetre.
The hyphal structure is dimitic to trimitic. The generative hyphae have clamp connections. The thick-walled, cylindrical spores are ornamented with warts (up to 2 μm long) and typically measure 13–16.5 by 8–10 μm.
References
Fungi described in 2007
Fungi of China
Polyporaceae
Taxa named by Yu-Cheng Dai
Fungus species | Haploporus latisporus | Biology | 280 |
52,926,101 | https://en.wikipedia.org/wiki/G-10%20%28material%29 | G-10 or garolite is a high-pressure fiberglass laminate, a type of composite material. It is created by stacking multiple layers of glass cloth, soaked in epoxy resin, then compressing the resulting material under heat until the epoxy cures. It is manufactured in flat sheets, most often a few millimeters thick.
G-10 is very similar to Micarta and carbon fiber laminates, except that glass cloth is used as filler material. (Note that the professional nomenclature of "filler" and "matrix" in composite materials may be somewhat counterintuitive when applied to soaking textiles with resin.)
G-10 is the toughest of the glass fiber resin laminates and therefore the most commonly used.
Properties
G-10 is favored for its high strength, low moisture absorption, and high level of electrical insulation and chemical resistance. These properties are maintained not only at room temperature but also under humid or moist conditions. It was first used as a substrate for printed circuit boards, and its designation, G-10, comes from a National Electrical Manufacturers Association standard for this purpose.
Decorative uses
Decorative variations of G-10 are produced in many colors and patterns and are especially used to make handles for knives, grips for firearms and other tools. These can be textured (for grip), bead blasted, sanded or polished. Its strength and low density make it useful for other kinds of handcrafting as well.
Structural uses
G-10 is used to reinforce the edges of fiberglass coated wood. It is used to protect the point-of-contact on many such items. During ordinary use it is the G-10 that takes the brunt of the blow. In such applications it is meant to be replaced as it wears. G-10 is also used as a 3D-Printer build surface.
G-10 is also commonly used as a material for durable knife and gun handles and grips.
Hazards
G-10 is safe to handle absent extreme conditions.
Hazards can result from cutting or grinding the material, as glass and epoxy dust are well known to contribute to respiratory disorders and may increase the risk of developing lung cancer. For any work of this kind, the work space should be appropriately ventilated and masks or respirators worn.
Epoxy resin is flammable and, once ignited, will burn vigorously, giving off poisonous gases. For this reason, materials such as FR-4 containing flame retardant additives have replaced G-10 in certain applications.
See also
Bakelite
References
Printed circuit board manufacturing
Fibre-reinforced polymers
Fiberglass | G-10 (material) | Chemistry,Materials_science,Engineering | 533 |
2,654,483 | https://en.wikipedia.org/wiki/Enterprise%20data%20management | Enterprise data management (EDM) is the ability of an organization to precisely define, easily integrate and effectively retrieve data for both internal applications and external communication. EDM focuses on the creation of accurate, consistent, and transparent content. EDM emphasizes data precision, granularity, and meaning and is concerned with how the content is integrated into business applications as well as how it is passed along from one business process to another.
EDM arose to address circumstances where users within organizations independently source, model, manage and store data. Uncoordinated approaches by various segments of the organization can result in data conflicts and quality inconsistencies, lowering the trustworthiness of the data as it is used for operations and reporting.
The goal of EDM is trust and confidence in data assets. Its components are:
Strategy and governance
EDM requires a strategic approach to choosing the right processes, technologies, and resources (i.e. data owners, governance, stewardship, data analysts, and data architects). EDM is a challenge for organizations because it requires alignment among multiple stakeholders (including IT, operations, finance, strategy, and end-users) and relates to an area (creation and use of common data) that has not traditionally had a clear “owner.”
The governance challenge can be a big obstacle to the implementation of an effective EDM because of the difficulties associated with providing a business case for the benefits of data management. The core of the challenge is that data quality has no intrinsic value: it is an enabler of other processes, and the true benefits of effective data management are systemic and intertwined with those processes. This makes it hard to quantify all the downstream implications or upstream improvements.
The difficulties associated with quantification of EDM benefits translate into challenges with the positioning of EDM as an organizational priority. Achieving organizational alignment on the importance of data management (as well as managing data as an ongoing area of focus) is the domain of governance. In recent years the establishment of an EDM and the EDM governance practice has become commonplace despite these difficulties.
Program implementation
Implementation of an EDM program encompasses many processes – all of which need to be coordinated throughout the organization and managed while maintaining operational continuity. Below are some of the major components of EDM implementation that should be given consideration:
Stakeholder requirements
EDM requires alignment among multiple stakeholders (at the right level of authority) who all need to understand and support the EDM objectives. EDM begins with a thorough understanding of the requirements of the end users and the organization as a whole. Managing stakeholder requirements is a critical, and ongoing, process based in an understanding of workflow, data dependencies and the tolerance of the organization for operational disruption. Many organizations use formal processes such as service level agreements to specify requirements and establish EDM program objectives.
Policies and procedures
Effective EDM usually includes the creation, documentation and enforcement of operating policies and procedures associated with change management, (i.e. data model, business glossary, master data shared domains, data cleansing and normalization), data stewardship, security constraints and dependency rules. In many cases, these policies and procedures are documented for the first time as part of the EDM initiative.
Data definitions and tagging
One of the core challenges associated with EDM is the ability to compare data that is obtained from multiple internal and external sources. In many circumstances, these sources use inconsistent terms and definitions to describe the data content itself – making it hard to compare data, hard to automate business processes, hard to feed complex applications and hard to exchange data. This frequently results in a difficult process of data mapping and cross-referencing. Normalization of all the terms and definitions at the data attribute level is referred to as the metadata component of EDM and is an essential prerequisite for effective data management.
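As a toy illustration of such mapping (the field names and glossary entries here are invented for the example, not drawn from any standard), normalizing inconsistent source terms onto a canonical attribute defined in a shared business glossary might look like the following Python sketch:

```python
# Hypothetical cross-reference table: source-specific field names
# mapped onto one canonical attribute name from a business glossary.
GLOSSARY = {
    "cust_nm": "customer_name",
    "CustomerName": "customer_name",
    "client": "customer_name",
    "txn_amt": "transaction_amount",
    "amount": "transaction_amount",
}

def normalize(record: dict) -> dict:
    """Re-key a source record onto canonical attribute names,
    leaving unrecognized fields unchanged."""
    return {GLOSSARY.get(key, key): value for key, value in record.items()}

print(normalize({"cust_nm": "Acme Corp", "txn_amt": 125.0}))
# -> {'customer_name': 'Acme Corp', 'transaction_amount': 125.0}
```

Real EDM metadata repositories are of course far richer, carrying definitions, data types, and lineage rather than a flat rename table, but the cross-referencing step reduces to this kind of lookup.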
Platform requirements
Even though EDM is fundamentally a data content challenge, there is a core technology dimension that must be addressed. Organizations need to have a functional storage platform, a comprehensive data model and a robust messaging infrastructure. They must be able to integrate data into applications and deal with the challenges of the existing (i.e. legacy) technology infrastructure. Building the platform or partnering with an established technology provider on how the data gets stored and integrated into business applications is an essential component of the EDM process.
Enterprise data management as an essential business requirement has emerged as a priority for many organizations. The objective is confidence and trust in data as the glue that holds business strategy together.
See also
Master data management
Master data
Web data integration
References
General
Enterprise Data Management Council http://www.edmcouncil.org
Issues in Enterprise Data Management: A Survey Report, 12/06
Data management
Product lifecycle management | Enterprise data management | Technology | 959 |
42,395,737 | https://en.wikipedia.org/wiki/Grape%20syrup | Grape syrup is a condiment made with concentrated grape juice. It is thick and sweet because of its high ratio of sugar to water. Grape syrup is made by boiling grapes, removing their skins, and squeezing them through a sieve to extract the juice. Like other fruit syrups, a common use of grape syrup is as a topping to sweet cakes, such as pancakes or waffles.
Names and etymology
The ancient Greek name for grape syrup is hepsema (ἕψημα), which translates to 'boiled'. The Greek name was used in Crete and, in modern times, in Cyprus.
Petimezi is the name for a type of Mediterranean grape syrup. The word comes from the Turkish pekmez, which usually refers to grape syrup, but is also used to refer to mulberry and other fruit syrups.
Vincotto (not to be confused with vino cotto) is the southern Italian term for grape syrup. It is made only from cooked wine grape must, with no fermentation involved. There is no alcohol or vinegar content, and no additives, preservatives or sweeteners are added. It is both a condiment and an ingredient used in either sweet or savory dishes.
History
Greco-Roman
One of the earliest mentions of grape syrup comes from the fifth-century BC Greek physician Hippocrates, who refers to hepsema, the Greek name for the condiment. The fifth-century BC Athenian playwright Aristophanes also makes a reference to it, as does the Roman-era Greek physician Galen.
Grape syrup was known by different names in Ancient Roman cuisine depending on the boiling procedure. Carenum, defrutum, and sapa were reductions of must. They were made by boiling down grape juice or must in large kettles until it had been reduced to two-thirds of the original volume, carenum; half the original volume, defrutum; or one-third, sapa. The Greek name for this variant of grape syrup was siraion (σίραιον).
The main culinary use of defrutum was to help preserve and sweeten wine, but it was also added to fruit and meat dishes as a sweetening and souring agent and even given to food animals such as ducks and suckling pigs to improve the taste of their flesh. Defrutum was mixed with garum to make the popular condiment oenogarum. Quince and melon were preserved in defrutum and honey through the winter, and some Roman women used defrutum or sapa as a cosmetic. Defrutum was often used as a food preservative in provisions for Roman troops.
There is some confusion as to the amount of reduction for defrutum and sapa. As James Grout explains in his Encyclopædia Romana, ancient authors report different reductions, as follows:The elder Cato, Columella, and Pliny all describe how unfermented grape juice (must) was boiled to concentrate its natural sugars. "A product of art, not of nature," the must was reduced to one half (defrutum) or even one third its volume (sapa) (Pliny, XIV.80), although the terms are not always consistent. Columella identifies sapa as "must of the sweetest possible flavour" that has been boiled down to a third of its volume (XXI.1). Isidore of Seville, writing in the seventh century AD, says that it is defrutum that has been reduced by a third but goes on to imagine that it is so called because it has been cheated or defrauded (Etymologies, XX.3.15). Varro reverses Pliny's proportions altogether (quoted in Nonius Marcellus, De Conpendiosa Doctrina, XVIII.551M). Defrutum is mentioned in almost all Roman books dealing with cooking or household management. Pliny the Elder recommended that it be boiled only at the time of the new moon, while Cato the Censor suggested that only the sweetest possible must should be used.
In ancient Rome, grape syrup was often boiled in lead pots, which sweetened the syrup through the leaching of the sweet-tasting chemical compound lead acetate into the syrup. This practice is thought to have caused lead poisoning in Romans consuming the syrup. A 2009 History Channel documentary produced a batch of historically accurate defrutum in lead-lined vessels and tested the liquid, finding a lead level of 29,000 parts per billion (ppb), 2,900 times higher than the contemporary American drinking-water limit of 10 ppb. These levels are easily high enough to cause either acute lead toxicity if the syrup was consumed in large amounts, or chronic lead poisoning when it was consumed in smaller quantities over a longer period of time (as it typically was).
However, the use of leaden cookware, though popular, was not the general standard; copper cookware was used far more widely, and no indication exists as to how often the syrup was added or in what quantity. Nor is there scholarly agreement on the circumstances and quantity of lead in these ancient Roman condiments. For instance, the original research was done by Jerome Nriagu, but it was criticized by John Scarborough, a pharmacologist and classicist, who characterized Nriagu's research as "so full of false evidence, miscitations, typographical errors, and a blatant flippancy regarding primary sources that the reader cannot trust the basic arguments."
Levant
Grape syrup has been used in the Levant since antiquity, as evidenced by a document from Nessana in the northern Negev, within modern Israel, that mentions grape syrup production. Sources describing the Muslim conquest of the Levant in 636 note that when Jews met with Rashidun caliph Umar, who camped in Jabiyah, southern Golan, they claimed that due to the harsh climate and plagues, they had to drink wine. Umar suggested honey instead, but they said it was not beneficial for them. As a compromise, Umar agreed they could make a dish from grape syrup without intoxicating effects. They boiled grape juice until two-thirds evaporated and presented it to Umar, who noted it reminded him of an ointment for camels. Botanist Zohar Amar estimates that this explains the winepresses from Mishnaic and Talmudic times found in the Mount Hermon area, which are similar to those used for grape syrup production in modern times.
Islamic law increased the prevalence of grape syrup in the region due to the prohibition of wine, a practice that was strictly enforced during the Mamluk period, when grape syrup became a common wine substitute among Muslims. Rabbi Joseph Tov Elem, who lived in Jerusalem around 1370, proposed that the honey mentioned in the Bible is actually grape syrup. Obadiah of Bertinoro also mentioned grape syrup among various types of honey sold in Jerusalem, and Meshullam of Volterra described it as "hard as a rock and very fine." Baalbek, in modern Lebanon, was particularly renowned for its dibs production, and Ibn Battuta detailed the production process, noting the use of a type of soil to harden the syrup so that it remained intact even if the container broke. In the 15th century, hashish users mixed it with dibs to mitigate its effects. Rabbis such as Nissim of Gerona and Obadish of Bertinoro discussed its kashrut. In the early Ottoman period, there was sometimes a special tax on raisins and dibs. In the 19th century, Hebron exported significant quantities of grape syrup to Egypt, as documented by Samson Bloch and Samuel David Luzzatto.
Islamic civilization
In early Islam, grape syrup was known in Arabic as tila. Early caliphs distributed tila to Muslim troops along with other foodstuffs, considering that it was no longer intoxicating. However, fermentation could resume in the amphorae, and in the late 710s, Caliph ‘Umar II prohibited drinking this beverage.
Modern
Cyprus
The ancient Greek name (now pronounced epsima in Cypriot Greek) is still used to refer to the condiment, which is still made in Cyprus.
Greece
Petimezi (πετιμέζι), known in English as grapemust or grape molasses, is grape must reduced until it becomes dark and syrupy. Petimezi keeps indefinitely. Its flavor is sweet with slightly bitter undertones. The syrup may be light or dark colored, depending on the grapes used. Before the wide availability of inexpensive cane sugar, petimezi was a common sweetener in Greek cooking, along with carob syrup and honey. Petimezi is still used today in desserts and as a sweet topping for some foods. Though it can be homemade, it is also sold commercially under different brand names.
Fruits and vegetables that have been candied by boiling in petimezi are called retseli.
From late August until the beginning of December, many Greek bakeries make and sell dark, crunchy and fragrant must cookies (moustokouloura).
A spiced cake is also made with petimezi.
Iran
In Iranian cuisine, grape syrup is used to sweeten ardeh (tahini), which is consumed at breakfast. An alternative is date syrup, which is also widely used in Middle Eastern cooking.
Italy
Sapa (from the Latin word sapa, with the same meaning), vincotto or vino cotto is commonly used in Italy, especially in the regions of Emilia Romagna, Marche, Calabria, and Sardinia, where it is considered a traditional flavor.
North Macedonia
In North Macedonia, a form of grape syrup known as grozdov madzun (Macedonian: Гроздов маџун) has been produced for centuries, commonly used as a sweetener but also as traditional medicine. It never contains any added sugar.
South Africa
In South Africa, grape syrup is known as moskonfyt.
Spain
Arrope is a form of grape concentrate typically produced in Spain. Often derived from grape varieties such as Pedro Ximénez, it is made by boiling unfermented grape juice until the volume is reduced by at least 50% and its viscosity is increased to that of a syrup. The final product is a thick liquid with cooked caramel flavours, and it is frequently used as an additive for dark, sweet wines such as sweet styles of sherry, Malaga, and Marsala.
Turkey
In Turkey, grape syrup is known as pekmez.
Levant
Grape syrup is known as dibs in the countries of the Levant (Palestine, Jordan, Lebanon, Israel and Syria). It is usually used as a sweetener and as part of desserts alongside carob syrup and bee honey. In areas of Palestine, it is also used to sweeten wine and eaten with leben and toasted nuts such as walnuts and almonds for breakfast. The syrup is made in Druze villages in the northern Golan Heights.
See also
Churchkhela, a sausage-shaped candy made from grape must, flour and nuts
Drakshasava, an Ayurvedic tonic made from grapes
Moustalevria
Must
Pekmez, a similar product in the Ottoman world
Pomegranate syrup
Vino cotto
List of fruit dishes
List of grape dishes
List of syrups
References
Further reading
Theodoros Varzakas, Athanasios Labropoulos, Stylianos Anestis, eds., Sweeteners: Nutritional Aspects, Applications, and Production Technology, 2012, , p. 201ff.
Harris, Andy Modern Greek: 170 Contemporary Recipes from the Mediterranean. Chronicle Books, 2002.
Ilaria G. Giacosa; A Taste of Ancient Rome; University of Chicago Press; (paperback, 1994)
Pliny the Elder; Natural History; tr. H. Rackham; Harvard University Press (Loeb Classical Library); (cloth, 1956)
Marcus Porcius Cato; On Agriculture ; Harvard University Press (Loeb Classical Library); (hardcover, 1979)
External links
James Grout, Lead Poisoning, part of the Encyclopædia Romana
Condiments
Syrup
Fruit juice
Grape juice
Greek cuisine
Lead poisoning
Oenology
Roman cuisine
Toxicology | Grape syrup | Environmental_science | 2,397 |
9,944,425 | https://en.wikipedia.org/wiki/Hamiltonian%20completion | The Hamiltonian completion problem is to find the minimal number of edges to add to a graph to make it Hamiltonian.
The problem is clearly NP-hard in the general case (since its solution gives an answer to the NP-complete problem of determining whether a given graph has a Hamiltonian cycle). The associated decision problem of determining whether K edges can be added to a given graph to produce a Hamiltonian graph is NP-complete.
Moreover, Hamiltonian completion is APX-hard, meaning that, unless P = NP, it is unlikely to admit a polynomial-time approximation scheme.
The problem may be solved in polynomial time for certain classes of graphs, including series–parallel graphs and their subgraphs, which include outerplanar graphs, as well as for the line graph of a tree or of a cactus graph.
Gamarnik et al. use a linear-time algorithm for solving the problem on trees to study the asymptotic number of edges that must be added to sparse random graphs to make them Hamiltonian.
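One way to see the tree case: a greedy depth-first search computes a minimum path cover of the tree, and the number of edges that must be added to complete a Hamiltonian path is one less than the number of paths in that cover (the cycle variant needs one edge more). The Python sketch below illustrates the idea; the function name and adjacency-list representation are illustrative, not taken from the cited literature.

```python
from collections import defaultdict

def path_cover_size(edges):
    """Size of a minimum path cover of a tree given as an edge list.

    Greedy DFS: a leaf opens a new path; an interior vertex either
    extends one open child path upward or joins two open child paths
    together, closing itself. The Hamiltonian path completion number
    of the tree is the returned count minus one.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    paths = 0

    def dfs(v, parent):
        nonlocal paths
        open_children = sum(dfs(c, v) for c in adj[v] if c != parent)
        if open_children == 0:
            paths += 1   # v starts a fresh path ...
            return True  # ... whose end stays open toward the parent
        if open_children == 1:
            return True  # extend the single open child path through v
        paths -= 1       # join two open child paths; v becomes interior
        return False     # any remaining open child paths simply end

    dfs(0, None)
    return paths

# A star K_{1,3} is covered by 2 disjoint paths, so adding 1 edge
# between two leaf endpoints yields a Hamiltonian path.
print(path_cover_size([(0, 1), (0, 2), (0, 3)]))  # -> 2
```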
References
NP-complete problems
Hamiltonian paths and cycles | Hamiltonian completion | Mathematics | 220 |
37,360,538 | https://en.wikipedia.org/wiki/Human%20interactions%20with%20insects | Human interactions with insects include both a wide variety of uses, whether practical such as for food, textiles, and dyestuffs, or symbolic, as in art, music, and literature, and negative interactions including damage to crops and extensive efforts to control insect pests.
Academically, the interaction of insects and society has been treated in part as cultural entomology, dealing mostly with "advanced" societies, and in part as ethnoentomology, dealing mostly with "primitive" societies, though the distinction is weak and not based on theory. Both academic disciplines explore the parallels, connections and influence of insects on human populations, and vice versa. They are rooted in anthropology and natural history, as well as entomology, the study of insects. Other cultural uses of insects, such as biomimicry, do not necessarily lie within these academic disciplines.
More generally, people make a wide range of uses of insects, both practical and symbolic. On the other hand, attitudes to insects are often negative, and extensive efforts are made to kill them. The widespread use of insecticides has failed to exterminate any insect pest, but has caused resistance to commonly-used chemicals in a thousand insect species.
Practical uses include as food, in medicine, for the valuable textile silk, for dyestuffs such as carmine, in science, where the fruit fly is an important model organism in genetics, and in warfare, where insects were successfully used in the Second World War to spread disease in enemy populations. One insect, the honey bee, provides honey, pollen, royal jelly, propolis and an anti-inflammatory peptide, melittin; its larvae too are eaten in some societies. Medical uses of insects include maggot therapy for wound debridement. Over a thousand protein families have been identified in the saliva of blood-feeding insects; these may provide useful drugs such as anticoagulants, vasodilators, antihistamines and anaesthetics.
Symbolic uses include roles in art, in music (with many songs featuring insects), in film, in literature, in religion, and in mythology. Insect costumes are used in theatrical productions and worn for parties and carnivals.
Cultural entomology and ethnoentomology
Ethnoentomology developed from the 19th century with early works by authors such as Alfred Russel Wallace (1852) and Henry Walter Bates (1862). Hans Zinsser's classic Rats, Lice and History (1935) showed that insects were an important force in human history. Writers like William Morton Wheeler, Maurice Maeterlinck, and Jean Henri Fabre described insect life and communicated their meaning to people "with imagination and brilliance". Frederick Simon Bodenheimer's Insects as Human Food (1951) drew attention to the scope and potential of entomophagy, and showed a positive aspect of insects. Food is the most studied topic in ethnoentomology, followed by medicine and beekeeping.
In 1968, Schimitschek claimed cultural entomology as a branch of insect studies, in a review of the roles insects played in folklore and culture including religion, food, medicine and the arts. In 1984, Charles Hogue covered the field in English, and from 1994 to 1997, Hogue's The Cultural Entomology Digest served as a forum on the field. Hogue argued that "Humans spend their intellectual energies in three basic areas of activity: surviving, using practical learning (the application of technology); seeking pure knowledge through inductive mental processes (science); and pursuing enlightenment to taste a pleasure by aesthetic exercises that may be referred to as the 'humanities.' Entomology has long been concerned with survival (economic entomology) and scientific study (academic entomology), but the branch of investigation that addresses the influence of insects (and other terrestrial Arthropoda, including arachnids and myriapods) in literature, language, music, the arts, interpretive history, religion, and recreation has only become recognized as a distinct field" through Schimitschek's work.
Hogue set out the boundaries of the field by saying: "The narrative history of the science of entomology is not part of cultural entomology, while the influence of insects on general history would be considered cultural entomology." He added: "Because the term "cultural" is narrowly defined, some aspects normally included in studies of human societies are excluded."
Darrell Addison Posey, noting that the boundary between cultural entomology and ethnoentomology is difficult to draw, cites Hogue as limiting cultural entomology to the influence of insects on "the essence of humanity as expressed in the arts and humanities". Posey notes further that cultural anthropology is usually restricted to the study of "advanced", industrialised, and literate societies, whereas ethnoentomology studies "the entomological concerns of 'primitive' or 'noncivilized' societies". Posey states at once that the division is artificial, complete with an unjustified us/them bias. Brian Morris similarly criticises the way that anthropologists treat non-Western attitudes to nature as monadic and spiritualist, and contrast this "in gnostic fashion" with a simplistic treatment of Western, often 17th-century, mechanistic attitude. Morris considers this "quite unhelpful, if not misleading", and offers instead his own research into the multiple ways that the people of Malawi relate to insects and other animals: "pragmatic, intellectual, realist, practical, aesthetic, symbolic and sacramental."
Benefits and costs
Insect ecosystem services
The 2005 Millennium Ecosystem Assessment (MEA) report defines ecosystem services as benefits people obtain from ecosystems, and distinguishes four categories, namely provisioning, regulating, supporting, and cultural. A fundamental tenet is that only a few species of arthropod are well understood for their influence on humans (such as honeybees, ants, mosquitoes, and spiders). However, insects offer ecological goods and services. The Xerces Society calculates the economic impact of four ecological services rendered by insects: pollination, recreation (i.e. "the importance of bugs to hunting, fishing, and wildlife observation, including bird-watching"), dung burial, and pest control. The value has been estimated at $153 billion worldwide. As the ant expert E. O. Wilson observed: "If all mankind were to disappear, the world would regenerate back to the rich state of equilibrium that existed ten thousand years ago. If insects were to vanish, the environment would collapse into chaos." A Nova segment on the American Public Broadcasting Service framed the relationship with insects in an urban context: "We humans like to think that we run the world. But even in the heart of our great cities, a rival superpower thrives ... These tiny creatures live all around us in vast numbers, though we hardly even notice them. But in many ways, it is they who really run the show." The Washington Post stated: "We are flying blind in many aspects of preserving the environment, and that's why we are so surprised when a species like the honeybee starts to crash, or an insect we don't want, the Asian tiger mosquito or the fire ant, appears in our midst. In other words: Start thinking about the bugs."
Pests and propaganda
Human attitudes toward insects are often negative, reinforced by sensationalism in the media. This has produced a society that attempts to eliminate insects from daily life. For example, nearly 75 million pounds of broad-spectrum insecticides are manufactured and sold each year for use in American homes and gardens. Annual revenues from insecticide sales to homeowners exceeded $450 million in 2004. Out of the roughly a million species of insects described so far, not more than 1,000 can be regarded as serious pests, and less than 10,000 (about 1%) are even occasional pests. Yet not one species of insect has been permanently eradicated through the use of pesticides. Instead, at least 1,000 species have developed field resistance to pesticides, and extensive harm has been done to beneficial insects including pollinators such as bees.
During the Cold War, the Warsaw Pact countries launched a widespread war against the potato beetle, blaming the introduction of the species from America on the CIA, demonising the species in propaganda posters, and urging children to gather the beetles and kill them.
Practical uses
As food
Entomophagy is the eating of insects. Many edible insects are considered a culinary delicacy in some societies around the world, and Frederick Simon Bodenheimer's Insects as Human Food (1951) drew attention to the scope and potential of entomophagy, but the practice is uncommon and even taboo in other societies. Sometimes insects are considered suitable only for the poor in the third world, but in 1975 Victor Meyer-Rochow suggested that insects could help ease global future food shortages and advocated a change in western attitudes towards cultures in which insects were appreciated as a food item. P. J. Gullan and P. S. Cranston felt that the remedy for this may be marketing of insect dishes as suitably exotic and costly to make them acceptable. They also note that some societies in sub-Saharan Africa prefer caterpillars to beef, while Chakravorty et al. (2011) point out that food insects (highly appreciated in North-East India) are more expensive than meat. The economics, i.e., the costs involved collecting food insects and the money earned through the sale of such insects, have been studied in a Laotian setting by Meyer-Rochow et al. (2008). In Mexico, ant larvae and corixid water boatman eggs are sought out as a form of caviar by gastronomes. In Guangdong, water beetles fetch a high enough price for these insects to be farmed. Especially high prices are fetched in Thailand for the giant water bug Lethocerus indicus.
Insects used in food include honey bee larvae and pupae, mopani worms, silkworms, Maguey worms, Witchetty grubs, crickets, grasshoppers and locusts. In Thailand, there are 20,000 farmers rearing crickets, producing some 7,500 tons per year.
In medicine
Insects have been used medicinally in cultures around the world, often according to the Doctrine of Signatures. Thus, the femurs of grasshoppers, which were said to resemble the human liver, were used to treat liver ailments by the indigenous peoples of Mexico. The doctrine was applied in both Traditional Chinese Medicine (TCM) and in Ayurveda. TCM uses arthropods for various purposes; for example, the centipede is used to treat tetanus, seizures, and convulsions, while the Chinese black mountain ant, Polyrhachis vicina, is used as a cure-all, especially by the elderly, and extracts have been examined as a possible anti-cancer agent. Ayurveda uses insects such as the termite for conditions such as ulcers, rheumatic diseases, anaemia, and pain. The larvae of the Jatropha leaf miner are boiled and used to induce lactation, reduce fever, and soothe the gastrointestinal tract. In contrast, the traditional insect medicine of Africa is local and unformalised. The indigenous peoples of Central America used a wide variety of insects medicinally. Mayans used army ant soldiers as living sutures. The venom of the red harvester ant was used to cure rheumatism, arthritis, and poliomyelitis via the immune reaction produced by its sting. Boiled silkworm pupae were taken to treat apoplexy, aphasia, bronchitis, pneumonia, convulsions, haemorrhages, and frequent urination.
Honey bee products are used medicinally in apitherapy across Asia, Europe, Africa, Australia, and the Americas, despite the fact that the honey bee was not introduced to the Americas until the colonization by Spain and Portugal. They are by far the most common medical insect product both historically and currently, and the most frequently referenced of these is honey. It can be applied to skin to treat excessive scar tissue, rashes, and burns, and as an eye poultice to treat infection. Honey is taken for digestive problems and as a general health restorative. It is taken hot to treat colds, cough, throat infections, laryngitis, tuberculosis, and lung diseases. Apitoxin (honey bee venom) is applied via direct stings to relieve arthritis, rheumatism, polyneuritis, and asthma. Propolis, a resinous, waxy mixture collected by honeybees and used as a hive insulator and sealant, is often consumed by menopausal women because of its high hormone content, and it is said to have antibiotic, anesthetic, and anti-inflammatory properties. Royal jelly is used to treat anaemia, gastrointestinal ulcers, arteriosclerosis, hypo- and hypertension, and inhibition of sexual libido. Finally bee bread, or bee pollen, is eaten as a generally health restorative, and is said to help treat both internal and external infections. One of the major peptides in bee venom, melittin, has the potential to treat inflammation in sufferers of rheumatoid arthritis and multiple sclerosis.
The rise of antibiotic resistant infections has sparked pharmaceutical research for new resources, including into arthropods.
Maggot therapy uses blowfly larvae to perform wound-cleaning debridement.
Cantharidin, the blister-causing oil found in several families of beetles described by the vague common name Spanish fly has been used as an aphrodisiac in some societies.
Blood-feeding insects like ticks, horseflies, and mosquitoes inject multiple bioactive compounds into their prey. These insects have long been used by practitioners of Eastern Medicine to prevent blood clot formation or thrombosis, suggesting possible applications in scientific medicine. Over 1280 protein families have been associated with the saliva of blood feeding organisms, including inhibitors of platelet aggregation, ADP, arachidonic acid, thrombin, PAF, anticoagulants, vasodilators, vasoconstrictors, antihistamines, sodium channel blockers, complement inhibitors, pore formers, inhibitors of angiogenesis, anaesthetics, AMPs and microbial pattern recognition molecules, and parasite enhancers/activators.
In science and technology
Insects play an important role in biological research. Because of its small size, short generation time and high fecundity, the common fruit fly Drosophila melanogaster was selected as a model organism for studies of the genetics of higher eukaryotes. D. melanogaster has been an essential part of studies into principles like genetic linkage, interactions between genes, chromosomal genetics, evolutionary developmental biology, animal behaviour and evolution. Because genetic systems are well conserved among eukaryotes, understanding basic cellular processes like DNA replication or transcription in fruit flies helps scientists to understand those processes in other eukaryotes, including humans. The genome of D. melanogaster was sequenced in 2000, reflecting the fruit fly's important role in biological research. 70% of the fly genome is similar to the human genome, supporting the Darwinian theory of evolution from a single origin of life.
Some hemipterans are used to produce dyestuffs such as carmine (also called cochineal). The scale insect Dactylopius coccus produces the brilliant red-coloured carminic acid to deter predators. Up to 100,000 scale insects are needed to make a kilogram (2.2 lbs) of cochineal dye.
A similarly enormous number of lac bugs are needed to make a kilogram of shellac, a brush-on colourant and wood finish. Additional uses of this traditional product include the waxing of citrus fruits to extend their shelf-life, and the coating of pills to moisture-proof them, provide slow-release or mask the taste of bitter ingredients.
Kermes is a red dye from the dried bodies of the females of a scale insect in the genus Kermes, primarily Kermes vermilio. Kermes are native to the Mediterranean region, living on the sap of the kermes oak. They were used as a red dye by the ancient Greeks and Romans. The kermes dye is a rich red, and has good colour fastness in silk and wool.
Insect attributes are sometimes mimicked in architecture, as at the Eastgate Centre, Harare, which uses passive cooling, storing heat in the morning and releasing it in the warm parts of the day. The target of this piece of biomimicry is the structure of the mounds of termites such as Macrotermes michaelseni which effectively cool the nests of these social insects. The properties of the Namib desert beetle's exoskeleton, in particular its wing-cases (elytra) which have bumps with hydrophilic (water-attracting) tips and hydrophobic (water-shedding) sides, have been mimicked in a film coating designed for the British Ministry of Defence, to capture water in arid regions.
In textiles
Silkworms, the caterpillars and pupae of the moth Bombyx mori, have been reared to produce silk in China from the Neolithic Yangshao period onwards, c. 5000 BC. Production spread to India by 140 AD. The caterpillars are fed on mulberry leaves. The cocoon, produced after the fourth moult, is covered with a continuous filament of the silk protein, fibroin, gummed together with sericin. In the traditional process, the gum is removed by soaking in hot water, and the silk is then unwound from the cocoon and reeled. Filaments are spun together to make silk thread. Commerce in silk between China and countries to its west began in ancient times, with silk known from an Egyptian mummy of 1070 BC, and later to the ancient Greeks and Romans. The Silk Road leading west from China was opened in the 2nd century AD, helping to drive trade in silk and other goods.
In warfare
The use of insects for warfare may have been attempted in the Middle Ages or earlier, but was first systematically researched by several nations during the 20th century. It was put into practice by the Japanese army's Unit 731 in attacks on China during the Second World War, killing almost 500,000 Chinese people with fleas infected with plague and flies infected with cholera. Also in the Second World War, the French and Germans explored the use of Colorado beetles to destroy enemy potato crops. During the Cold War, the US Army considered using yellow fever mosquitoes to attack Soviet cities.
Symbolic uses
In mythology and folklore
Insects have appeared in mythology around the world from ancient times. Among the insect groups featuring in myths are the bee, butterfly, cicada, fly, dragonfly, praying mantis and scarab beetle. Scarab beetles held religious and cultural symbolism in ancient Egypt, Greece and some shamanistic Old World cultures. The ancient Chinese regarded cicadas as symbols of rebirth or immortality. In the Homeric Hymn to Aphrodite, the goddess Aphrodite retells the legend of how Eos, the goddess of the dawn, requested Zeus to let her lover Tithonus live forever as an immortal. Zeus granted her request, but, because Eos forgot to ask him to also make Tithonus ageless, Tithonus never died, but he did grow old. Eventually, he became so tiny and shriveled that he turned into the first cicada.
In an ancient Sumerian poem, a fly helps the goddess Inanna when her husband Dumuzid is being chased by galla demons. Flies also appear on Old Babylonian seals as symbols of Nergal, the god of death and fly-shaped lapis lazuli beads were often worn by many different cultures in ancient Mesopotamia, along with other kinds of fly-jewellery. The Akkadian Epic of Gilgamesh contains allusions to dragonflies, signifying the impossibility of immortality.
Amongst the Arrernte people of Australia, honey ants and witchety grubs served as personal clan totems. In the case of the San bushmen of the Kalahari, it is the praying mantis which holds much cultural significance including creation and zen-like patience in waiting.
Insects feature in folklore around the world. In China, farmers traditionally regulated their crop planting according to the Awakening of the Insects, when temperature shifts and monsoon rains bring insects out of hibernation. Most "awakening" customs are related to eating snacks like pancakes, parched beans, pears, and fried corn, symbolizing harmful insects in the field.
In the Great Lakes region of the United States, there is an annual Woollybear Festival that has been celebrated for over 40 years. The larvae of the species Pyrrharctia isabella (commonly known as the isabella tiger moth), with their 13 distinct segments of black and reddish brown, have the reputation in common folklore of being able to forecast the coming winter weather.
There is a common misconception that cockroaches are serious vectors of disease, but while they can carry bacteria they do not travel far, and have no bite or sting. Their shells contain a protein, arylphorin, implicated in asthma and other respiratory conditions.
Among the deep-sea fishermen of Greenock in Scotland, there is a belief that if a fly falls into a glass from which a person has been drinking, or is about to drink, it is a sure omen of good luck to the drinker.
Many people believe the urban myth that the daddy longlegs (Opiliones) has the most venomous bite in the spider world, but that the fangs are too small to penetrate human skin. This is untrue on several counts. None of the known species of harvestmen have venom glands; their chelicerae are not hollowed fangs but grasping claws that are typically very small and definitely not strong enough to break human skin.
In Japan, the emergence of fireflies and rhinoceros beetles signify the anticipated changing of the seasons.
In religion
In the Brazilian Amazon, members of the Tupí–Guaraní language family have been observed using Pachycondyla commutata ants during female rite-of-passage ceremonies, and prescribing the sting of Pseudomyrmex spp. for fevers and headaches.
The red harvester ant Pogonomyrmex californicus has been widely used by natives of Southern California and Northern Mexico for hundreds of years in ceremonies conducted to help tribe members acquire spirit helpers through hallucination. During the ritual, young men are sent away from the tribe and consume large quantities of live, unmasticated ants under the supervision of an elderly member of the tribe. Ingestion of the ants was believed to induce a prolonged state of unconsciousness, in which dream helpers appear and serve as allies to the dreamer for the rest of his life.
In art
Both the symbolic form and the actual body of insects have been used to adorn humans in ancient and modern times. A recurrent theme for ancient cultures in Europe and the Near East regarded the sacred image of a bee or human with insect features. Often referred to as the bee "goddess", these images were found in gems and stones. An onyx gem from Knossos (ancient Crete) dating to approximately 1500 BC illustrates a Bee goddess with bull horns above her head. In this instance, the figure is surrounded by dogs with wings, most likely representing Hecate and Artemis – gods of the underworld, similar to the Egyptian gods Akeu and Anubis.
Beetlewing art is an ancient craft technique using iridescent beetle wings practiced traditionally in Thailand, Myanmar, India, China and Japan. Beetlewing pieces are used as an adornment to paintings, textiles and jewelry. Different species of metallic wood-boring beetle wings were used depending on the region, but traditionally the most valued were those from beetles belonging to the genus Sternocera. The practice comes from across Asia and Southeast Asia, especially Thailand, Myanmar, Japan, India and China. In Thailand beetlewings were preferred to decorate clothing (shawls and Sabai cloth) and jewellery in former court circles.
The Canadian entomologist Charles Howard Curran's 1945 book, Insects of the Pacific World, described women from India and Sri Lanka who kept 1.5-inch (38 mm) long, iridescent greenish coppery beetles of the species Chrysochroa ocellata as pets. These living jewels were worn on festive occasions, probably with a small chain attached to one leg anchored to the clothing to prevent escape. Afterwards, the insects were bathed, fed, and housed in decorative cages. Living jewelled beetles have also been worn and kept as pets in Mexico.
Butterflies have long inspired humans with their life cycle, color, and ornate patterns. The novelist Vladimir Nabokov was also a renowned butterfly expert. He published and illustrated many butterfly species, stating:
I discovered in nature the nonutilitarian delights that I sought in art. Both were a form of magic, both were games of intricate enchantment and deception.
It was the aesthetic complexity of insects that led Nabokov to reject natural selection.
The naturalist Ian MacRae writes of butterflies:
the animal is at once awkward, flimsy, strange, bouncy in flight, yet beautiful and immensely sympathetic; it is painfully transient, albeit capable of extreme migrations and transformations. Images and phrases such as "kaleidoscopic instabilities," "oxymoron of similarities," "rebellious rainbows," "visible darkness" and "souls of stone" have much in common. They bring together the two terms of a conceptual contradiction, thereby facilitating the mixing of what should be discrete and mutually exclusive categories ... In positing such questions, butterfly science, an inexhaustible, complex, and finely nuanced field, becomes not unlike the human imagination, or the field of literature itself. In the natural history of the animal, we begin to sense its literary and artistic possibilities.
The photographer Kjell Sandved spent 25 years documenting all 26 characters of the Latin alphabet using the wing patterns of butterflies and moths as The Butterfly Alphabet.
In 2011, the artist Anna Collette created over 10,000 individual ceramic insects at Nottingham Castle, "Stirring the Swarm". Reviews of the exhibit offered a compelling narrative for cultural entomology: "the unexpected use of materials, dark overtones, and the straightforward impact of thousands of tiny multiples within the space. The exhibition was at once both exquisitely beautiful and deeply repulsive, and this strange duality was fascinating."
In literature and film
The Ancient Greek playwright Aeschylus has a gadfly pursue and torment Io, a maiden associated with the moon, watched constantly by the eyes of the herdsman Argus, associated with all the stars: "Io: Ah! Hah! Again the prick, the stab of gadfly-sting! O earth, earth, hide, the hollow shape—Argus—that evil thing—the hundred-eyed." William Shakespeare, inspired by Aeschylus, has Tom o'Bedlam in King Lear, "Whom the foul fiend hath led through fire and through flame, through ford and whirlpool, o'er bog and quagmire", driven mad by the constant pursuit. In Antony and Cleopatra, Shakespeare similarly likens Cleopatra's hasty departure from the Actium battlefield to that of a cow chased by a gadfly.
H. G. Wells introduced giant wasps in his 1904 novel The Food of the Gods and How It Came to Earth, making use of the newly discovered growth hormones to lend plausibility to his science fiction.
Lafcadio Hearn's essay Butterflies analyses the treatment of the butterfly in Japanese literature, both prose and poetry. He notes that these often allude to Chinese tales, such as of the young woman that the butterflies took to be a flower. He translates 22 Japanese haiku poems about butterflies, including one by the haiku master Matsuo Bashō, said to suggest happiness in springtime: "Wake up! Wake up!—I will make thee my comrade, thou sleeping butterfly."
The novelist Vladimir Nabokov was the son of a professional lepidopterist, and was interested in butterflies himself. He wrote his novel Lolita while travelling on his annual butterfly-collection trips in the western United States. He eventually became a leading lepidopterist. This is reflected in his fiction, where for example The Gift devotes two whole chapters (of five) to the tale of a father and son on a butterfly expedition.
Horror films involving insects, sometimes called "big bug movies", include the pioneering 1954 Them!, featuring giant ants mutated by radiation, and the 1957 The Deadly Mantis.
The Far Side, a newspaper cartoon, has been used by professor Michael Burgett as a teaching tool in his entomology class; The Far Side and its author Gary Larson have been acknowledged by the biologist Dale H. Clayton and his colleagues for "the enormous contribution" Larson has made to their field through his cartoons.
In music
Some popular and influential pieces of music have had insects as their subjects. The French Renaissance composer Josquin des Prez wrote a frottola entitled El Grillo (The Cricket). It is among the most frequently sung of his works. Nikolai Rimsky-Korsakov wrote the "Flight of the Bumblebee" in 1899–1900 as part of his opera The Tale of Tsar Saltan. The piece is one of the most recognizable works in the classical repertoire. The bumblebee in the story is a prince who has been transformed into an insect so that he can fly off to visit his father. The play upon which the opera was based – written by Alexander Pushkin – originally had two more insect themes: the Flight of the Mosquito and the Flight of the Fly.
The Hungarian composer Béla Bartók explained in his diary that he was attempting to depict the desperate attempts to escape of a fly caught in a cobweb in his piece From the Diary of a Fly, for piano (Mikrokosmos Vol. 6/142).
The jazz musician and philosophy professor David Rothenberg plays duets with singing insects including cicadas, crickets, and beetles.
In astronomy and cosmology
In astronomy, constellations named after arthropods include the zodiacal Scorpius, the scorpion, and Musca, the fly, also known as Apis, the bee, in the deep southern sky. Musca, the only recognised insect constellation, was named by Petrus Plancius in 1597.
"The Bug Nebula", also called "The Butterfly Nebula", is a more recent discovery. Known as NGC 6302 is one of the brightest and most popular stars in the universe – popular in that its features draw the attention of a lot of researchers. It happens to be located in the Scorpius constellation. It is perfectly bipolar, and until recently, the central star was unobservable, clouded by gas, but estimated to be one of the hottest in the galaxy – 200,000 degrees Fahrenheit, perhaps 35 times hotter than the Sun.
The honey bee played a central role in the cosmology of the Mayan people. The stucco figure at the temples of Tulum known as "Ah Mucen Kab" – the Diving Bee God – bears resemblance to the insect in the Codex Tro-Cortesianus identified as a bee. Such reliefs might have indicated towns and villages that produce honey. Modern Mayan authorities say the figure also has a connection to modern cosmology. Mayan mythology expert Miguel Angel Vergara relates that the Mayans held a belief that bees came from Venus, the "Second Sun." The relief might be indicative of another "insect deity", that of Xux Ex, the Mayan "wasp star." The Maya embodied Venus in the form of the god Kukulkán (also known as or related to Gukumatz and Quetzalcoatl in other parts of Mexico); Quetzalcoatl is a Mesoamerican deity whose name in Nahuatl means "feathered serpent". The cult was the first Mesoamerican religion to transcend the old Classic Period linguistic and ethnic divisions. This cult facilitated communication and peaceful trade among peoples of many different social and ethnic backgrounds. Although the cult was originally centered on the ancient city of Chichén Itzá in the modern Mexican state of Yucatán, it spread as far as the Guatemalan highlands.
In costumes
Bee and other insect costumes are worn in a variety of countries for parties, carnivals and other celebrations.
Ovo is an insect-themed production by the world-renowned Canadian entertainment company Cirque du Soleil. The show looks at the world of insects and its biodiversity, where they go about their daily lives until a mysterious egg appears in their midst and the insects become awestruck by this iconic object, which represents the enigma and cycles of their lives. The costuming was a fusion of arthropod body types blended with superhero armour. Liz Vandal, the lead costume designer, has a special affinity for the world of the insect.
The Webby award-winning video series Green Porno was created to showcase the reproductive habits of insects. Jody Shapiro and Rick Gilbert were responsible for translating the research and concepts that Isabella Rossellini envisioned into the paper and paste costumes which directly contribute to the series' unique visual style. The film series was driven by the creation of costumes to translate scientific research into "something visual and how to make it comical."
See also
Arthropods in culture
Human uses of birds
Human uses of plants
Human interactions with insects in southern Africa
Insects in ethics
Insect collecting
References
Further reading
External links
The Cultural Entomology Digest
Ethnoentomology journal (in Czech)
Subfields of entomology
Ethnobiology
Cultural studies
Biology and culture
Insects in culture
Animals in culture | Human interactions with insects | Biology,Environmental_science | 7,069 |
75,384,564 | https://en.wikipedia.org/wiki/National%20Laboratory%20of%20Scientific%20Computation%20%28Brazil%29 | The National Laboratory of Scientific Computation or the National Laboratory for Scientific Computing (LNCC; Portuguese: Laboratório Nacional de Computação Científica) is a Brazilian institution for scientific research and technological development linked to the Ministry of Science, Technology and Innovation (MCTI), specialized in scientific computing. It was created in 1980 and since 1988 it has been headquartered in the city of Petrópolis, state of Rio de Janeiro, Brazil.
The institution is known for its participation in the arrival of the internet in Brazil in the 1980s, as a result of joint work with the Federal University of Rio de Janeiro (UFRJ). It is also known for the Santos Dumont supercomputer, the largest in Latin America.
The institution offers a postgraduate program (master's and doctoral degrees) in computational modeling. Some of the laboratory's lines of research focus on interdisciplinary areas such as biosystems, bioinformatics, computational biology, atmosphere and oceans, the environment, and multiscale science, among others. In addition to its academic role, it participates in processes related to meteorology and computational modeling, and its facilities are used by companies such as Petrobras.
References
1980 establishments in Brazil
Research institutes in Brazil
Science and technology in Brazil | National Laboratory of Scientific Computation (Brazil) | Technology | 259 |
988,114 | https://en.wikipedia.org/wiki/Data%20logger | A data logger (also datalogger or data recorder) is an electronic device that records data over time or about location either with a built-in instrument or sensor or via external instruments and sensors. Increasingly, but not entirely, they are based on a digital processor (or computer), and called digital data loggers (DDL). They generally are small, battery-powered, portable, and equipped with a microprocessor, internal memory for data storage, and sensors. Some data loggers interface with a personal computer and use software to activate the data logger and view and analyze the collected data, while others have a local interface device (keypad, LCD) and can be used as a stand-alone device.
Data loggers vary from general-purpose devices for various measurement applications to very specific devices for measuring in one environment or application type only. While it is common for general-purpose types to be programmable, many remain static machines with only a limited number or no changeable parameters. Electronic data loggers have replaced chart recorders in many applications.
One primary benefit of using data loggers is their ability to automatically collect data on a 24-hour basis. Upon activation, data loggers are typically deployed and left unattended to measure and record information for the duration of the monitoring period. This allows for a comprehensive, accurate picture of the environmental conditions being monitored, such as air temperature and relative humidity.
The cost of data loggers has been declining over the years as technology improves and costs are reduced. Simple single-channel data loggers can cost as little as $25, while more complicated loggers may cost hundreds or thousands of dollars.
Data formats
Standardization of protocols and data formats has historically been a problem, but it is now improving in the industry: XML, JSON, and YAML are increasingly being adopted for data exchange. The development of the Semantic Web and the Internet of Things is likely to accelerate this trend.
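As a toy illustration of such exchange (the field names below are invented; no standard schema is implied), a single logger reading serialized to JSON in Python might look like:

```python
import json
from datetime import datetime, timezone

# A hypothetical single-channel reading from a weather-station logger.
record = {
    "logger_id": "WS-0042",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "channel": "air_temperature",
    "value": 21.7,
    "unit": "degC",
}

payload = json.dumps(record)    # serialize for exchange
restored = json.loads(payload)  # any JSON-aware consumer can parse it
print(restored["value"], restored["unit"])
```

The same record could be rendered in XML or YAML; the point of the trend described above is that self-describing text formats make loggers from different vendors easier to integrate.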
Instrumentation protocols
Several protocols have been standardized including a smart protocol, SDI-12, that allows some instrumentation to be connected to a variety of data loggers. The use of this standard has not gained much acceptance outside the environmental industry. SDI-12 also supports multi-drop instruments. Some data logging companies support the MODBUS standard. This has been used traditionally in the industrial control area, and many industrial instruments support this communication standard. Another multi-drop protocol that is now starting to become more widely used is based upon CAN-Bus (ISO 11898). Some data loggers use a flexible scripting environment to adapt to various non-standard protocols.
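SDI-12 itself is a simple ASCII command set over a shared data line: a measurement is typically requested with the command aM! and the results retrieved with aD0!, where a is the sensor's single-character address. The following Python sketch, using the third-party pyserial package, shows the command flow only; the port name is an assumption, and a real SDI-12 bus also requires a wake-up break signal and half-duplex line drivers that a plain UART does not provide.

```python
import serial  # third-party pyserial package

# Assumed port name; SDI-12 uses 1200 baud, 7 data bits, even parity.
ser = serial.Serial("/dev/ttyUSB0", baudrate=1200,
                    bytesize=serial.SEVENBITS,
                    parity=serial.PARITY_EVEN,
                    stopbits=serial.STOPBITS_ONE,
                    timeout=2)

ser.write(b"0M!")      # ask the sensor at address 0 to take a measurement
print(ser.readline())  # e.g. b'00013\r\n': ready in 001 s, 3 values

ser.write(b"0D0!")     # request the measured values
print(ser.readline())  # e.g. b'0+21.7+1.013+45.2\r\n'
ser.close()
```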
Data logging versus data acquisition
The terms data logging and data acquisition are often used interchangeably. However, in a historical context, they are quite different. A data logger is a data acquisition system, but a data acquisition system is not necessarily a data logger.
Data loggers typically have slower sample rates. A maximum sample rate of 1 Hz may be considered to be very fast for a data logger, yet very slow for a typical data acquisition system.
Data loggers are implicitly stand-alone devices, while typical data acquisition systems must remain tethered to a computer to acquire data. This stand-alone aspect of data loggers implies onboard memory that is used to store acquired data. Sometimes this memory is very large to accommodate many days, or even months, of unattended recording. This memory may be battery-backed static random access memory, flash memory, or EEPROM. Earlier data loggers used magnetic tape, punched paper tape, or directly viewable records such as "strip chart recorders".
Given the extended recording times of data loggers, they typically feature a mechanism to record the date and time in a timestamp to ensure that each recorded data value is associated with a date and time of acquisition to produce a sequence of events. As such, data loggers typically employ built-in real-time clocks whose published drift can be an important consideration when choosing between data loggers.
Data loggers range from simple single-channel input to complex multi-channel instruments. Typically, the simpler the device the less programming flexibility. Some more sophisticated instruments allow for cross-channel computations and alarms based on predetermined conditions. The newest data loggers can serve web pages, allowing numerous people to monitor a system remotely.
The unattended and remote nature of many data logger applications implies the need for some applications to operate from a DC power source, such as a battery. Solar power may be used to supplement these power sources. These constraints have generally led manufacturers to ensure that the devices they market are extremely power efficient relative to computers. In many cases, they are required to operate in harsh environmental conditions where computers will not function reliably.
This unattended nature also dictates that data loggers must be extremely reliable. Since they may operate for long periods nonstop with little or no human supervision, and may be installed in harsh or remote locations, it is imperative that so long as they have power, they will not fail to log data for any reason. Manufacturers go to great lengths to ensure that the devices can be depended on in these applications. As such, data loggers are almost completely immune to the problems that might affect a general-purpose computer in the same application, such as program crashes and the instability of some operating systems.
Applications
Applications of data logging include:
Unattended weather station recording (such as wind speed / direction, temperature, relative humidity, solar radiation).
Unattended hydrographic recording (such as water level, water depth, water flow, water pH, water conductivity).
Unattended soil moisture level recording.
Unattended gas pressure recording.
Offshore buoys for recording a variety of environmental conditions.
Road traffic counting.
Measuring temperature (and humidity, etc.) of perishables during shipment: cold chain monitoring.
Measure variations in light intensity.
Measuring temperature of pharmaceutical products, medicines and vaccines during storage
Measuring temperature and humidity of perishable products during transportation to ensure cold chain is maintained
Process monitoring for maintenance and troubleshooting applications.
Process monitoring to verify warranty conditions
Wildlife research with pop-up archival tags
Measure vibration and handling shock (drop height) environment of distribution packaging.
Tank level monitoring.
Deformation monitoring of any object with geodetic or geotechnical sensors controlled by an automatic deformation monitoring system.
Environmental monitoring.
Vehicle testing (including crash testing)
Motor racing
Monitoring of relay status in railway signaling.
Science education, enabling measurement, scientific investigation, and an appreciation of change.
Record trend data at regular intervals in veterinary vital signs monitoring.
Load profile recording for energy consumption management.
Temperature, humidity and power use for heating and air conditioning efficiency studies.
Water level monitoring for groundwater studies.
Digital electronic bus sniffer for debug and validation
Examples
Black-box (stimulus/response) loggers:
A flight data recorder (FDR) is a piece of recording equipment used to collect specific aircraft performance data. The term may also be used, albeit less accurately, to describe the cockpit voice recorder (CVR), another type of data recording device found on board aircraft.
An event data recorder (EDR) is a device installed by the manufacturer in some automobiles which collects and stores various data during the time-frame immediately before and after a crash.
A voyage data recorder (VDR) is a data recording system designed to collect data from various sensors on board a ship.
A train event recorder is a device that records data about the operation of train controls and performance in response to those controls and other train control systems.
An accident data recorder (ADR) is a device for detecting accidents or incidents in most kinds of land vehicles and recording the relevant data. In automobiles, all diagnostic trouble codes (DTCs) are logged in engine control units (ECUs), so that when a vehicle is serviced, a service engineer can read the DTCs using Tech-2 or similar tools connected to the on-board diagnostics port and learn what problems have occurred in the vehicle. Sometimes a small OBD data logger is plugged into the same port to continuously record vehicle data.
In embedded system and digital electronics design, specialized high-speed digital data loggers help overcome the limitations of more traditional instruments such as the oscilloscope and the logic analyzer. The main advantage of a data logger is its ability to record very long traces, which proves very useful when trying to correct functional bugs that happen once in a while.
In the racing industry, data loggers are used to record data such as braking points, lap/sector timing, and track maps, as well as any on-board vehicle sensors.
Health data loggers:
Monitoring the growing, preparation, storage and transportation of food; the data loggers used here are generally small and serve mainly for data storage.
A Holter monitor is a portable device for continuously monitoring the electrical activity of the cardiovascular system for at least 24 hours.
Electronic health record loggers.
Other general data acquisition loggers:
A data acquisition tool for scientific experimental testing.
Ultra-wideband data recorders, offering high-speed data recording up to 2 gigasamples per second.
Future directions
Data loggers are changing more rapidly now than ever before. The original model of a stand-alone data logger is giving way to a device that not only collects data but also has access to wireless communications for alarming on events, automatic reporting of data, and remote control. Data loggers are beginning to serve web pages for current readings, e-mail their alarms, and FTP their daily results into databases or directly to the users. Very recently, there has been a trend away from proprietary products with commercial software toward open-source software and hardware. The Raspberry Pi single-board computer is one popular platform, hosting real-time or preemptive-kernel Linux operating systems. Its many digital interfaces, such as I2C, SPI, and UART, enable the direct interconnection of digital sensors to the computer, and it supports a virtually unlimited number of configurations to show measurements in real time over the internet, process data, and plot charts and diagrams.
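For example, reading a digital sensor over I2C on a Raspberry Pi-class board takes only a few lines of Python. This sketch assumes the widely used smbus2 package; the sensor address (0x48) and register (0x00) are hypothetical, since the details vary from device to device.

```python
from smbus2 import SMBus

I2C_BUS = 1           # /dev/i2c-1, the usual external I2C bus on a Raspberry Pi
SENSOR_ADDR = 0x48    # hypothetical sensor address on the bus
TEMP_REGISTER = 0x00  # hypothetical register holding a raw temperature word

with SMBus(I2C_BUS) as bus:
    raw = bus.read_word_data(SENSOR_ADDR, TEMP_REGISTER)
    print(f"raw register value: {raw:#06x}")
```

Converting the raw word to engineering units, buffering readings, and publishing them over the network are then ordinary software tasks, which is precisely why open single-board platforms are displacing some proprietary loggers.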
See also
Black box
Bus analyzer
Computer data logging: logging APIs, server logs & syslog, web logging & web counters
Continuous emissions monitoring system
Runtime intelligence
Sequence of events recorder
SensorML
Shock and vibration data logger
Temperature data logger
References
Recording devices
Onboard computers
Measuring instruments | Data logger | Technology,Engineering | 2,121 |
1,168,490 | https://en.wikipedia.org/wiki/Metatable | A metatable is the section of a database or other data-holding structure that is designated to hold data that will act as source code or metadata. In most cases, specific software has been written to read the data from the metatables and perform different actions depending on the data it finds.
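As an illustrative sketch only (there is no standard metatable schema; the table and column names here are invented), software driven by a metatable might read rows and dispatch behavior on them like this, in Python:

```python
import sqlite3

# A tiny driver that reads rows from a hypothetical "meta" table and acts on
# them: each row names a target table and the rule the software should apply.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meta (target TEXT, rule TEXT)")
conn.execute("INSERT INTO meta VALUES ('orders', 'archive_after_90_days')")

HANDLERS = {
    "archive_after_90_days": lambda table: print(f"archiving old rows of {table}"),
}

for target, rule in conn.execute("SELECT target, rule FROM meta"):
    HANDLERS[rule](target)  # the behavior is chosen by data, not hard-coded logic
```

The point of the pattern is the last line: changing the system's behavior means editing rows in the metatable rather than editing the program.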
See also
Magic number (programming)
Virtual method table
External links
Binding With Metatable And Closures
Metadata
Programming constructs
Software architecture | Metatable | Technology | 85 |
2,465,097 | https://en.wikipedia.org/wiki/Alexander%27s%20band | Alexander's band or Alexander's dark band is an optical phenomenon associated with rainbows. It is named after Alexander of Aphrodisias, who first described it in his Commentary on Book IV of Aristotle's Meteorology (also known as Commentary on Book IV of Aristotle's De Meteorologica or On Aristotle's Meteorology 4), commentary 41.
The dark band occurs due to the deviation angles of the primary and secondary rainbows. Both bows exist due to an optical effect called the angle of minimum deviation. The refractive index of water prevents light from being deviated at smaller angles. The minimum deviation angle for the primary bow is 137.5°. Light can be deviated up to 180°, causing it to be reflected right back to the observer. Light which is deviated at intermediate angles brightens the inside of the rainbow.
The minimum deviation angle for the secondary bow is about 230°. The fact that this angle is greater than 180° makes the secondary bow an inside-out version of the primary. Its colors are reversed, and light which is deviated at greater angles brightens the sky outside the bow.
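These minimum deviation angles follow from standard geometric optics (a textbook result, not stated explicitly in this article). For a ray entering a spherical drop at incidence angle θ, with refraction angle r = arcsin(sin θ / n), the total deviations for the two bows are:

$$ D_1(\theta) = 180^\circ + 2\theta - 4r \qquad \text{(one internal reflection, primary bow)} $$
$$ D_2(\theta) = 360^\circ + 2\theta - 6r \qquad \text{(two internal reflections, secondary bow)} $$

Minimizing each over θ with n ≈ 1.33 for water gives D₁ ≈ 137.5° and D₂ ≈ 230°, the values quoted above. Primary light thus reaches the observer only within about 180° − 137.5° ≈ 42.5° of the antisolar point, and secondary light only beyond about 230° − 180° = 50°; the annulus between the two is lit by neither bow and appears as the dark band.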
Between the two bows lies an area of unlit sky referred to as Alexander's band. Light which is reflected by raindrops in this region of the sky cannot reach the observer, though it may contribute to a rainbow seen by another observer elsewhere.
References
External links
Atmospheric optical phenomena | Alexander's band | Physics | 323
44,111,699 | https://en.wikipedia.org/wiki/Jennifer%20Gardy | Jennifer Gardy is a Canadian scientist, educator and broadcaster, with expertise in the fields of molecular biology, biochemistry, and bioinformatics. Since February 2019 she has been the Deputy Director, Surveillance, Data, and Epidemiology on the Global Health: Malaria team at the Bill & Melinda Gates Foundation. She was previously an associate professor at the University of British Columbia's School of Population and Public Health, a Canada Research Chair in Public Health Genomics, and a Senior Scientist at the BC Centre for Disease Control. She is an occasional host of CBC's The Nature of Things, a science communicator, and a children's book author. She was elected to the National Academy of Medicine in 2021 as an International Member.
Education
Gardy received her PhD in molecular biology and biochemistry from Simon Fraser University in 2006. Previously, she earned a BSc in Cell Biology and Genetics from the University of British Columbia (2000) and a Graduate Certificate in Biotechnology from McGill University in 2001.
BC Centre for Disease Control
Dr. Gardy began working for the BC Centre for Disease Control (BCCDC) in 2009. During Dr. Gardy's time at the BCCDC, her team published the first paper to use next-generation DNA sequencing to reconstruct person-to-person disease transmission events in a large outbreak of tuberculosis. This and subsequent work from her team helped to establish the new field of pathogen genomic epidemiology. In 2014, she was made the Canada Research Chair (Tier 2) in Public Health Genomics.
Bill & Melinda Gates Foundation
In her current role at the Gates Foundation, Dr. Gardy oversees strategy and investment activities for the Malaria Surveillance, Data, and Epidemiology portfolio, which focuses on empowering National Malaria Control Programs to use better-quality data and advanced analytics for malaria strategic planning, decision-making, and policy-setting. Her portfolio includes work related to strengthening routine surveillance systems for malaria, improving data use within malaria programs, malaria genetic and genomic surveillance, and geospatial and mathematical modeling to understand malaria epidemiology.
Media
Jennifer Gardy has appeared on Discovery Channel Canada's Daily Planet. She has also made regular appearances on CBC's documentary series The Nature of Things and hosted these episodes:
Bugs, Bones & Botany: The Science of Crime
Myth or Science
Myth or Science 2: The Quest for Perfection
Dreams of the Future
Myth or Science 3: You Are What You Eat
Myth or Science 4: In the Eye of the Storm
While You Were Sleeping (The Science of Sleep)
Myth or Science: The Secrets of Our Senses
Myth or Science: The Power of Poo
Selected publications
Gardy JL, Johnston JC, Shannon JHS et al. 2011. Whole genome sequencing and social-network analysis of a tuberculosis outbreak. N Engl J Med 364:730-739.
Lynn, DJ, Gardy JL, Hancock REW, Brinkman FSL. 2010. Systems-Level Analyses of the Mammalian Innate Immune Response. Book chapter in Systems Biology for Signaling Networks, Springer Publishing, NY.
Gardy JL, Lynn, DJ, Brinkman FSL, Hancock REW. 2009. Systems biology of the innate immune response: emerging approaches and resources. Trends in Immunology 30:249-262.
Vivona S, Gardy JL, Ramachandran S, Brinkman FSL, Raghava GPS, Flower DR, Filippini F. 2008. Computer-aided biotechnology: from immunoinformatics to reverse vaccinology. Trends in Biotechnology 26:190-200.
Brown KL, Cosseau C, Gardy JL, Hancock REW. 2007. Complexities of targeting innate immunity to treat infection. Trends in Immunology. 28:260-6.
References
Living people
Canadian biochemists
Women biochemists
Canadian evolutionary biologists
Women evolutionary biologists
Simon Fraser University alumni
McGill University alumni
21st-century Canadian biologists
21st-century Canadian women scientists
People in public health
University of British Columbia alumni
1979 births | Jennifer Gardy | Chemistry | 825 |
20,254,389 | https://en.wikipedia.org/wiki/Mining%20and%20metallurgy%20in%20medieval%20Europe | During the Middle Ages, between the 5th and 16th century AD, Western Europe saw a period of growth in the mining industry. The first important mines were those at Goslar in the Harz mountains, brought into operation in the 10th century. Another notable mining town is Falun in Sweden, where copper has been mined since at least the 10th century and possibly even earlier (Olsson 2010).
The rise of the Western European mining industry depended on the increasing influence of Western Europe on the world stage. Advances in medieval mining and metallurgy enabled the flourishing of Western European civilization. Accessible ores and improved extraction techniques supported economic growth and trade. Innovations like water-powered machinery and better smelting methods increased the productivity and quality of metals.
Metallurgical activities were also encouraged by the central political powers, regional authorities, monastic orders, and ecclesiastical overlords. These powers attempted to claim royal rights over the mines and a share in the output, both on private lands and regions belonging to The Crown. They were particularly interested in the extraction of the precious metal ores, and for this reason, the mines in their territories were open to all miners (Nef 1987, 706–715).
Early Middle Ages, 500-1000 AD
The social, political, and economic stagnation that followed the fall of the Western Roman Empire affected Europe throughout the early medieval period and had a critical impact on technological progress, trade, and social organization. Technological developments that affected the course of metal production were only feasible within a stable political environment, and this was not the case until the 9th century (Martinon-Torres & Rehren in press, a).
During the first medieval centuries, the output of metal was in steady decline, with activity confined to small-scale operations. Miners adopted methods much less efficient than those of Roman times. Ores were extracted only from shallow depths, or from the remnants of formerly abandoned mines. The proximity of a mine to villages or towns was also a determining factor, owing to the high cost of transporting material (Martinon-Torres & Rehren in press, b). Only the output of iron diminished less, relative to the other base and precious metals, until the 8th century. This fact, correlated with the dramatic decrease in copper production, may indicate a possible displacement of copper and bronze artifacts by iron ones (Forbes 1957, 64; Bayley et al. 2008, 50).
By the end of the 9th century, economic, and social conditions dictated a greater need for metal for agriculture, arms, stirrups, and decoration. Consequently, conditions began to favor metallurgy and a slow but steady general progress developed. Starting from the reign of the emperor Otto I in the 960s, smelting sites were multiplied. New mines were discovered and exploited, like the well-known Mines of Rammelsberg, close to the town of Goslar in the Harz Mountains. Open-cast mining and metallurgical activities were mostly concentrated in the Eastern Alps, Saxony, Bohemia, Tuscany, Rhineland, Gaul, and Spain (Nef 1987). It was mainly German miners and metallurgists who were the generators of metal production, but the French and Flemish made contributions to the developments.
High Middle Ages, 11th to 13th centuries
The period immediately after the 10th century marked the widespread application of several innovations in the field of mining and ore treatment: a shift to large-scale and better quality production. Medieval miners and metallurgists had to find solutions for the practical problems that limited former metal production, in order to meet the market demands for metals. This increased demand for metal was due to the population growth from the 11th to the 13th centuries. This growth had an impact on agriculture, trade, and building construction, including Gothic churches.
The main problem was the inefficient means for draining water out of shafts and tunnels in underground mining. This resulted in the flooding of mines, which limited the extraction of ore to shallow depths close to the surface. The secondary problem was the separation of the metal-bearing minerals from the worthless material that surrounds them or is closely mixed with them. There was, additionally, the difficulty of transporting the ore, which resulted in high costs.
The economic value of mining led to investment in the development of solutions to these problems, which had a distinctly positive impact on medieval metal output. This included innovations such as water power using waterwheels for powering draining engines, bellows, hammers, and the introduction of advanced types of furnaces.
These innovations were not adopted all at once or applied to all mines and smelting sites. Throughout the medieval period, these technical innovations, and traditional techniques coexisted. Their application depended on the time period and geographical region. Water power in medieval mining and metallurgy was introduced well before the 11th century, but it was only in the 11th century that it was widely applied. The introduction of the blast furnace, mostly for iron smelting, in all the established centers of metallurgy contributed to the quantitative and qualitative improvement of the metal output, making metallic iron available at a lower price.
In addition, cupellation, developed in the 8th century, was more often used for the refinement of lead-silver ores, to separate the silver from the lead (Bayley 2008). Parallel production with more than one technical method, and different treatment of ores, would occur wherever multiple ores were present at one site (Rehren et al. 1999).
Underground work in shafts, although limited in depth, was accomplished either by fire-setting for massive ore bodies or with iron tools for smaller scale extraction of limited veins. The sorting of base and precious metal ores was completed underground and they were transferred separately (Martinon-Torres & Rehren in press, b).
Permanent mining in Sweden proper began in the High Middle Ages, and mining did not spread to Finland until 1530, when the first iron mine began operations there.
Late Middle Ages, 14th to 16th centuries
By the 14th century, the majority of the more easily accessible ore deposits were exhausted. Thus, more advanced technological achievements were introduced in order to keep up with the demand in metal. The alchemical laboratory, separating precious metals from the baser ones they are typically found with, was an essential feature of the metallurgical enterprise.
A significant hiatus in underground mining occurred during the 14th and the early 15th century due to a series of historical events with severe social and economic impacts: the Great Famine (1315–1317), the Black Death (1347–1353), which diminished the European population by one third to one half, and the Hundred Years War (1337–1453) between England and France, which, among other effects, caused severe deforestation and had a dramatic impact on the metallurgical industry and trade.
Lead mining, for example, ground to a halt due to the Black Death pandemic, when atmospheric lead pollution from smelting dropped to natural levels (zero) for the first and only time in the last 2000 years. The great demand for metals, e.g. for armor, could not be met due to the lack of manpower and capital investment.
It was only by the end of the 15th century that great capital expenditures were invested and more sophisticated machinery was installed in underground mining, which resulted in reaching greater depths. The wider application of water and horse power was necessary for draining water out of these deep shafts. Also, acid parting, for separating gold from silver, was introduced in the 14th century (Bayley 2008). Signs of recovery were present only after the mid 15th century, when the improved methods were widely adopted (Nef 1987, 723).
The discovery of the New World had an impact on European metal production and trade, which has affected the world economy ever since. New, rich ore deposits found in Central Europe during the 15th century were dwarfed by the large amounts of precious metal imports from the Americas.
Smiths and miners within medieval society
Metallurgists throughout medieval Europe were generally free to move within different regions. For instance, German metallurgists in search of rich precious metal ores took the lead in mining and influenced the course of metal production, not only in East and South Germany but also in almost all of Central Europe and the Eastern Alps.
As mining gradually became a task for specialized craftsmen, miners moved in large groups and formed settlements close to mines, each with their own customs. They were always welcomed by regional authorities, as the latter were interested in increasing revenue through the profitable exploitation of the mineral-rich subsurface. These authorities claimed a portion of the output, and smiths and miners were provided with land for cottages, mills, forges, farming, and pasture, while also being allowed to utilize streams and lumber. (Nef 1987, 706–715).
Advancing into the high and late Middle Ages, a notable shift occurred where smelting sites gained geographical independence from mines, leading to the separation of metalworking from ore smelting. The urban expansion that unfolded from the 10th century onwards, coupled with the pivotal influence of towns, afforded metallurgists an optimal setting to cultivate and refine their technological advancements. This era witnessed the systematic formation of metallurgical guilds, with their workshops often converging on the outskirts of these urban centers. (McLees 1996).
In medieval societies, liberal and mechanical arts were considered to be totally different disciplines. Metallurgists, like all craftsmen and artisans, almost always lacked the formal education that would inform a methodical intellectual background. Instead, they were the pioneers of causal thinking based on empirical observation and experimentation (Zilsel 2000).
See also
Mining in the Upper Harz
Mining Law (1412)
References
Sources
Agricola, Georgius, 1556, Translation Hoover, Herbert, 1912, De re metallica, Farlang, full streaming version + scientific introduction
Craddock, P. T., 1989. Metalworking Techniques. In: Youngs, S. (ed), Work of Angels: Masterpieces of Celtic Metalwork, 6th-9th centuries AD, 170–213.
Forbes, R. J., 1957. Metallurgy. In: Singer, C., Holmyard, E. J., Hall, A. R. & Williams, T. I. (eds), A History of Technology, vol. 2: The Mediterranean Civilizations and the Middle Ages c. 700 BC to AD 1500. Oxford: Clarendon Press, 41–80.
Martinon-Torres, M. & Rehren, Th., in press (a). Metallurgy, Europe. In: Encyclopedia of Society and Culture in the Medieval World. Dallas: Schlager.
Martinon-Torres, M. & Rehren, Th., in press (b). Mining, Europe. In: Encyclopedia of Society and Culture in the Medieval World. Dallas: Schlager.
Smith, C.S. & Hawthorne, J.H., 1974. Mappae Clavicula, A little key to the world of medieval techniques. Transactions of American Philosophical Society 64 (4), 1–128.
Theophilus, On Divers Arts: The foremost medieval treatise on Painting, Glassmaking and Metalwork. Hawthorne, J.G. & Smith, C.S. (trans), 1979. New York: Dover Publications.
Economy of Europe
History of Europe
Technology in the Middle Ages
History of metallurgy
Mining in Europe
Medieval economic history | Mining and metallurgy in medieval Europe | Chemistry,Materials_science | 2,348 |
237,770 | https://en.wikipedia.org/wiki/Negative%20resistance | In electronics, negative resistance (NR) is a property of some electrical circuits and devices in which an increase in voltage across the device's terminals results in a decrease in electric current through it.
This is in contrast to an ordinary resistor in which an increase of applied voltage causes a proportional increase in current due to Ohm's law, resulting in a positive resistance. Under certain conditions it can increase the power of an electrical signal, amplifying it.
Negative resistance is an uncommon property which occurs in a few nonlinear electronic components. In a nonlinear device, two types of resistance can be defined: 'static' or 'absolute' resistance, the ratio of voltage to current, v/i, and differential resistance, the ratio of a change in voltage to the resulting change in current, Δv/Δi. The term negative resistance means negative differential resistance (NDR), Δv/Δi < 0. In general, a negative differential resistance is a two-terminal component which can amplify, converting DC power applied to its terminals to AC output power to amplify an AC signal applied to the same terminals. They are used in electronic oscillators and amplifiers, particularly at microwave frequencies. Most microwave energy is produced with negative differential resistance devices. They can also have hysteresis and be bistable, and so are used in switching and memory circuits. Examples of devices with negative differential resistance are tunnel diodes, Gunn diodes, and gas discharge tubes such as neon lamps, and fluorescent lights. In addition, circuits containing amplifying devices such as transistors and op amps with positive feedback can have negative differential resistance. These are used in oscillators and active filters.
Because they are nonlinear, negative resistance devices have a more complicated behavior than the positive "ohmic" resistances usually encountered in electric circuits. Unlike most positive resistances, negative resistance varies depending on the voltage or current applied to the device, and negative resistance devices can only have negative resistance over a limited portion of their voltage or current range.
Definitions
The resistance between two terminals of an electrical device or circuit is determined by its current–voltage (I–V) curve (characteristic curve), giving the current through it for any given voltage across it. Most materials, including the ordinary (positive) resistances encountered in electrical circuits, obey Ohm's law; the current through them is proportional to the voltage over a wide range. So the I–V curve of an ohmic resistance is a straight line through the origin with positive slope. The resistance is the ratio of voltage to current, the inverse slope of the line (in I–V graphs where the voltage is the independent variable) and is constant.
Negative resistance occurs in a few nonlinear (nonohmic) devices. In a nonlinear component the I–V curve is not a straight line, so it does not obey Ohm's law. Resistance can still be defined, but the resistance is not constant; it varies with the voltage or current through the device. The resistance of such a nonlinear device can be defined in two ways, which are equal for ohmic resistances:
Static resistance (also called chordal resistance, absolute resistance or just resistance) – This is the common definition of resistance; the voltage divided by the current: R_static = v/i. It is the inverse slope of the line (chord) from the origin through the point on the I–V curve. In a power source, like a battery or electric generator, positive current flows out of the positive voltage terminal, opposite to the direction of current in a resistor, so from the passive sign convention v and i have opposite signs, representing points lying in the 2nd or 4th quadrant of the I–V plane (diagram right). Thus power sources formally have negative static resistance (R_static < 0). However this term is never used in practice, because the term "resistance" is only applied to passive components. Static resistance determines the power dissipation in a component. Passive devices, which consume electric power, have positive static resistance; while active devices, which produce electric power, do not.
Differential resistance (also called dynamic, or incremental resistance) – This is the derivative of the voltage with respect to the current; the ratio of a small change in voltage to the corresponding change in current, the inverse slope of the I–V curve at a point: r_diff = dv/di. Differential resistance is only relevant to time-varying currents. Points on the curve where the slope is negative (declining to the right), meaning an increase in voltage causes a decrease in current, have negative differential resistance (r_diff < 0). Devices of this type can amplify signals, and are what is usually meant by the term "negative resistance".
Negative resistance, like positive resistance, is measured in ohms.
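A short numeric contrast makes the two definitions concrete (the point values here are assumed for illustration, not taken from a specific device). At an operating point v = 0.5 V, i = 4 mA on the falling portion of a tunnel-diode-like curve with local slope di/dv = −10 mS:

$$ R_\text{static} = \frac{v}{i} = \frac{0.5\ \text{V}}{4\ \text{mA}} = 125\ \Omega > 0, \qquad r_\text{diff} = \left(\frac{di}{dv}\right)^{-1} = \frac{1}{-10\ \text{mS}} = -100\ \Omega < 0 $$

The same point on the same curve therefore has positive static resistance (it dissipates power) but negative differential resistance (it can amplify small signals).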
Conductance is the reciprocal of resistance. It is measured in siemens (formerly mho), which is the conductance of a resistor with a resistance of one ohm. Each type of resistance defined above has a corresponding conductance:
Static conductance G_static = i/v = 1/R_static
Differential conductance g_diff = di/dv = 1/r_diff
It can be seen that the conductance has the same sign as its corresponding resistance: a negative resistance will have a negative conductance while a positive resistance will have a positive conductance.
Operation
One way in which the different types of resistance can be distinguished is in the directions of current and electric power between a circuit and an electronic component. The illustrations below, with a rectangle representing the component attached to a circuit, summarize how the different types work.
Types and terminology
|                                            | r_diff > 0: positive differential resistance | r_diff < 0: negative differential resistance     |
| R_static > 0 (passive: consumes net power) | Positive resistances                         | Passive negative differential resistances        |
| R_static < 0 (active: produces net power)  | Power sources                                | "Active resistors": positive feedback amplifiers (used in oscillators and active filters) |
In an electronic device, the differential resistance r_diff, the static resistance R_static, or both, can be negative, so there are three categories of devices (fig. 2–4 above, and table) which could be called "negative resistances".
The term "negative resistance" almost always means negative differential resistance Negative differential resistance devices have unique capabilities: they can act as one-port amplifiers, increasing the power of a time-varying signal applied to their port (terminals), or excite oscillations in a tuned circuit to make an oscillator. They can also have hysteresis. It is not possible for a device to have negative differential resistance without a power source, and these devices can be divided into two categories depending on whether they get their power from an internal source or from their port:
Passive negative differential resistance devices (fig. 2 above): These are the most well-known type of "negative resistances"; passive two-terminal components whose intrinsic I–V curve has a downward "kink", causing the current to decrease with increasing voltage over a limited range. The I–V curve, including the negative resistance region, lies in the 1st and 3rd quadrant of the plane so the device has positive static resistance. Examples are gas-discharge tubes, tunnel diodes, and Gunn diodes. These devices have no internal power source and in general work by converting external DC power from their port to time varying (AC) power, so they require a DC bias current applied to the port in addition to the signal. To add to the confusion, some authors call these "active" devices, since they can amplify. This category also includes a few three-terminal devices, such as the unijunction transistor. They are covered in the Negative differential resistance section below.
Active negative differential resistance devices (fig. 4): Circuits can be designed in which a positive voltage applied to the terminals causes a proportional "negative" current, a current out of the positive terminal, over a limited range; the opposite of an ordinary resistor. (In a well-known classroom demonstration, Prof. Horowitz shows that such negative static resistance actually exists: a black box with two terminals, labelled "−10 kilohms", behaves on ordinary test equipment like a linear negative resistor of −10 kΩ. A positive voltage across it causes a proportional negative current through it, and when it is connected in a voltage divider with an ordinary resistor, the divider's output is greater than its input, so it can amplify. Opened, the box proves to contain an op-amp negative impedance converter circuit and a battery.) Unlike in the above devices, the downward-sloping region of the I–V curve passes through the origin, so it lies in the 2nd and 4th quadrants of the plane, meaning the device sources power. Amplifying devices like transistors and op-amps with positive feedback can have this type of negative resistance, and are used in feedback oscillators and active filters. Since these circuits produce net power from their port, they must have an internal DC power source, or else a separate connection to an external power supply. In circuit theory this is called an "active resistor". Although this type is sometimes referred to as "linear", "absolute", "ideal", or "pure" negative resistance to distinguish it from "passive" negative differential resistances, in electronics it is more often simply called positive feedback or regeneration. These are covered in the Active resistors section below.
Occasionally ordinary power sources are referred to as "negative resistances" (fig. 3 above). Although the "static" or "absolute" resistance of active devices (power sources) can be considered negative (see Negative static resistance section below), most ordinary power sources (AC or DC), such as batteries, generators, and (non positive feedback) amplifiers, have positive differential resistance, their source resistance (Glisson 2011, Introduction to Circuit Analysis and Design, p. 96). Therefore, these devices cannot function as one-port amplifiers or have the other capabilities of negative differential resistances.
List of negative resistance devices
Electronic components with negative differential resistance include these devices:
tunnel diode, resonant tunneling diode and other semiconductor diodes using the tunneling mechanism
Gunn diode and other diodes using the transferred electron mechanism
IMPATT diode, TRAPATT diode and other diodes using the impact ionization mechanism
Some NPN transistors with the emitter and collector reverse biased, a configuration known as a negistor
unijunction transistor (UJT)
thyristors
triode and tetrode vacuum tubes operating in the dynatron mode
Some magnetron tubes and other microwave vacuum tubes
maser
parametric amplifier
Electric discharges through gases also exhibit negative differential resistance, including these devices:
electric arc
thyratron tubes
neon lamp
fluorescent lamp
other gas discharge tubes
In addition, active circuits with negative differential resistance can also be built with amplifying devices like transistors and op amps, using feedback. A number of new experimental negative differential resistance materials and devices have been discovered in recent years. The physical processes which cause negative resistance are diverse, and each type of device has its own negative resistance characteristics, specified by its current–voltage curve.
Negative static or "absolute" resistance
A point of some confusion is whether ordinary resistance ("static" or "absolute" resistance, R_static = v/i) can be negative. In electronics, the term "resistance" is customarily applied only to passive materials and components – such as wires, resistors and diodes. These cannot have R_static < 0, as shown by Joule's law P = i²R. A passive device consumes electric power, so from the passive sign convention P ≥ 0. Therefore, from Joule's law, R_static = P/i² ≥ 0. In other words, no material can conduct electric current better than a "perfect" conductor with zero resistance. For a passive device to have R_static < 0 would violate either conservation of energy or the second law of thermodynamics (diagram). Therefore, some authors state that static resistance can never be negative.
However it is easily shown that the ratio of voltage to current v/i at the terminals of any power source (AC or DC) is negative. For electric power (potential energy) to flow out of a device into the circuit, charge must flow through the device in the direction of increasing potential energy: conventional current (positive charge) must move from the negative to the positive terminal. So the direction of the instantaneous current is out of the positive terminal. This is opposite to the direction of current in a passive device defined by the passive sign convention, so the current and voltage have opposite signs and their ratio is negative: R_static = v/i < 0.
This can also be proved from Joule's law: R_static = P/i², so R_static < 0 if and only if P < 0.
This shows that power can flow out of a device into the circuit if and only if R_static < 0. Whether or not this quantity is referred to as "resistance" when negative is a matter of convention. The absolute resistance of power sources is negative, but this is not to be regarded as "resistance" in the same sense as positive resistances. The negative static resistance of a power source is a rather abstract and not very useful quantity, because it varies with the load. Due to conservation of energy it is always simply equal to the negative of the static resistance of the attached circuit.
Work must be done on the charges by some source of energy in the device, to make them move toward the positive terminal against the electric field, so conservation of energy requires that negative static resistances have a source of power. The power may come from an internal source which converts some other form of energy to electric power as in a battery or generator, or from a separate connection to an external power supply circuit as in an amplifying device like a transistor, vacuum tube, or op amp.
Eventual passivity
A circuit cannot have negative static resistance (be active) over an infinite voltage or current range, because it would have to be able to produce infinite power. Any active circuit or device with a finite power source is "eventually passive". This property means that if a large enough external voltage or current of either polarity is applied to it, its static resistance becomes positive and it consumes power: vi > 0 for sufficiently large |v| or |i|. The extent of the active region is bounded by P_max, the maximum power the device can produce.
Therefore, the ends of the I–V curve will eventually turn and enter the 1st and 3rd quadrants. Thus the range of the curve having negative static resistance is limited, confined to a region around the origin. For example, applying a voltage to a generator or battery (graph, above) greater than its open-circuit voltage will reverse the direction of current flow, making its static resistance positive so it consumes power. Similarly, applying a voltage to the negative impedance converter below greater than its power supply voltage Vs will cause the amplifier to saturate, also making its resistance positive.
Negative differential resistance
In a device or circuit with negative differential resistance (NDR), in some part of the I–V curve the current decreases as the voltage increases: r_diff = Δv/Δi < 0 over that region.
The I–V curve is nonmonotonic (having peaks and troughs) with regions of negative slope representing negative differential resistance.
Passive negative differential resistances have positive static resistance; they consume net power. Therefore, the I–V curve is confined to the 1st and 3rd quadrants of the graph, and passes through the origin. This requirement means (excluding some asymptotic cases) that the region(s) of negative resistance must be limited, and surrounded by regions of positive resistance, and cannot include the origin.
Types
Negative differential resistances can be classified into two types:
Voltage controlled negative resistance (VCNR, short-circuit stable, or "N" type): In this type the current is a single valued, continuous function of the voltage, but the voltage is a multivalued function of the current. In the most common type there is only one negative resistance region, and the graph is a curve shaped generally like the letter "N". As the voltage is increased, the current increases (positive resistance) until it reaches a maximum (i1), then decreases in the region of negative resistance to a minimum (i2), then increases again. Devices with this type of negative resistance include the tunnel diode, resonant tunneling diode, lambda diode, Gunn diode, and dynatron oscillators.
Current controlled negative resistance (CCNR, open-circuit stable, or "S" type): In this type, the dual of the VCNR, the voltage is a single valued function of the current, but the current is a multivalued function of the voltage. In the most common type, with one negative resistance region, the graph is a curve shaped like the letter "S". Devices with this type of negative resistance include the IMPATT diode, UJT, SCRs and other thyristors, electric arc, and gas discharge tubes.
Most devices have a single negative resistance region. However devices with multiple separate negative resistance regions can also be fabricated. These can have more than two stable states, and are of interest for use in digital circuits to implement multivalued logic.
An intrinsic parameter used to compare different devices is the peak-to-valley current ratio (PVR), the ratio of the current at the top of the negative resistance region to the current at the bottom (see graphs, above): PVR = i1/i2.
The larger this is, the larger the potential AC output for a given DC bias current, and therefore the greater the efficiency.
Amplification
A negative differential resistance device can amplify an AC signal applied to it if the signal is biased with a DC voltage or current to lie within the negative resistance region of its I–V curve.
The tunnel diode circuit (see diagram) is an example. The tunnel diode TD has voltage controlled negative differential resistance. The battery adds a constant voltage (bias) across the diode so it operates in its negative resistance range, and provides power to amplify the signal. Suppose the negative resistance at the bias point is Δv/Δi = −r. For stability, the series resistance R must be less than r. Using the formula for a voltage divider, the AC output voltage is
v_o = v_i (−r)/(R − r) = v_i r/(r − R)
so the voltage gain is
A_v = r/(r − R)
In a normal voltage divider, the resistance of each branch is less than the resistance of the whole, so the output voltage is less than the input. Here, due to the negative resistance, the total AC resistance r − R is less than the resistance of the diode alone, r, so the AC output voltage v_o is greater than the input v_i. The voltage gain A_v is greater than one, and increases without limit as R approaches r.
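A quick numeric check of the gain formula, with values assumed purely for illustration: take a diode negative resistance of r = 100 Ω and a series resistance R = 80 Ω. Then

$$ A_v = \frac{r}{r - R} = \frac{100}{100 - 80} = 5 $$

and as R is increased toward r = 100 Ω the denominator shrinks, so the gain grows without bound, the boundary at which the amplifier turns into an oscillator.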
Explanation of power gain
The diagrams illustrate how a biased negative differential resistance device can increase the power of a signal applied to it, amplifying it, although it only has two terminals. Due to the superposition principle the voltage and current at the device's terminals can be divided into a DC bias component and an AC component: v(t) = V_bias + Δv(t), i(t) = I_bias + Δi(t).
Since a positive change in voltage Δv causes a negative change in current Δi, the AC current and voltage in the device are 180° out of phase. This means in the AC equivalent circuit (right), the instantaneous AC current Δi flows through the device in the direction of increasing AC potential Δv, as it would in a generator. Therefore, the AC power dissipation is negative; AC power is produced by the device and flows into the external circuit.
With the proper external circuit, the device can increase the AC signal power delivered to a load, serving as an amplifier, or excite oscillations in a resonant circuit to make an oscillator. Unlike in a two port amplifying device such as a transistor or op amp, the amplified signal leaves the device through the same two terminals (port) as the input signal enters.
In a passive device, the AC power produced comes from the input DC bias current: the device absorbs DC power, some of which is converted to AC power by the nonlinearity of the device, amplifying the applied signal. Therefore, the output power is limited by the bias power: P(AC out) ≤ V_bias · I_bias.
The negative differential resistance region cannot include the origin, because it would then be able to amplify a signal with no applied DC bias current, producing AC power with no power input. The device also dissipates some power as heat, equal to the difference between the DC power in and the AC power out.
The device may also have reactance and therefore the phase difference between current and voltage may differ from 180° and may vary with frequency. As long as the real component of the impedance is negative (phase angle between 90° and 270°), the device will have negative resistance and can amplify.
The maximum AC output power is limited by the size of the negative resistance region (the spans of voltage and current, v2 − v1 and i1 − i2, in graphs above).
Reflection coefficient
The reason that the output signal can leave a negative resistance through the same port that the input signal enters is that from transmission line theory, the AC voltage or current at the terminals of a component can be divided into two oppositely moving waves: the incident wave V_I, which travels toward the device, and the reflected wave V_R, which travels away from the device. A negative differential resistance in a circuit can amplify if the magnitude of its reflection coefficient Γ, the ratio of the reflected wave to the incident wave, is greater than one:
Γ ≡ V_R / V_I, |Γ| > 1
The "reflected" (output) signal has larger amplitude than the incident; the device has "reflection gain". The reflection coefficient is determined by the AC impedance of the negative resistance device, Z_N, and the impedance of the circuit attached to it, Z₁: Γ = (Z_N − Z₁)/(Z_N + Z₁). If Re(Z_N) < 0 and Re(Z₁) > 0 then |Γ| > 1 and the device will amplify. On the Smith chart, a graphical aide widely used in the design of high frequency circuits, negative differential resistance corresponds to points outside the unit circle |Γ| = 1, the boundary of the conventional chart, so special "expanded" charts must be used.
Stability conditions
Because it is nonlinear, a circuit with negative differential resistance can have multiple equilibrium points (possible DC operating points), which lie on the I–V curve. An equilibrium point will be stable, so the circuit converges to it within some neighborhood of the point, if its poles are in the left half of the s plane (LHP), while a point is unstable, causing the circuit to oscillate or "latch up" (converge to another point), if its poles are on the jω axis or right half plane (RHP), respectively. In contrast, a linear circuit has a single equilibrium point that may be stable or unstable. The equilibrium points are determined by the DC bias circuit, and their stability is determined by the AC impedance of the external circuit.
However, because of the different shapes of the curves, the condition for stability is different for VCNR and CCNR types of negative resistance:
In a CCNR (S-type) negative resistance, the resistance function is single-valued. Therefore, stability is determined by the roots of the circuit's impedance equation, Z(s) = 0.
For nonreactive circuits a sufficient condition for stability is that the total resistance is positive, R − r > 0 (writing the device's negative resistance as −r), so the CCNR is stable for R > r.
Since CCNRs are stable with no load at all, they are called "open circuit stable".
In a VCNR (N-type) negative resistance, the conductance function is single-valued. Therefore, stability is determined by the roots of the admittance equation, Y(s) = 0. For this reason the VCNR is sometimes referred to as a negative conductance. As above, for nonreactive circuits a sufficient condition for stability is that the total conductance in the circuit is positive, 1/R − 1/r > 0, so the VCNR is stable for R < r.
Since VCNRs are even stable with a short-circuited output, they are called "short circuit stable".
For general negative resistance circuits with reactance, the stability must be determined by standard tests like the Nyquist stability criterion. Alternatively, in high frequency circuit design, the values of for which the circuit is stable are determined by a graphical technique using "stability circles" on a Smith chart.
Operating regions and applications
For simple nonreactive negative resistance devices, with the device's negative resistance written as −r and the external circuit resistance as R, the different operating regions of the device can be illustrated by load lines on the I–V curve (see graphs).
The DC load line (DCL) is a straight line determined by the DC bias circuit, with equation v = V_S − iR, where V_S is the DC bias supply voltage and R is the resistance of the supply. The possible DC operating point(s) (Q points) occur where the DC load line intersects the I–V curve. For stability:
VCNRs require a low impedance bias (R < r), such as a voltage source.
CCNRs require a high impedance bias (R > r), such as a current source, or a voltage source in series with a high resistance.
The AC load line (L1 − L3) is a straight line through the Q point whose slope is set by the differential (AC) resistance R facing the device. Increasing R rotates the load line counterclockwise. The circuit operates in one of three possible regions (see diagrams), depending on R.
Stable region (green) (illustrated by line L1): When the load line lies in this region, it intersects the I–V curve at one point Q1. For nonreactive circuits it is a stable equilibrium (poles in the LHP), so the circuit is stable. Negative resistance amplifiers operate in this region. However, due to hysteresis, with an energy storage device like a capacitor or inductor the circuit can become unstable, producing a nonlinear relaxation oscillator (astable multivibrator) or a monostable multivibrator.
VCNRs are stable when R < r.
CCNRs are stable when R > r.
Unstable point (Line L2): When the load line is tangent to the I–V curve, the total differential (AC) resistance of the circuit is zero (poles on the jω axis), so it is unstable and with a tuned circuit can oscillate. Linear oscillators operate at this point. Practical oscillators actually start in the unstable region below, with poles in the RHP, but as the amplitude increases the oscillations become nonlinear, and due to eventual passivity the negative resistance r decreases with increasing amplitude, so the oscillations stabilize at an amplitude where r = R.
Bistable region (red) (illustrated by line L3): In this region the load line can intersect the I–V curve at three points. The center point (Q1) is a point of unstable equilibrium (poles in the RHP), while the two outer points, Q2 and Q3 are stable equilibria. So with correct biasing the circuit can be bistable, it will converge to one of the two points Q2 or Q3 and can be switched between them with an input pulse. Switching circuits like flip-flops (bistable multivibrators) and Schmitt triggers operate in this region.
VCNRs can be bistable when R > r.
CCNRs can be bistable when R < r.
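A brief numeric illustration of these rules, with values assumed for the example: for a VCNR whose negative resistance magnitude is r = 100 Ω,

$$ R = 50\ \Omega < r \;\Rightarrow\; \text{one stable Q point (amplifier region)}, \qquad R = 200\ \Omega > r \;\Rightarrow\; \text{bistable (switching region)} $$

and the conditions are exactly reversed for a CCNR, which is why voltage biasing suits N-type devices and current biasing suits S-type devices.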
Active resistors – negative resistance from feedback
In addition to the passive devices with intrinsic negative differential resistance above, circuits with amplifying devices like transistors or op amps can have negative resistance at their ports. The input or output impedance of an amplifier with enough positive feedback applied to it can be negative. If R_i is the input resistance of the amplifier without feedback, A is the amplifier gain, and β is the transfer function of the feedback path, the input resistance with positive shunt feedback is
r_in = R_i / (1 − Aβ)
So if the loop gain Aβ is greater than one, r_in will be negative. The circuit acts like a "negative linear resistor" over a limited range, with I–V curve having a straight line segment through the origin with negative slope (see graphs). It has both negative differential resistance and is active (r_diff < 0 and R_static < 0),
and thus obeys Ohm's law as if it had a negative value of resistance −R, over its linear range (such amplifiers can also have more complicated negative resistance I–V curves that do not pass through the origin).
In circuit theory these are called "active resistors". Applying a voltage across the terminals causes a proportional current out of the positive terminal, the opposite of an ordinary resistor. For example, connecting a battery to the terminals would cause the battery to charge rather than discharge.
Considered as one-port devices, these circuits function similarly to the passive negative differential resistance components above, and like them can be used to make one-port amplifiers and oscillators with the advantages that:
because they are active devices they do not require an external DC bias to provide power, and can be DC coupled,
the amount of negative resistance can be varied by adjusting the loop gain,
they can be linear circuit elements; if operation is confined to the straight segment of the curve near the origin the voltage is proportional to the current, so they do not cause harmonic distortion.
The I–V curve can have voltage-controlled ("N" type) or current-controlled ("S" type) negative resistance, depending on whether the feedback loop is connected in "shunt" or "series".
Negative reactances (below) can also be created, so feedback circuits can be used to create "active" linear circuit elements, resistors, capacitors, and inductors, with negative values. They are widely used in active filters because they can create transfer functions that cannot be realized with positive circuit elements. Examples of circuits with this type of negative resistance are the negative impedance converter (NIC), gyrator, Deboo integrator, frequency dependent negative resistance (FDNR), and generalized immittance converter (GIC).
Feedback oscillators
If an LC circuit is connected across the input of a positive feedback amplifier like that above, the negative differential input resistance can cancel the positive loss resistance inherent in the tuned circuit. If |r_in| equals the loss resistance, this will create in effect a tuned circuit with zero AC resistance (poles on the jω axis). Spontaneous oscillation will be excited in the tuned circuit at its resonant frequency, sustained by the power from the amplifier. This is how feedback oscillators such as Hartley or Colpitts oscillators work. This negative resistance model is an alternate way of analyzing feedback oscillator operation. All linear oscillator circuits have negative resistance, although in most feedback oscillators the tuned circuit is an integral part of the feedback network, so the circuit does not have negative resistance at all frequencies but only near the oscillation frequency.
Q enhancement
A tuned circuit connected to a negative resistance which cancels some but not all of its parasitic loss resistance (so |r_in| < r_loss) will not oscillate, but the negative resistance will decrease the damping in the circuit (moving its poles toward the jω axis), increasing its Q factor so it has a narrower bandwidth and more selectivity. Q enhancement, also called regeneration, was first used in the regenerative radio receiver invented by Edwin Armstrong in 1912 and later in "Q multipliers". It is widely used in active filters. For example, RF integrated circuits use integrated inductors to save space, consisting of a spiral conductor fabricated on chip. These have high losses and low Q, so to create high Q tuned circuits their Q is increased by applying negative resistance.
Chaotic circuits
Circuits which exhibit chaotic behavior can be considered quasi-periodic or nonperiodic oscillators, and like all oscillators require a negative resistance in the circuit to provide power. Chua's circuit, a simple nonlinear circuit widely used as the standard example of a chaotic system, requires a nonlinear active resistor component, sometimes called Chua's diode. This is usually synthesized using a negative impedance converter circuit.
Negative impedance converter
A common example of an "active resistance" circuit is the negative impedance converter (NIC) shown in the diagram. The two resistors and the op amp constitute a negative feedback non-inverting amplifier with a gain of 2, so the output voltage of the op amp is vo = 2v.
So if a voltage v is applied to the input, the same voltage is applied "backwards" across the impedance Z, causing current to flow through it out of the input. The current is i = (v − vo)/Z = −v/Z.
So the input impedance to the circuit is zin = v/i = −Z.
The circuit converts the impedance Z to its negative. If Z is a resistor of value R, within the linear range of the op amp the input impedance acts like a linear "negative resistor" of value −R. The input port of the circuit is connected into another circuit as if it were a component. An NIC can cancel undesired positive resistance in another circuit; for example, NICs were originally developed to cancel resistance in telephone cables, serving as repeaters.
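The algebra above can be checked numerically. The following minimal sketch assumes an ideal op amp with equal feedback resistors (gain of 2) and simply evaluates the input impedance for a test voltage; the function name is illustrative.

# Ideal-op-amp negative impedance converter: with equal feedback resistors
# the non-inverting gain is 2, so v_o = 2v, the current into Z is
# i = (v - v_o)/Z = -v/Z, and hence z_in = v/i = -Z.
def nic_input_impedance(Z):
    v = 1.0                # 1 V test voltage at the input
    v_o = 2 * v            # gain-of-2 non-inverting amplifier output
    i = (v - v_o) / Z      # current flowing through Z out of the input
    return v / i

print(nic_input_impedance(50.0))        # resistor: -50 ohms
print(nic_input_impedance(1j * 100.0))  # inductive reactance becomes -100j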
Negative capacitance and inductance
By replacing Z in the above circuit with a capacitor C, negative capacitances and inductances can also be synthesized. A negative capacitance will have an I–V relation of i = −C dv/dt and an impedance of ZC = −1/(jωC), where C > 0. Applying a positive current to a negative capacitance will cause it to discharge; its voltage will decrease. Similarly, a negative inductance will have an I–V characteristic of v = −L di/dt and an impedance of ZL = −jωL.
A circuit having negative capacitance or inductance can be used to cancel unwanted positive capacitance or inductance in another circuit. NIC circuits were used to cancel reactance on telephone cables.
There is also another way of looking at them. In a negative capacitance the current will be 180° opposite in phase to the current in a positive capacitance. Instead of leading the voltage by 90° it will lag the voltage by 90°, as in an inductor. Therefore, a negative capacitance acts like an inductance in which the impedance has a reverse dependence on frequency ω, decreasing instead of increasing like a real inductance. Similarly, a negative inductance acts like a capacitance that has an impedance which increases with frequency. Negative capacitances and inductances are "non-Foster" circuits which violate Foster's reactance theorem. One application being researched is to create an active matching network which could match an antenna to a transmission line over a broad range of frequencies, rather than just a single frequency as with current networks. This would allow the creation of small compact antennas with broad bandwidth, exceeding the Chu–Harrington limit.
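The frequency dependence described above is easy to tabulate. The sketch below (illustrative component values) compares the reactance of a synthesized negative capacitance −C with that of an ordinary inductor: both are positive (inductive) in sign, but the negative capacitance's reactance falls with frequency while the inductor's rises; this is the non-Foster behaviour.

import numpy as np

# Z(-C) = -1/(jwC) = +j/(wC): inductive in sign, but the magnitude *falls*
# with frequency. An ordinary inductor's reactance jwL rises instead.
C = 1e-9   # 1 nF
L = 1e-6   # 1 uH
for f in (1e5, 1e6, 1e7):
    w = 2 * np.pi * f
    x_negC = (-1 / (1j * w * C)).imag   # reactance of the negative capacitor
    x_L = (1j * w * L).imag             # reactance of a real inductor
    print(f"f = {f:8.0f} Hz: X(-C) = {x_negC:+9.1f} ohm, X(L) = {x_L:+7.1f} ohm")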
Oscillators
Negative differential resistance devices are widely used to make electronic oscillators. In a negative resistance oscillator, a negative differential resistance device such as an IMPATT diode, Gunn diode, or microwave vacuum tube is connected across an electrical resonator such as an LC circuit, a quartz crystal, dielectric resonator or cavity resonator with a DC source to bias the device into its negative resistance region and provide power. A resonator such as an LC circuit is "almost" an oscillator; it can store oscillating electrical energy, but because all resonators have internal resistance or other losses, the oscillations are damped and decay to zero. The negative resistance cancels the positive resistance of the resonator, creating in effect a lossless resonator, in which spontaneous continuous oscillations occur at the resonator's resonant frequency.
Uses
Negative resistance oscillators are mainly used at high frequencies in the microwave range or above, since feedback oscillators function poorly at these frequencies. Microwave diodes are used in low- to medium-power oscillators for applications such as radar speed guns, and local oscillators for satellite receivers. They are a widely used source of microwave energy, and virtually the only solid-state source of millimeter wave and terahertz energy. Negative resistance microwave vacuum tubes such as magnetrons produce higher power outputs, in such applications as radar transmitters and microwave ovens. Lower frequency relaxation oscillators can be made with UJTs and gas-discharge lamps such as neon lamps.
The negative resistance oscillator model is not limited to one-port devices like diodes but can also be applied to feedback oscillator circuits with two port devices such as transistors and tubes. In addition, in modern high frequency oscillators, transistors are increasingly used as one-port negative resistance devices like diodes. At microwave frequencies, transistors with certain loads applied to one port can become unstable due to internal feedback and show negative resistance at the other port. So high frequency transistor oscillators are designed by applying a reactive load to one port to give the transistor negative resistance, and connecting the other port across a resonator to make a negative resistance oscillator as described below.
Gunn diode oscillator
The common Gunn diode oscillator illustrates how negative resistance oscillators work. The diode D has voltage controlled ("N" type) negative resistance, and the voltage source biases it into its negative resistance region, where its differential resistance is dv/di = −r. The choke RFC prevents AC current from flowing through the bias source. R is the equivalent resistance due to damping and losses in the series tuned circuit LC, plus any load resistance. Analyzing the AC circuit with Kirchhoff's voltage law gives a differential equation for the AC current i(t):

d²i/dt² + ((R − r)/L) di/dt + (1/LC) i = 0

Solving this equation gives a solution of the form

i(t) = I0 e^(αt) sin(ωt + φ),  where  α = (r − R)/(2L)  and  ω = √(1/LC − ((R − r)/(2L))²)

This shows that the current through the circuit varies with time about the DC Q point. When started from a nonzero initial current the current oscillates sinusoidally at the resonant frequency ω of the tuned circuit, with amplitude either constant, increasing, or decreasing exponentially, depending on the value of α. Whether the circuit can sustain steady oscillations depends on the balance between R and r, the positive and negative resistance in the circuit (a numerical sketch of the three cases follows the list below):
R > r: (poles in left half plane) If the diode's negative resistance is less than the positive resistance of the tuned circuit, the damping is positive. Any oscillations in the circuit will lose energy as heat in the resistance and die away exponentially to zero, as in an ordinary tuned circuit. So the circuit does not oscillate.
R = r: (poles on jω axis) If the positive and negative resistances are equal, the net resistance is zero, so the damping is zero. The diode adds just enough energy to compensate for energy lost in the tuned circuit and load, so oscillations in the circuit, once started, will continue at a constant amplitude. This is the condition during steady-state operation of the oscillator.
R < r: (poles in right half plane) If the negative resistance is greater than the positive resistance, damping is negative, so oscillations will grow exponentially in energy and amplitude. This is the condition during startup.
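A direct time-domain integration of the linearised loop equation above makes the three regimes visible. The sketch below uses semi-implicit Euler steps and illustrative component values; the constant −r linearisation only holds for small amplitudes, so the growing case would in reality saturate as described next.

# d2i/dt2 + ((R - r)/L) di/dt + i/(L C) = 0, integrated for r < R, r = R, r > R.
L, C, R = 1e-9, 1e-12, 10.0       # illustrative values (1 nH, 1 pF, 10 ohm)
dt, steps = 1e-13, 20000          # roughly 10 oscillation periods

def simulate(r):
    i, di = 1e-3, 0.0             # small initial "noise" current
    peak = 0.0
    for _ in range(steps):
        d2i = -((R - r) / L) * di - i / (L * C)
        di += d2i * dt            # semi-implicit Euler keeps the r = R
        i += di * dt              # case at nearly constant amplitude
        peak = max(peak, abs(i))
    return abs(i), peak

for r in (5.0, 10.0, 15.0):       # decay, steady, growth
    i_end, peak = simulate(r)
    print(f"r = {r:4.1f} ohm: final |i| = {i_end:.3e} A, peak |i| = {peak:.3e} A")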
Practical oscillators are designed in region (3) above, with net negative resistance, to get oscillations started. A widely used rule of thumb is to make R ≈ r/3. When the power is turned on, electrical noise in the circuit provides a signal to start spontaneous oscillations, which grow exponentially. However, the oscillations cannot grow forever; the nonlinearity of the diode eventually limits the amplitude.
At large amplitudes the circuit is nonlinear, so the linear analysis above does not strictly apply and differential resistance is undefined; but the circuit can be understood by considering r to be the "average" resistance over the cycle. As the amplitude of the sine wave exceeds the width of the negative resistance region and the voltage swing extends into regions of the curve with positive differential resistance, the average negative differential resistance becomes smaller, so the total resistance and the damping become less negative and eventually turn positive. Therefore, the oscillations will stabilize at the amplitude at which the damping becomes zero, which is when r = R.
Gunn diodes have negative resistance in the range −5 to −25 ohms. In oscillators where R is close to r (just small enough to allow the oscillator to start), the voltage swing will be mostly limited to the linear portion of the I–V curve, the output waveform will be nearly sinusoidal, and the frequency will be most stable. In circuits in which R is far below r, the swing extends further into the nonlinear part of the curve, the clipping distortion of the output sine wave is more severe, and the frequency will be increasingly dependent on the supply voltage.
Types of circuit
Negative resistance oscillator circuits can be divided into two types, which are used with the two types of negative differential resistance: voltage controlled (VCNR) and current controlled (CCNR).
Negative resistance (voltage controlled) oscillator: Since VCNR ("N" type) devices require a low impedance bias and are stable for load impedances less than r, the ideal oscillator circuit for this device has the form shown at top right, with a voltage source Vbias to bias the device into its negative resistance region, and parallel resonant circuit load LC. The resonant circuit has high impedance only at its resonant frequency, so the circuit will be unstable and oscillate only at that frequency.
Negative conductance (current controlled) oscillator: CCNR ("S" type) devices, in contrast, require a high impedance bias and are stable for load impedances greater than r. The ideal oscillator circuit is like that at bottom right, with a current source bias Ibias (which may consist of a voltage source in series with a large resistor) and series resonant circuit LC. The series LC circuit has low impedance only at its resonant frequency and so will only oscillate there.
Conditions for oscillation
Most oscillators are more complicated than the Gunn diode example, since both the active device and the load may have reactance (X) as well as resistance (R). Modern negative resistance oscillators are designed by a frequency domain technique due to Kaneyuki Kurokawa. The circuit diagram is imagined to be divided by a "reference plane" (red) which separates the negative resistance part, the active device, from the positive resistance part, the resonant circuit and output load (right). The complex impedance of the negative resistance part, ZN(I, ω) = RN + jXN, depends on frequency ω but is also nonlinear, in general declining with the amplitude of the AC oscillation current I; the resonator part, ZL(ω) = RL + jXL, is linear, depending only on frequency. The circuit equation is (ZN + ZL) I = 0, so it will only oscillate (have nonzero I) at the frequency ω and amplitude I for which the total impedance is zero. This means the magnitude of the negative and positive resistances must be equal, and the reactances must be conjugate:

|RN(I, ω)| ≥ RL(ω)   and   XN(I, ω) + XL(ω) = 0

For steady-state oscillation the equal sign applies. During startup the inequality applies, because the circuit must have excess negative resistance for oscillations to start.
Alternately, the condition for oscillation can be expressed using the reflection coefficient. The voltage waveform at the reference plane can be divided into a component V1 travelling toward the negative resistance device and a component V2 travelling in the opposite direction, toward the resonator part. The reflection coefficient of the active device, ΓN, is greater than one, while that of the resonator part, ΓL, is less than one. During operation the waves are reflected back and forth in a round trip, so the circuit will oscillate only if |ΓN ΓL| ≥ 1.
As above, the equality gives the condition for steady oscillation, while the inequality is required during startup to provide excess negative resistance. The above conditions are analogous to the Barkhausen criterion for feedback oscillators; they are necessary but not sufficient, so there are some circuits that satisfy the equations but do not oscillate. Kurokawa also derived more complicated sufficient conditions, which are often used instead.
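A toy numeric example shows how the two conditions fix the operating point. The sketch below assumes a device whose negative resistance weakens with amplitude, RN(I) = −r0/(1 + I/Isat), with no device reactance, in series with an RLC load; both the device model and the values are illustrative, not a real diode characteristic.

import numpy as np

# Steady state: |RN(I)| = RL (resistance balance) and XN + XL(w) = 0.
r0, I_sat = 30.0, 0.01           # device model parameters (assumed)
R_L, L, C = 10.0, 1e-9, 1e-12    # series resonator load

# Reactance condition with XN = 0: wL - 1/(wC) = 0  ->  w = 1/sqrt(LC)
w_osc = 1 / np.sqrt(L * C)

# Resistance condition: r0/(1 + I/I_sat) = R_L  ->  I = I_sat (r0/R_L - 1)
I_osc = I_sat * (r0 / R_L - 1)

print(f"oscillation frequency: {w_osc / (2 * np.pi) / 1e9:.2f} GHz")
print(f"steady-state amplitude: {I_osc * 1e3:.0f} mA")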
Amplifiers
Negative differential resistance devices such as Gunn and IMPATT diodes are also used to make amplifiers, particularly at microwave frequencies, but not as commonly as oscillators. Because negative resistance devices have only one port (two terminals), unlike two-port devices such as transistors, the outgoing amplified signal has to leave the device by the same terminals as the incoming signal enters it. Without some way of separating the two signals, a negative resistance amplifier is bilateral; it amplifies in both directions, so it suffers from sensitivity to load impedance and feedback problems. To separate the input and output signals, many negative resistance amplifiers use nonreciprocal devices such as isolators and directional couplers.
Reflection amplifier
One widely used circuit is the reflection amplifier in which the separation is accomplished by a circulator. A circulator is a nonreciprocal solid-state component with three ports (connectors) which transfers a signal applied to one port to the next in only one direction, port 1 to port 2, 2 to 3, and 3 to 1. In the reflection amplifier diagram the input signal is applied to port 1, a biased VCNR negative resistance diode N is attached through a filter F to port 2, and the output circuit is attached to port 3. The input signal is passed from port 1 to the diode at port 2, but the outgoing "reflected" amplified signal from the diode is routed to port 3, so there is little coupling from output to input. The characteristic impedance of the input and output transmission lines, usually 50Ω, is matched to the port impedance of the circulator. The purpose of the filter F is to present the correct impedance to the diode to set the gain. At radio frequencies NR diodes are not pure resistive loads and have reactance, so a second purpose of the filter is to cancel the diode reactance with a conjugate reactance to prevent standing waves.
The filter has only reactive components and so does not absorb any power itself, so power is passed between the diode and the ports without loss. The input signal power to the diode is Pin = V1²/Z0.
The output power from the diode is Pout = V2²/Z0.
So the power gain of the amplifier is the square of the reflection coefficient: GP = Pout/Pin = |V2/V1|² = |Γ|².
Here Γ = (ZN − Z0)/(ZN + Z0), where ZN is the impedance of the diode, the negative resistance −r. Assuming the filter matches the diode to the line so the diode presents a pure resistance −r, the gain is GP = ((Z0 + r)/(Z0 − r))².
The VCNR reflection amplifier above is stable for Z0 < r, while a CCNR amplifier is stable for Z0 > r. It can be seen that the reflection amplifier can have unlimited gain, approaching infinity as r approaches the point of oscillation at r = Z0. This is a characteristic of all NR amplifiers, contrasting with the behavior of two-port amplifiers, which generally have limited gain but are often unconditionally stable. In practice the gain is limited by the backward "leakage" coupling between circulator ports.
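The gain formula can be evaluated directly. The sketch below uses a 50 Ω system and sweeps the diode's negative resistance magnitude down toward Z0 from the VCNR-stable side (Z0 < r); the power gain grows without bound as r approaches Z0.

# Gamma = (Z - Z0)/(Z + Z0) with Z = -r  ->  |Gamma|^2 = ((r + Z0)/(r - Z0))^2
def reflection_gain(r, Z0=50.0):
    gamma = (-r - Z0) / (-r + Z0)
    return abs(gamma) ** 2

for r in (100.0, 75.0, 60.0, 55.0, 51.0):   # stable side: Z0 < r
    print(f"r = {r:5.1f} ohm -> power gain = {reflection_gain(r):8.1f}")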
Masers and parametric amplifiers are extremely low noise NR amplifiers that are also implemented as reflection amplifiers; they are used in applications like radio telescopes.
Switching circuits
Negative differential resistance devices are also used in switching circuits in which the device operates nonlinearly, changing abruptly from one state to another, with hysteresis. The advantage of using a negative resistance device is that a relaxation oscillator, flip-flop or memory cell can be built with a single active device, whereas the standard logic circuit for these functions, the Eccles-Jordan multivibrator, requires two active devices (transistors). Three switching circuits built with negative resistances are
Astable multivibrator – a circuit with two unstable states, in which the output periodically switches back and forth between the states. The time it remains in each state is determined by the time constant of an RC circuit. Therefore, it is a relaxation oscillator, and can produce square waves or triangle waves.
Monostable multivibrator – is a circuit with one unstable state and one stable state. When in its stable state a pulse is applied to the input, the output switches to its other state and remains in it for a period of time dependent on the time constant of the RC circuit, then switches back to the stable state. Thus the monostable can be used as a timer or delay element.
Bistable multivibrator or flip flop – is a circuit with two stable states. A pulse at the input switches the circuit to its other state. Therefore, bistables can be used as memory circuits, and digital counters.
Other applications
Neuronal models
Some neurons display regions of negative slope conductance (RNSC) in voltage-clamp experiments. Negative resistance is implied here when the neuron is modeled as a typical Hodgkin–Huxley style circuit.
History
Negative resistance was first recognized during investigations of electric arcs, which were used for lighting during the 19th century. In 1881 Alfred Niaudet had observed that the voltage across arc electrodes decreased temporarily as the arc current increased, but many researchers thought this was a secondary effect due to temperature. The term "negative resistance" was applied by some to this effect, but the term was controversial because it was known that the resistance of a passive device could not be negative. Beginning in 1895 Hertha Ayrton, extending her husband William's research with a series of meticulous experiments measuring the I–V curve of arcs, established that the curve had regions of negative slope, igniting controversy. Frith and Rodgers in 1896, with the support of the Ayrtons, introduced the concept of differential resistance, dv/di, and it was slowly accepted that arcs had negative differential resistance. In recognition of her research, Hertha Ayrton became the first woman voted for induction into the Institution of Electrical Engineers.
Arc transmitters
George Francis FitzGerald first realized in 1892 that if the damping resistance in a resonant circuit could be made zero or negative, it would produce continuous oscillations. In the same year Elihu Thomson built a negative resistance oscillator by connecting an LC circuit to the electrodes of an arc, perhaps the first example of an electronic oscillator. William Duddell, a student of Ayrton at London Central Technical College, brought Thomson's arc oscillator to public attention. Due to its negative resistance, the current through an arc was unstable, and arc lights would often produce hissing, humming, or even howling noises. In 1899, investigating this effect, Duddell connected an LC circuit across an arc and the negative resistance excited oscillations in the tuned circuit, producing a musical tone from the arc. To demonstrate his invention Duddell wired several tuned circuits to an arc and played a tune on it. Duddell's "singing arc" oscillator was limited to audio frequencies. However, in 1903 Danish engineers Valdemar Poulsen and P. O. Pederson increased the frequency into the radio range by operating the arc in a hydrogen atmosphere in a magnetic field, inventing the Poulsen arc radio transmitter, which was widely used until the 1920s.
Vacuum tubes
By the early 20th century, although the physical causes of negative resistance were not understood, engineers knew it could generate oscillations and had begun to apply it. Heinrich Barkhausen in 1907 showed that oscillators must have negative resistance. Ernst Ruhmer and Adolf Pieper discovered that mercury vapor lamps could produce oscillations, and by 1912 AT&T had used them to build amplifying repeaters for telephone lines.
In 1918 Albert Hull at GE discovered that vacuum tubes could have negative resistance in parts of their operating ranges, due to a phenomenon called secondary emission. In a vacuum tube when electrons strike the plate electrode they can knock additional electrons out of the surface into the tube. This represents a current away from the plate, reducing the plate current. Under certain conditions increasing the plate voltage causes a decrease in plate current. By connecting an LC circuit to the tube Hull created an oscillator, the dynatron oscillator. Other negative resistance tube oscillators followed, such as the magnetron invented by Hull in 1920.
The negative impedance converter originated from work by Marius Latour around 1920. He was also one of the first to report negative capacitance and inductance. A decade later, vacuum tube NICs were developed as telephone line repeaters at Bell Labs by George Crisson and others, which made transcontinental telephone service possible. Transistor NICs, pioneered by Linvill in 1953, initiated a great increase in interest in NICs and many new circuits and applications developed.
Solid state devices
Negative differential resistance in semiconductors was observed around 1909 in the first point-contact junction diodes, called cat's whisker detectors, by researchers such as William Henry Eccles and G. W. Pickard. They noticed that when junctions were biased with a DC voltage to improve their sensitivity as radio detectors, they would sometimes break into spontaneous oscillations. However the effect was not pursued.
The first person to exploit negative resistance diodes practically was Russian radio researcher Oleg Losev, who in 1922 discovered negative differential resistance in biased zincite (zinc oxide) point contact junctions. He used these to build solid-state amplifiers, oscillators, and amplifying and regenerative radio receivers, 25 years before the invention of the transistor. Later he even built a superheterodyne receiver. However his achievements were overlooked because of the success of vacuum tube technology. After ten years he abandoned research into this technology (dubbed "Crystodyne" by Hugo Gernsback), and it was forgotten.
The first widely used solid-state negative resistance device was the tunnel diode, invented in 1957 by Japanese physicist Leo Esaki. Because they have lower parasitic capacitance than vacuum tubes due to their small junction size, diodes can function at higher frequencies, and tunnel diode oscillators proved able to produce power at microwave frequencies, above the range of ordinary vacuum tube oscillators. Its invention set off a search for other negative resistance semiconductor devices for use as microwave oscillators, resulting in the discovery of the IMPATT diode, Gunn diode, TRAPATT diode, and others. In 1969 Kurokawa derived conditions for stability in negative resistance circuits. Currently negative differential resistance diode oscillators are the most widely used sources of microwave energy, and many new negative resistance devices have been discovered in recent decades.
Notes
References
Further reading
Electricity
Electronics concepts
Microwave technology
Physical quantities | Negative resistance | Physics,Mathematics | 11,309 |
3,332,762 | https://en.wikipedia.org/wiki/Naphthalenesulfonic%20acid | Naphthalenesulfonic acid may refer to:
Naphthalene-1-sulfonic acid
Naphthalene-2-sulfonic acid
Sulfonic acids | Naphthalenesulfonic acid | Chemistry | 39 |
60,944,280 | https://en.wikipedia.org/wiki/Scanning%20helium%20microscopy | The scanning helium microscope (SHeM) is a form of microscopy that uses low-energy (5–100 meV) neutral helium atoms to image the surface of a sample without any damage to the sample caused by the imaging process. Since helium is inert and neutral, it can be used to study delicate and insulating surfaces. Images are formed by rastering a sample underneath an atom beam and monitoring the flux of atoms that are scattered into a detector at each point.
The technique is different from a scanning helium ion microscope, which uses charged helium ions that can cause damage to a surface.
Motivation
Microscopes can be divided into two general classes: those that illuminate the sample with a beam, and those that use a physical scanning probe. Scanning probe microscopies raster a small probe across the surface of a sample and monitor the interaction of the probe with the sample. The resolution of scanning probe microscopies is set by the size of the interaction region between the probe and the sample, which can be sufficiently small to allow atomic resolution. Using a physical tip (e.g. AFM or STM) does have some disadvantages, though, including a reasonably small imaging area and difficulty in observing structures with a large height variation over a small lateral distance.
Microscopes that use a beam have a fundamental limit on the minimum resolvable feature size, d, which is given by the Abbe diffraction limit,

d = λ / (2 n sin θ)

where λ is the wavelength of the probing wave, n is the refractive index of the medium the wave is travelling in, and the wave converges to a spot with a half-angle of θ. While it is possible to overcome the diffraction limit on resolution by using a near-field technique, it is usually quite difficult. Since the denominator of the above equation for the Abbe diffraction limit will be approximately two at best, the wavelength of the probe is the main factor in determining the minimum resolvable feature, which is typically about 1 μm for optical microscopy.
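Plugging in numbers makes the limit concrete. The short sketch below evaluates the Abbe limit for green light at a few illustrative values of n sin θ; even at the best-case denominator of 2, the resolution floor sits at a few hundred nanometres.

# Abbe diffraction limit: d = lambda / (2 n sin(theta)).
wavelength = 550e-9                     # green light, metres
for n_sin_theta in (0.25, 0.5, 1.0):    # illustrative numerical apertures
    d = wavelength / (2 * n_sin_theta)
    print(f"n sin(theta) = {n_sin_theta:4.2f} -> d = {d * 1e9:6.0f} nm")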
To overcome the diffraction limit, a probe that has a smaller wavelength is needed, which can be achieved using either light with a higher energy, or through using a matter wave.
X-rays have a much smaller wavelength than visible light, and therefore can achieve superior resolutions when compared to optical techniques. Projection X-ray imaging is conventionally used in medical applications, but high resolution imaging is achieved through scanning transmission X-ray microscopy (STXM). By focussing the X-rays to a small point and rastering across a sample, a very high resolution can be obtained with light. The small wavelength of X-rays comes at the expense of a high photon energy, meaning that X-rays can cause radiation damage. Additionally, X-rays are weakly interacting, so they will primarily interact with the bulk of the sample, making investigations of a surface difficult.
Matter waves have a much shorter wavelength than visible light and therefore can be used to study features below about 1 μm. The advent of electron microscopy opened up a variety of new materials that could be studied due to the enormous improvement in the resolution when compared to optical microscopy.
The de Broglie wavelength, λ, of a matter wave in terms of its kinetic energy, E, and particle mass, m, is given by

λ = h / √(2mE)

where h is the Planck constant.
Hence, for an electron beam to resolve atomic structure, the wavelength of the matter wave would need to be at least λ = 1 Å, and therefore the beam energy would need to be E > 100 eV.
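Rearranging the relation to E = h²/(2mλ²) and evaluating it for a 1 Å wavelength reproduces both figures quoted in this section: roughly 150 eV for an electron (hence "> 100 eV") and roughly 20 meV for a helium-4 atom.

# E = h^2 / (2 m lambda^2) for a target de Broglie wavelength of 1 angstrom.
h = 6.626e-34                       # Planck constant, J s
eV = 1.602e-19                      # joules per electronvolt
masses = {"electron": 9.109e-31, "helium-4": 6.646e-27}   # kg

lam = 1e-10                         # 1 angstrom
for name, m in masses.items():
    E = h**2 / (2 * m * lam**2)
    print(f"{name}: E = {E / eV:.3g} eV")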
Since electrons are charged, they can be manipulated using electromagnetic optics to form extremely small spot sizes on a surface. Due to the wavelength of an electron beam being low, the Abbe diffraction limit can be pushed below atomic resolution and electromagnetic lenses can be used to form very intense spots on the surface of a material. The optics in a scanning electron microscope usually require the beam energy to be in excess of 1 keV to produce the best-quality electron beam.
The high energy of the electrons leads to the electron beam interacting not only with the surface of a material, but forming a tear-drop interaction volume underneath the surface. While the spot size on the surface can be extremely low, the electrons will travel into the bulk and continue interacting with the sample. Transmission electron microscopy avoids the bulk interaction by only using thin samples, however usually the electron beam interacting with the bulk will limit the resolution of a scanning electron microscope.
The electron beam can also damage the material, destroying the structure that is to be studied due to the high beam energy. Electron beam damage can occur through a variety of different processes that are specimen-specific. Examples of beam damage include the breaking of bonds in a polymer, which changes the structure, and knock-on damage in metals that creates a vacancy in the lattice, which changes to the surface chemistry. Additionally, the electron beam is charged, which means that the surface of the sample needs to be conducting to avoid artefacts of charge accumulation in images. One method to mitigate the issue when imaging insulating surfaces is to use an environmental scanning electron microscope (ESEM).
Therefore, in general, electrons are often not particularly suited to studying delicate surfaces due to the high beam energy and lack of exclusive surface sensitivity. Instead, an alternative beam is required for the study of surfaces at low energy without disturbing the structure.
Given the equation for the de Broglie wavelength above, the same wavelength of a beam can be achieved at lower energies by using a beam of particles that have a higher mass. Thus, if the objective were to study the surface of a material at a resolution that is below that which can be achieved with optical microscopy, it may be appropriate to use atoms as a probe instead. While neutrons can be used as a probe, they are weakly interacting with matter and can only study the bulk structure of a material. Neutron imaging also requires a high flux of neutrons, which usually can only be provided by a nuclear reactor or particle accelerator.
A beam of helium atoms with a wavelength of λ = 1 Å has an energy of 20 meV, which is about the same as the thermal energy. Using particles of a higher mass than that of an electron means that it is possible to obtain a beam with a wavelength suitable to probe length scales down to the atomic level with a much lower energy.
Thermal energy helium atom beams are exclusively surface sensitive, giving helium scattering an advantage over other techniques such as electron and x-ray scattering for surface studies. For the beam energies that are used, the helium atoms will have classical turning points 2–3 Å away from the surface atom cores. The turning point is well above the surface atom cores, meaning that the beam will only interact with the outermost electrons.
History
The first discussion of obtaining an image of a surface using atoms was by King and Bigas, who showed that an image of a surface can be obtained by heating a sample and monitoring the atoms that evaporate from the surface. King and Bigas suggest that it could be possible to form an image by scattering atoms from the surface, though it was some time before this was demonstrated.
The idea of imaging with atoms instead of light was subsequently widely discussed in the literature.
The initial approach to producing a helium microscope assumed that a focussing element is required to produce a high intensity beam of atoms. An early approach was to develop an atomic mirror, which is appealing since the focussing is independent of the velocity distribution of the incoming atoms. However, producing an appropriate surface that is macroscopically curved and defect-free on an atomic length-scale has so far proved too challenging.
Metastable atoms are atoms that have been excited out of the ground state, but remain in an excited state for a significant period of time. Microscopy using metastable atoms has been shown to be possible, where the metastable atoms release stored internal energy into the surface, releasing electrons that provide information on the electronic structure. The kinetic energy of the metastable atoms means that only the surface electronic structure is probed, but the large energy exchange when the metastable atom de-excites will still perturb delicate sample surfaces.
The first two-dimensional neutral helium images were obtained using a conventional Fresnel zone plate by Koch et al. in a transmission setup. Helium will not pass through a solid material, therefore a large change in the measured signal is obtained when a sample is placed between the source and the detector. By maximising the contrast and using transmission mode, it was much easier to verify the feasibility of the technique. However, the setup used by Koch et al. with a zone plate did not produce a high enough signal to observe the reflected signal from the surface at the time. Nevertheless, the focussing obtained with a zone plate offers the potential for improved resolution due to the small beam spot size in the future. Research into neutral helium microscopes that use a Fresnel zone plate is an active area in Holst’s group at the University of Bergen.
Since using a zone plate proved difficult due to the low focussing efficiency, alternative methods for forming a helium beam to produce images with atoms were explored.
Recent efforts have avoided focussing elements and instead are directly collimating a beam with a pinhole. The lack of atom optics means that the beam width will be significantly larger than in an electron microscope. The first published demonstration of a two-dimensional image formed by helium reflecting from the surface was by Witham and Sánchez, who used a pinhole to form the helium beam. A small pinhole is placed very close to a sample and the helium scattered into a large solid angle is fed to a detector. Images are collected by moving the sample around underneath the beam and monitoring how the scattered helium flux changes.
In parallel to the work by Witham and Sánchez, a proof of concept machine named the scanning helium microscope (SHeM) was being developed in Cambridge in collaboration with Dastoor's group from the University of Newcastle. The approach that was adopted was to simplify previous attempts that involved an atom mirror by using a pinhole, but to still use a conventional helium source to produce a high quality beam. Other differences from the Witham and Sánchez design include using a larger sample to pinhole distance, so that a larger variety of samples can be used, and using a smaller collection solid angle, so that it may be possible to observe more subtle contrast. These changes also reduced the total flux in the detector, meaning that higher efficiency detectors are required (which in itself is an active area of research).
Image formation process
The atomic beam is formed through a supersonic expansion, which is a standard technique used in helium atom scattering. The centreline of the gas is selected by a skimmer to form an atom beam with a narrow velocity distribution. The gas is then further collimated by a pinhole to form a narrow beam, typically between 1 and 10 μm wide. The use of a focusing element (such as a zone plate) allows beam spot sizes below 1 μm to be achieved, but currently still comes with low signal intensity.
The gas then scatters from the surface and is collected into a detector. In order to measure the flux of the neutral helium atoms, they must first be ionised. The inertness of helium that makes it a gentle probe means that it is difficult to ionise and therefore reasonably aggressive electron bombardment is typically used to create the ions. A mass spectrometer setup is then used to select only the helium ions for detection.
Once the flux from a specific part of the surface is collected, the sample is moved underneath the beam to generate an image. By obtaining the value of the scattered flux across a grid of positions, the values can then be converted to an image.
The observed contrast in helium images has typically been dominated by the variation in topography of the sample. Since the wavelength of the atom beam is small, surfaces appear extremely rough to the incoming atom beam. Therefore, the atoms are diffusely scattered and roughly follow Knudsen's law (the atom equivalent of Lambert's cosine law in optics). However, more recent work has begun to observe divergence from diffuse scattering due to effects such as diffraction and chemical contrast. The exact mechanisms for forming contrast in a helium microscope are an active field of research; most cases involve some complex combination of several contrast mechanisms, making it difficult to disentangle the different contributions.
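A minimal sketch of cosine-law (diffuse) scattering shows how topography alone produces contrast: the detected flux scales with the cosine of the angle between the local surface normal and the detector direction, so facets tilted toward the detector appear brighter. The 45° detector geometry and the facet tilts below are arbitrary assumptions.

import numpy as np

# Diffuse (cosine-law) scattering: flux ~ cos(angle between local surface
# normal and the detector direction); negative values mean the detector
# is below the facet's horizon, so no signal.
detector = np.array([np.sin(np.radians(45)), 0.0, np.cos(np.radians(45))])

def relative_signal(tilt_deg):
    t = np.radians(tilt_deg)
    normal = np.array([np.sin(t), 0.0, np.cos(t)])   # facet tilted about y
    return max(float(np.dot(normal, detector)), 0.0)

for tilt in (-30, 0, 30, 60):
    print(f"facet tilt {tilt:+3d} deg -> relative signal {relative_signal(tilt):.2f}")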
Combinations of images from multiple perspectives allows stereophotogrammetry to produce partial three dimensional images, especially valuable for biological samples subject to degradation in electron microscopes.
Optimal configurations
The optimal configurations of scanning helium microscopes are geometrical configurations that maximise the intensity of the imaging beam within a given lateral resolution and under certain technological constraints.
When designing a scanning helium microscope, scientists strive to maximise the intensity of the imaging beam while minimising its width. The reason behind this is that the beam's width gives the resolution of the microscope while its intensity is proportional to its signal to noise ratio. Due to their neutrality and high ionisation energy, neutral helium atoms are hard to detect. This makes high-intensity beams a crucial requirement for a viable scanning helium microscope.
In order to generate a high-intensity beam, scanning helium microscopes are designed to generate a supersonic expansion of the gas into vacuum, that accelerates neutral helium atoms to high velocities. Scanning helium microscopes exist in two different configurations: the pinhole configuration and the zone plate configuration. In the pinhole configuration, a small opening (the pinhole) selects a section of the supersonic expansion far away from its origin, which has previously been collimated by a skimmer (essentially, another small pinhole). This section then becomes the imaging beam. In the zone plate configuration a Fresnel zone plate focuses the atoms coming from a skimmer into a small focal spot.
Each of these configurations have different optimal designs, as they are defined by different optics equations.
Pinhole configuration
For the pinhole configuration the width of the beam (which we aim to minimise) is largely given by geometrical optics. The size of the beam at the sample plane is given by the lines connecting the skimmer edges with the pinhole edges. When the Fresnel number is very small (F ≪ 1), the beam width is also affected by Fraunhofer diffraction.
In the corresponding equation, the full width at half maximum of the beam combines the geometrical projection of the beam with an Airy diffraction term, with a Heaviside step function indicating that the diffraction term only appears at small Fresnel numbers. Note that there are variations of this equation depending on what is defined as the "beam width". Due to the small wavelength of the helium beam, the Fraunhofer diffraction term can usually be omitted.
The intensity of the beam (which we aim to maximise) is given, according to the Sikora and Andersen model, in terms of the total intensity stemming from the supersonic expansion nozzle (taken as a constant in the optimisation problem), the radius of the pinhole, the speed ratio of the beam, the radius of the skimmer, the radius of the supersonic expansion quitting surface (the point in the expansion from which atoms can be considered to travel in a straight line), the distance between the nozzle and the skimmer, and the distance a between the skimmer and the pinhole. There are several other versions of this equation depending on the intensity model, but they all show a quadratic dependency on the pinhole radius (the bigger the pinhole, the more intensity) and an inverse quadratic dependency on the distance between the skimmer and the pinhole (the more the atoms spread, the less intensity).
By combining the two equations shown above, one can obtain, for a given beam width in the geometrical optics regime, the parameter values that correspond to intensity maxima.
These optima are expressed in terms of the working distance of the microscope and a constant that stems from the definition of the beam width; both are given with respect to the distance a between the skimmer and the pinhole. The global maximum of intensity can then be obtained numerically by replacing these values in the intensity equation above. In general, smaller skimmer radii coupled with smaller distances between the skimmer and the pinhole are preferred, leading in practice to the design of increasingly smaller pinhole microscopes. A toy numerical illustration of the underlying trade-off follows.
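The sketch below is only a caricature of the real optimisation: it keeps the two stated dependencies (intensity proportional to the pinhole radius squared, and inversely proportional to the skimmer-pinhole distance squared) and a similar-triangles estimate of the geometrical beam half-width at the sample, r_p + (r_s + r_p) * WD / a. All numerical values and the width formula itself are assumptions made for illustration, not taken from the Sikora and Andersen model.

import numpy as np

# Grid search: for each skimmer-pinhole distance a, pick the largest pinhole
# radius r_p meeting the beam width constraint, then score intensity ~ r_p^2/a^2.
r_s = 50e-6          # skimmer radius (assumed), m
WD = 3e-3            # working distance (assumed), m
w_max = 5e-6         # required beam half-width at the sample, m

best = (0.0, None, None)
for a in np.linspace(5e-3, 200e-3, 400):
    r_p = (w_max - r_s * WD / a) / (1 + WD / a)   # width constraint, solved for r_p
    if r_p <= 0:
        continue                                   # constraint unsatisfiable here
    intensity = r_p**2 / a**2                      # relative intensity score
    if intensity > best[0]:
        best = (intensity, a, r_p)

intensity, a, r_p = best
print(f"optimum: a = {a * 1e3:.0f} mm, r_p = {r_p * 1e6:.1f} um")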
Zone plate configuration
The zone plate microscope uses a zone plate (which acts roughly like a classical lens) instead of a pinhole to focus the atom beam into a small focal spot. This means that the beam width equation changes significantly.
Here the beam width depends on the zone plate magnification and on the width of the smallest zone, and it includes a chromatic aberration term. The equation simplifies in the regime in which the distance between the zone plate and the skimmer is much bigger than its focal length.
The first term in this equation is similar to the geometric contribution in the pinhole case: a bigger zone plate (taking all other parameters constant) corresponds to a bigger focal spot size. The third term differs from the pinhole configuration optics, as it includes a quadratic relation with the skimmer size (which is imaged through the zone plate) and a linear relation with the zone plate magnification, which in turn depends on the zone plate radius.
The equation to maximise, the intensity, is the same as in the pinhole case, with the pinhole radius replaced by the zone plate radius. Substituting the magnification equation expresses the intensity in terms of the average de Broglie wavelength of the beam. Taking the width of the smallest zone as a constant, which should be made equal to the smallest achievable value, the maxima of the intensity equation with respect to the zone plate radius and the skimmer–zone plate distance can be obtained analytically. The derivative of the intensity with respect to the zone plate radius, once set equal to zero, can be reduced to a cubic equation.
Two groupings are used in this cubic: a constant that gives the relative size of the smallest aperture of the zone plate compared with the average wavelength of the beam, and a modified beam width, which is used through the derivation to avoid explicitly operating with the constant Airy term.
This cubic equation is obtained under a series of geometrical assumptions and has a closed-form analytical solution that can be consulted in the original paper or obtained through any modern-day algebra software. The practical consequence of this equation is that zone plate microscopes are optimally designed when the distances between the components are small, and the radius of the zone plate is also small. This goes in line with the results obtained for the pinhole configuration, and has as its practical consequence the design of smaller scanning helium microscopes.
See also
Helium atom scattering
Atom optics
Atomic mirror
Matter wave
References
Microscopes
Nanotechnology
Atomic, molecular, and optical physics | Scanning helium microscopy | Physics,Chemistry,Materials_science,Technology,Engineering | 3,932 |
61,019,132 | https://en.wikipedia.org/wiki/Social%20golfer%20problem | In discrete mathematics, the social golfer problem (SGP) is a combinatorial-design problem derived from a question posted in the usenet newsgroup sci.op-research in May 1998. The problem is as follows: 32 golfers play golf once a week in groups of 4. Schedule these golfers to play for as many weeks as possible without any two golfers playing together in a group more than once.
More generally, this problem can be defined for any n golfers who play in groups of k golfers for w weeks. The solution involves either verifying or refuting the existence of a schedule and, if such a schedule exists, determining the number of unique schedules and constructing them.
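A schedule is valid exactly when no pair of golfers shares a group twice, which is straightforward to check. The sketch below validates a toy 4-golfer instance; the schedule representation (a list of weeks, each a list of groups) is just one convenient choice.

from itertools import combinations

def is_valid(schedule):
    """True if no pair of golfers is grouped together more than once."""
    seen = set()
    for week in schedule:
        for group in week:
            for pair in combinations(sorted(group), 2):
                if pair in seen:
                    return False
                seen.add(pair)
    return True

weeks = [                                     # 4 golfers, groups of 2
    [(0, 1), (2, 3)],
    [(0, 2), (1, 3)],
    [(0, 3), (1, 2)],
]
print(is_valid(weeks))                        # True: all 3 weeks are valid
print(is_valid(weeks + [[(0, 1), (2, 3)]]))   # False: pair (0, 1) repeats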
Challenges
The SGP is a challenging problem to solve for two main reasons:
First is the large search space resulting from the combinatorial and highly symmetrical nature of the problem. The number of candidate schedules is vast, and for each schedule the weeks, the groups within each week, the players within each group, and the individual players can all be permuted or relabelled. This leads to an enormous number of isomorphisms, schedules that are identical through any of these symmetry operations (a rough count is sketched below). Due to its high symmetry, the SGP is commonly used as a standard benchmark in symmetry breaking in constraint programming (symmetry-breaking constraints).
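One common way to count these symmetries (assumed independent here, as in the usual SGP analyses) multiplies the permutations of the w weeks, of the g groups inside each week, of the p players inside each group, and the n! relabellings of the golfers.

from math import factorial

n, p, g, w = 32, 4, 8, 10                   # golfers, group size, groups/week, weeks
symmetries = (factorial(w)                  # permute the weeks
              * factorial(g) ** w           # permute groups within each week
              * factorial(p) ** (g * w)     # permute players within each group
              * factorial(n))               # relabel the golfers
print(f"{symmetries:.3e} symmetric variants per schedule")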
Second is the choice of variables. The SGP can be seen as an optimization problem to maximize the number of weeks in the schedule. Hence, incorrectly defined initial points and other variables in the model can lead the process to an area in the search space with no solution.
Solutions
The SGP is closely related to Steiner systems, since the 32 golfers are divided into groups of 4 and any 2 golfers may play together in at most one group. Soon after the problem was proposed in 1998, a solution for 9 weeks was found and the existence of a solution for 11 weeks was proven to be impossible. In the case of the latter, note that each player must play with 3 unique players each week. For a schedule lasting 11 weeks, a player would be grouped with a total of 3 × 11 = 33 other players. Since there are only 31 other players, this is not possible. A solution for 10 weeks could be obtained from results already published in 1996. It was independently rediscovered using a different method in 2004, which is the solution presented below.
There are many approaches to solving the SGP, namely
design theory techniques,
SAT formulations (propositional satisfiability problem),
constraint-based approaches, metaheuristic methods, and the radix approach.
The radix approach assigns golfers into groups based on the addition of numbers in base k. Variables in the general case of the SGP can be redefined as k² golfers who play in groups of k golfers, for any number k. The maximum number of weeks that these golfers can play without regrouping any two golfers is then (k² − 1)/(k − 1) = k + 1.
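A concrete instance of this idea (assuming k is prime, so that base-k arithmetic gives an affine plane) labels the k² golfers by base-k digit pairs (x, y); week s groups golfers whose digits satisfy y ≡ c + s·x (mod k), and one extra week groups by the leading digit alone. Any two distinct golfers then share at most one group, giving the full k + 1 weeks. This construction is a sketch of the radix idea, not code from the cited work.

from itertools import combinations
from collections import Counter

def radix_schedule(k):
    """k*k golfers as base-k pairs (x, y): k 'slope' weeks plus one 'column' week."""
    weeks = [[[(x, (c + s * x) % k) for x in range(k)] for c in range(k)]
             for s in range(k)]
    weeks.append([[(x, y) for y in range(k)] for x in range(k)])
    return weeks

sched = radix_schedule(5)                     # 25 golfers in groups of 5
pair_counts = Counter(pair
                      for week in sched for group in week
                      for pair in combinations(sorted(group), 2))
print(len(sched), "weeks; max times any pair meets:", max(pair_counts.values()))
# prints: 6 weeks; max times any pair meets: 1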
Applications
Working in groups is encouraged in classrooms because it fosters active learning and development of critical-thinking and communication skills. The SGP has been used to assign students into groups in undergraduate chemistry classes and breakout rooms in online meeting software to maximize student interaction and socialization.
The SGP has also been used as a model to study tournament scheduling.
See also
Steiner system
Kirkman's schoolgirl problem
Euler's officer problem
Round-robin tournament
References
External links
Wolfram Community: Radix Approach to Solving the Social Golfer Problem and Graph Visualization
Wolfram Mathworld: Social Golfer Problem
Combinatorial design
Mathematical problems
Families of sets | Social golfer problem | Mathematics | 721 |
2,338,230 | https://en.wikipedia.org/wiki/Southern%20Local%20Supervoid | The Southern Local Supervoid is a tremendously large, nearly empty region of space (a void).
It lies next to the Local Supercluster, which contains our galaxy the Milky Way. Its center is 96 megaparsecs away and the void is 112 megaparsecs in diameter across its narrowest width. Its volume is very approximately 600 billion times that of the Milky Way. See volumes of similar orders of magnitude.
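The "600 billion" figure can be roughly sanity-checked: treating the void as a 112-Mpc-diameter sphere and the Milky Way as a disk of radius 20 kpc and thickness 1 kpc (assumed round numbers; the article itself gives no dimensions for the galaxy) yields a ratio of the right order.

import math

void_volume = (4 / 3) * math.pi * (112 / 2) ** 3    # sphere, Mpc^3
mw_volume = math.pi * 0.020 ** 2 * 0.001            # disk, Mpc^3 (assumed size)
print(f"volume ratio ~ {void_volume / mw_volume:.1e}")   # ~6e11, i.e. ~600 billion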
See also
List of largest voids
KBC Void
References
Voids (astronomy) | Southern Local Supervoid | Astronomy | 107 |
22,059,695 | https://en.wikipedia.org/wiki/June%202038%20lunar%20eclipse | A penumbral lunar eclipse will occur at the Moon’s descending node of orbit on Thursday, June 17, 2038, with an umbral magnitude of −0.5259. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Occurring about 2.7 days after perigee (on June 14, 2038, at 11:30 UTC), the Moon's apparent diameter will be larger.
This eclipse will be the second of four penumbral lunar eclipses in 2038, with the others occurring on January 21, July 16, and December 11.
Visibility
The eclipse will be completely visible over eastern North America, South America, west and southern Africa, and western Europe, seen rising over western North America and the eastern Pacific Ocean and setting over northeast Africa, eastern Europe, and the Middle East.
Eclipse details
Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight; the first and last eclipses in this sequence are separated by one synodic month.
Related eclipses
Eclipses in 2038
An annular solar eclipse on January 5.
A penumbral lunar eclipse on January 21.
A penumbral lunar eclipse on June 17.
An annular solar eclipse on July 2.
A penumbral lunar eclipse on July 16.
A penumbral lunar eclipse on December 11.
A total solar eclipse on December 26.
Metonic
Followed by: Lunar eclipse of April 5, 2042
Tzolkinex
Preceded by: Lunar eclipse of May 7, 2031
Half-Saros
Preceded by: Solar eclipse of June 12, 2029
Followed by: Solar eclipse of June 23, 2047
Tritos
Preceded by: Lunar eclipse of July 18, 2027
Followed by: Lunar eclipse of May 17, 2049
Lunar Saros 111
Preceded by: Lunar eclipse of June 5, 2020
Followed by: Lunar eclipse of June 27, 2056
Inex
Preceded by: Lunar eclipse of July 7, 2009
Followed by: Lunar eclipse of May 28, 2067
Triad
Preceded by: Lunar eclipse of August 17, 1951
Followed by: Lunar eclipse of April 18, 2125
Lunar eclipses of 2038–2042
Saros 111
Half-Saros cycle
A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 118.
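The "9 years and 5.5 days" interval follows from the length of the saros, about 6585.32 days: half of that is 3292.66 days, and nine average calendar years account for about 3287.25 of them.

saros_days = 6585.32                 # approximate length of one saros
half = saros_days / 2                # 3292.66 days
nine_years = 9 * 365.25              # nine average calendar years, in days
print(f"half saros = {half:.2f} days = 9 years + {half - nine_years:.1f} days")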
See also
List of lunar eclipses and List of 21st-century lunar eclipses
Notes
External links
2038-06
2038 in science | June 2038 lunar eclipse | Astronomy | 689 |
73,056,460 | https://en.wikipedia.org/wiki/Candle%20Corporation | Candle Corporation was an American software company active from 1976 to 2004. The company spent the first two decades developing system monitoring applications for a variety of IBM mainframes and their corresponding software, their first being OMEGAMON which saw quick widespread adoption in commercial enterprises. In the mid-1990s, the company made pivots toward non-mainframe monitoring software and middleware. IBM acquired the company for between $350 million and $600 million in 2004.
History
1970s – 1980s
Aubrey G. Chernick (born 1949 in Los Angeles, California), the founder of Candle, grew up in Deloraine, Manitoba, after his family moved there from California. After graduating from the University of Manitoba with a Bachelor of Science in chemistry, he landed a job at the university's environmental protection laboratory, performing analyses of the Red River of the North. The minicomputers at the lab were Chernick's first hands-on experience with computers; with a fellow employee, he learned how to program in BASIC. Following this, Chernick deviated from his original career path of medicine to work as a software developer for Computer Sciences Corporation (CSC)'s Canadian subsidiary in Ontario. After being laid off from CSC three months later, he worked as a programmer for Laurentian University, working on IBM's System/360 Model 40 mainframe, and for the Government of Manitoba, where he learned how to operate and code for IBM's MFT and MVS operating systems. These jobs provided Chernick his first experiences with mainframes.
While attending meetings hosted in Ontario by SHARE—a users' group for IBM mainframe personnel—Chernick observed recurring complaints from attendees, who spoke of not being able to satisfy common needs with IBM's operating systems. In 1975, Chernick convinced Canada Life's Ontario branch to let him use their mainframes as a development platform for an application that monitored system performance, in exchange for a bargain license for the final product. The finished software, which he named OMEGAMON/MVS, took roughly five months to develop. Immediately afterward, Chernick established Candle Services Corporation from his apartment in Toronto in October 1976 and began selling OMEGAMON door-to-door to various businesses (one such being Datacrown, where he had unsuccessfully vied for employment). He named the company Candle both to convey enlightenment and warmth and to avoid the glut of tech-jargon-heavy names. In 1977, Chernick moved the company to West Los Angeles and abbreviated the company name to Candle Corporation. In California, he gained clients such as Southern California Edison, Northrop, Hughes Aircraft, TRW, and Warner Bros.
Candle employed 52 people in 1980 and logged $4.5 million in revenue for that year; in December alone the company recorded sales of $1 million. In January 1981, the company released OMEGAMON/CICS, a performance monitor oriented toward the financial sector and their transaction processing systems. In September 1982, Candle released MVS, targeting system administrators. Candle's sales in 1981 totaled $10.4 million, while employment grew to 108. That same year, the company established a philanthropic medical and health foundation, the Candle Foundation.
The company reached revenues of $100 million in 1988, a year after acquiring Chicago-based Netserv, Inc. By the end of the next year, Candle employed roughly 700 people. Candle was second in market share in the field of performance measurement software, according to Software Magazine, cornering 32 percent of the market; they were narrowly beaten by IBM (33 percent), with Boole & Babbage third (13 percent). In 1988, the company released AF/OPERATOR, one of the first console automation software packages, and AF/REMOTE, an automation management utility. Shortly after, the company released OMEGAMON for DB/2 relational databases, which saw quick widespread adoption. In June 1989, the company announced OMEGACENTER, an integrated performance management and automation software package for data centers and large companies running local IBM mainframes. OMEGACENTER was one of the first performance measuring applications with a graphical user interface, via the Status Monitor tool.
1990s – 2000s
In summer 1990, Candle released OMEGAMON II, which integrated several of the company's existing applications and built on the graphical user interface of Status Monitor to these integrated functions. In 1991, the company unveiled three more software utilities, including a pair for DB/2 and the OMEGAVIEW status management utility. By the end of 1990, the company reached $151.4 million in sales. In 1991, they were named the 20th largest software company in Software Magazine. In late 1992, Candle moved its headquarters to a 150,000-square-foot building in the Water Garden area of Santa Monica, in the biggest office lease within Los Angeles County in 1992. It also opened a data center of its own, in the outskirts of Los Angeles. In 1993, Candle introduced OMEGACENTER for VMS and OMEGAMON II for SMS. Candle logged $210 million in sales that year; the following year, the company collected revenues of $213 million.
In 1995, the company released Candle Command Center (CCC), a suite of network and systems management software for servers running AIX and MVS and PC-compatible workstations running OS/2. With the release of CCC, Candle began pivoting away from mainframe software, the company simultaneously launching a $500 million research and development initiative to reinforce this pivot. The company also placed its first advertisement in a trade publication to communicate this move.
In 1996, Candle made yet another pivot toward developing middleware for networked computers. To this end, Candle acquired several companies: CleverSoft, Inc., a provider of management tools for servers running Lotus Notes based in Scarborough, Maine; AMSYS North America, a service provider for MQSeries based in Mendon, Massachusetts; PowerQ Software Corporation, a maker of MQSeries software development and testing environments; and Apertus Technologies' MQView for distributed MQSeries installations. Employment in Candle reached 1,200 in 1996, approximately 550 of which working in the company's 29 branch offices. During this time, the company's mainframe software and middleware were used in roughly 5,000 mainframes, and it counted 75 percent of the Fortune 500 among its clientele.
Candle relocated its headquarters again in 1999, moving about 700 employees from Santa Monica to a four-story, 335,000-square-foot building in El Segundo, once occupied by Rockwell International for the design and engineering of the B-1 Lancer. In the same year, the company shifted its focus to e-commerce and launched eBA*ServiceMonitor, a comprehensive monitoring application for online businesses. By the end of the millennium, Candle had reached $382 million in revenue and employed 1,800 people in total.
The company reached the 2,000-employee mark in 2000, the same year sales reached $400 million. In the next year, the company released OMEGAMON XE and DE, configurations of its flagship product centered on e-businesses, and CandleNet eBusiness Platform, a software-as-a-service platform that facilitated the deployment of e-commerce services. The split between mainframe and non-mainframe pursuits within the company was 60-to-40 by this point. In 2002, the company released PathWAI, a line of software packages and consulting services for streamlining the design and development of middleware and web-server back-ends. Candle reached $207 million in revenue in 2002 and $328 million in sales in 2003.
IBM acquired Candle Corporation in April 2004 for an undisclosed sum; the deal was reportedly worth between $350 million and $600 million. After the acquisition, IBM absorbed Candle's assets into its Tivoli Software division.
References
IBM acquisitions
1976 establishments in Ontario
1977 disestablishments in Ontario
1977 establishments in California
2004 disestablishments in California
American companies established in 1976
American companies disestablished in 2004
Canadian companies established in 1976
Canadian companies disestablished in 1977
Defunct software companies of Canada
Defunct software companies of the United States
Software companies established in 1976
Software companies disestablished in 2004
Middleware | Candle Corporation | Technology,Engineering | 1,679 |
48,188,587 | https://en.wikipedia.org/wiki/Cll1 | Toxin Cll1 is a toxin from the venom of the Mexican scorpion Centruroides limpidus limpidus, which changes the activation threshold of sodium channels by binding to neurotoxin binding site 4, resulting in increased excitability.
Etymology and source
The toxin Cll1 is named after the species that produces it, Centruroides limpidus limpidus. Along with Cll1, multiple other toxins are secreted in its venom.
Chemistry
Cll1 is a long-chain neuropeptide belonging to the scorpion toxin superfamily. It is classified as a member of the beta-toxin subfamily.
The overall secondary structure of Cll1 is similar to that of other scorpion beta-toxins, comprising an alpha-helix, a triple-stranded antiparallel beta-sheet, and four disulfide bridges. Its higher affinity for crustacean than for mammalian sodium channels has been attributed to the presence of Trp18, a hydrophobic amino acid at the surface of Cll1.
Target
Like the classical scorpion beta-toxins, Cll1 targets voltage-gated sodium channels (Nav). Beta-toxins bind to the extracellular end of the S4 voltage sensor, at the loop between the third and fourth segments of the second domain. By binding, the toxin alters the voltage-dependent opening of the channel.
Mode of action
Cll1 influences three intrinsic properties of the targeted sodium channel:
Voltage dependent activation
Cll1 binds to transmembrane segment S4 of the voltage-gated sodium channels. Its binding shifts the activation threshold of the sodium channel towards more negative membrane potentials.
Seven isoforms of the voltage-gated sodium channel (Nav1.1–Nav1.7) have been studied in the presence of Cll1. Cll1 affects voltage-dependent activation in almost all of them: it has only a minor effect on the Nav1.1–1.4 and Nav1.7 channels, but a much larger effect on isoform Nav1.6.
Peak current
Cll1 reduces the peak current when bound to voltage-gated sodium channels. This effect was present in six of the seven tested isoforms (Nav1.1–Nav1.6); the only isoform that showed no reduction in peak current was Nav1.7.
Resurgent current
Cll1 can induce resurgent currents, an effect that has also been demonstrated for other beta-scorpion toxins. The resurgent current is strongest in Nav1.6, but it is also present, to a much lesser extent, in other isoforms of the voltage-gated sodium channel.
Toxicity and treatment
The LD50 of the Cll1 toxin in mice is 85 μg/kg. A possible treatment for an intoxication by Cll1 toxin is the use of single chain variable fragments (scFv).
Other possible treatments originate in traditional Mexican medicine. Several herbs used in traditional Mexican medicine, including Bouvardia ternifolia, have been shown to be effective in mice against intoxication by the whole venom of C. limpidus limpidus.
References
Neurotoxins
Scorpion toxins
Ion channel toxins | Cll1 | Chemistry | 670 |
228,626 | https://en.wikipedia.org/wiki/Parazoa | Parazoa (from Greek παρα-, para, "next to", and ζωα, zoa, "animals") are a subkingdom-level taxon at the base of the phylogenetic tree of the animal kingdom, standing in opposition to the subkingdom Eumetazoa. The group comprises the most primitive animal forms, characterized by lacking true tissues or, at most, having only partially differentiated tissues. It generally contains a single phylum, Porifera (the sponges), whose members lack muscles, nerves and internal organs and in many cases resemble a cell colony rather than a true multicellular organism. All other animals are eumetazoans, which do have differentiated tissues.
On occasion, Parazoa unites Porifera with Archaeocyatha, a group of extinct sponges sometimes considered a separate phylum. In other cases, Placozoa is included, depending on the author.
Porifera and Archaeocyatha
Porifera and Archaeocyatha show similarities, such as a benthic, sessile habit and the presence of pores, along with differences, such as the internal walls and septa of Archaeocyatha. They have been considered separate phyla; however, a growing consensus holds that Archaeocyatha was in fact a type of sponge and can be classified within Porifera.
Porifera and Placozoa
Some authors include in Parazoa both the sponges (Porifera) and Placozoa on the basis of shared primitive characteristics: both are simple, lack true tissues and organs, reproduce both asexually and sexually, and are invariably aquatic. In various phylogenetic studies the group sits at the base of the animal tree, albeit as a paraphyletic assemblage. Its only survivors are the sponges, which belong to the phylum Porifera, and Trichoplax, in the phylum Placozoa.
Parazoa do not show any body symmetry (they are asymmetric); all other groups of animals show some kind of symmetry. There are currently about 5,000 species, 150 of which are freshwater. The larvae are planktonic and the adults sessile. The Parazoa–Eumetazoa divergence has been dated to approximately 940 million years ago.
The Parazoa group is now considered paraphyletic. When referenced, it is sometimes considered an equivalent to the Porifera.
Some authors include the Placozoa, a phylum long thought to consist of a single species, Trichoplax adhaerens, in the division, but sometimes it is also placed in the Agnotozoa subkingdom.
Phylogeny
According to the most up-to-date phylogeny, Porifera has no direct relationship with Placozoa. In any case, placozoans appear to be simplified coelenterates, sharing no distinctive characteristics with sponges.
References
External links
Parazoa
Animals | Parazoa | Biology | 631 |
27,478,191 | https://en.wikipedia.org/wiki/Enforce%20In-order%20Execution%20of%20I/O | Enforce In-order Execution of I/O (EIEIO) is an assembly language instruction used on the PowerPC central processing unit (CPU) which prevents one memory or input/output (I/O) operation from starting until the previous memory or I/O operation has completed. This instruction is needed because I/O controllers on the system bus require that accesses follow a particular order, while the CPU reorders accesses to optimize memory-bandwidth usage.
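For illustration, a device driver on PowerPC might place an eieio between a payload write and the doorbell write that triggers the device, so the controller never observes the two stores out of order. The following is a minimal sketch assuming GCC-style inline assembly; the register addresses and the names DEV_DATA, DEV_DOORBELL and send_word are hypothetical.

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers. */
#define DEV_DATA     ((volatile uint32_t *)0x80001000)
#define DEV_DOORBELL ((volatile uint32_t *)0x80001004)

static inline void eieio(void)
{
    /* Barrier: earlier loads/stores complete before later ones start. */
    __asm__ __volatile__("eieio" : : : "memory");
}

void send_word(uint32_t word)
{
    *DEV_DATA = word;   /* 1. write the payload                  */
    eieio();            /* 2. enforce ordering on the system bus */
    *DEV_DOORBELL = 1;  /* 3. only now ring the doorbell         */
}
```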
Where the name comes from
The name is a pun: the old children's song goes "Old MacDonald had a farm, E-I-E-I-O!". In the book Expert C Programming, Peter van der Linden comments that this instruction was "Probably designed by some old farmer named McDonald", adding that "There's nothing wrong with well-placed whimsy."
References
External links
PowerPC Architecture Book version 2.2
IBM computer hardware | Enforce In-order Execution of I/O | Technology | 191 |
21,530,594 | https://en.wikipedia.org/wiki/Grocott%27s%20methenamine%20silver%20stain | In pathology, the Grocott–Gömöri methenamine silver stain, abbreviated GMS, is a popular staining method in histology. The stain was originally named after György Gömöri, the Hungarian physician who developed it.
It is used widely as a screen for fungal organisms. It is particularly useful in staining carbohydrates.
It can be used to identify the yeast-like fungus Pneumocystis jiroveci, which causes a form of pneumonia called Pneumocystis pneumonia (PCP) or pneumocystosis.
The cell walls of these organisms are outlined by the brown to black stain.
The principle of GMS is the reduction of silver ions, which renders the fungal cell wall black. The fungal cell wall commonly contains polysaccharides. In a GMS procedure, chromic acid is first used to oxidize these polysaccharides, generating aldehydes. Grocott's alkaline hexamine–silver solution is then applied, and the aldehydes reduce the silver ions to black amorphous silver. This reduction by the fungal cell wall is often known as the argentaffin reaction.
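Schematically, the two chemical steps can be summarized as follows (a simplified sketch; the stoichiometry of the oxidation step and the chromium by-products are omitted):

```latex
\begin{aligned}
&\text{polysaccharide (vicinal diols)}
   \xrightarrow{\ \text{chromic acid}\ } \text{aldehydes (R-CHO)} \\
&\text{R-CHO} + 2\,\mathrm{Ag^{+}} + 2\,\mathrm{OH^{-}}
   \longrightarrow \text{R-COOH} + 2\,\mathrm{Ag^{0}}\!\downarrow + \mathrm{H_2O}
\end{aligned}
```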
See also
Methenamine
References
Staining | Grocott's methenamine silver stain | Chemistry,Biology | 258 |
1,436,372 | https://en.wikipedia.org/wiki/Transition%20point | In the field of fluid dynamics, the point at which the boundary layer changes from laminar to turbulent is called the transition point. Where and how this transition occurs depends on the Reynolds number, the pressure gradient, pressure fluctuations due to sound, surface vibration, the initial turbulence level of the flow, boundary-layer suction, surface heat flows, and surface roughness. The main effect of a boundary layer turning turbulent is an increase in drag due to skin friction. As speed increases, the upper-surface transition point tends to move forward. As the angle of attack increases, the upper-surface transition point also tends to move forward.
Position
The exact position of the transition point is hard to determine because it depends on a large number of factors. Several methods exist, however, to predict it to a certain degree of accuracy. Most of these methods revolve around analysing the stability of the (laminar) boundary layer using stability theory: a laminar boundary layer may become unstable due to small disturbances, turning it turbulent. One such method is the eN method.
eN method
The eN method works by superimposing small disturbances on the flow, which is taken to be laminar. The assumption is made that both the original and the disturbed flow satisfy the Navier–Stokes equations. The disturbed flow can be linearised and described with a perturbation equation, which may have unstable solutions. Any case in which the perturbation equation has unstable solutions can be considered unstable and hence may lead to a transition point. The method assumes a flow parallel to the boundary layer with a constant shape, which will not always hold in practice. It can nonetheless be used to determine the local (in)stability at a given span-wise position: if a local transition occurs, it must also occur under the same circumstances in the global frame. This analysis can be repeated for multiple span-wise stations. Since the transition point is determined by the first point where this happens, only the point closest to the leading edge where instability occurs is sought.
A two-dimensional disturbance stream function can be defined as $\psi(x, y, t) = \phi(y)\, e^{i(\alpha x - \omega t)}$, from which the disturbance velocity components in the x- and y-directions follow as $u' = \partial \psi / \partial y$ and $v' = -\partial \psi / \partial x$. Here the circular frequency ω is taken to be real in the disturbance stream, and the wave number $\alpha = \alpha_r + i\alpha_i$ complex; the equation governing the amplitude $\phi(y)$ is known as the Orr–Sommerfeld equation. Hence, in the case of an instability, the spatial growth rate $-\alpha_i$ needs to be positive for there to be a growing disturbance. Any prior disturbance passing through will be amplified by $e^N$, with $N(x; \omega) = \int_{x_0}^{x} -\alpha_i \,\mathrm{d}x$, where $x_0$ is the value of x where the disturbance with frequency ω first becomes unstable. Experiments by Smith and Gamberoni, and later by Van Ingen, have shown that transition occurs when the amplification factor N reaches a critical value of 9. For clean wind tunnels and for atmospheric turbulence, the critical amplification factor equals 12 and 4, respectively.
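As a numerical illustration of the amplification integral (a minimal sketch, not taken from the method's literature: it assumes the spatial growth rate $-\alpha_i$ has already been tabulated against x for one disturbance frequency, for example by an Orr–Sommerfeld solver, and lets stable stretches contribute nothing):

```c
#include <math.h>
#include <stddef.h>

/* Returns the first x at which N = integral of max(-alpha_i, 0) dx
 * reaches n_crit, or -1.0 if the flow stays laminar over the range.
 * x[] and neg_alpha_i[] hold n streamwise samples for one frequency. */
double transition_x(const double *x, const double *neg_alpha_i,
                    size_t n, double n_crit)
{
    double N = 0.0;
    for (size_t i = 1; i < n; i++) {
        /* Only amplified (unstable) stretches contribute to N. */
        double g0 = fmax(neg_alpha_i[i - 1], 0.0);
        double g1 = fmax(neg_alpha_i[i], 0.0);
        N += 0.5 * (g0 + g1) * (x[i] - x[i - 1]);  /* trapezoid rule */
        if (N >= n_crit)
            return x[i];
    }
    return -1.0;
}
```

In practice the calculation is repeated over a range of frequencies ω, and the transition point is taken from the envelope of the resulting N-factor curves.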
Experiments have shown that the largest factors affecting the position where this happens are the shape of the velocity profile over the lift-generating surface, the Reynolds number, and the frequency or wavelength of the disturbances themselves.
Behind the transition point in a boundary layer, the mean speed and the friction drag increase.
References
Boundary layers | Transition point | Chemistry | 668 |
639,737 | https://en.wikipedia.org/wiki/TechSoup | TechSoup, founded in 1987 as CompuMentor and later known as TechSoup Global, is a nonprofit international network of non-governmental organizations (NGOs) that provides technical support and technological tools to other nonprofits.
History
After discussing the technology needs of nonprofits with members of the WELL, Daniel Ben-Horin founded CompuMentor (later TechSoup). His objective was to create a program in which those with technology skills ("mentors") volunteered to assist nonprofit organizations with information technologies. In 1991, Fred Silverman, Apple Computer's manager of community affairs, praised CompuMentor as "a perfect marriage of technology and volunteerism."
CompuMentor also began soliciting donations of technological products, largely from tech magazines that had large stocks of unneeded software sent to them by companies seeking coverage of their products, which CompuMentor collected and then sold to nonprofits for a nominal fee, originally $5.
In 1997, CompuMentor received $350,000 in donations, tying it with the IT Resource Center as the largest Nonprofit Technology Assistance Provider in the U.S.
On May 9, 2000, the TechSoup website, www.techsoup.org, was launched.
In 2008, the organization changed its name to TechSoup Global.
As of 2016, TechSoup reported $30.8 million in revenue. It provides technology assistance services and NGO validation services to nongovernmental organizations, foundations, libraries, and other civil society organizations worldwide, in partnership with companies such as Microsoft, Adobe, Cisco and Symantec. In partnership with Microsoft, it formed the TechSoup Global Network to support increased distribution of services to nonprofits.
The TechSoup Global Network includes Fundacja TechSoup, a separately incorporated "regional hub" established by TechSoup Global. It is based in Warsaw, Poland, and supports activities in 48 European countries.
Notable programs
TechSoup.org
Launched in January 2002, TechSoup.org is a web site serving nonprofits that provides training webinars, community forums and other resources about the use of technology in nonprofit organizations and public libraries. TechSoup partners with Microsoft to distribute Microsoft's product donations globally, and helps to connect nonprofits and libraries to corporate donors such as Adobe, Symantec, Cisco and Intuit. TechSoup.org also verifies the nonprofit status of organizations seeking donations and matches them to the donated technology products they need.
GuideStar International
GuideStar International is a global service that provides open access to accurate NGO data. GSI was begun in 2010 when TechSoup Global and GuideStar International, a U.K.-registered charity that promotes transparency and civil society organization reporting, combined operations.
NGOsource
NGOsource, a project of the Council on Foundations and TechSoup Global, is an online service for U.S. grantmakers to receive equivalency determinations, which are legal certifications that a non-U.S. NGO is equivalent to a U.S. public charity, thereby reducing the cost and complexity of international grantmaking. Launched in March 2013, it helps U.S. grantmakers streamline their global philanthropy. According to its website, NGOsource was active in 126 countries as of 2018.
NetSquared
NetSquared organized local actors to collaborate in open innovation challenges, as well as monthly face-to-face meetups. NetSquared was organized into local chapters that had monthly meetings. Chapters went by such names as Tech4Good or NetSquared Chicago. NetSquared's "ReStart Slovakia" challenge provided recognition and seed funds to help launch the "Open Courts" project to promote transparency in Slovakia's judicial system. TechSoup Connect became the successor to NetSquared in 2021, and each chapter continues to be led by a volunteer who produces local events, in-person and online.
Quad
In February 2022, TechSoup launched a subscription service, Quad, which it described as "TechSoup's peer-to-peer community where nonprofit organizations connect with tech experts and each other to do great things."
See also
Circuit Rider
NetAid
NetDay
Nonprofit technology
NTAP
NTEN
References
Non-profit organizations based in California
Non-profit technology
Organizations established in 1987
501(c)(3) organizations | TechSoup | Technology | 885 |
75,876 | https://en.wikipedia.org/wiki/Walther%20Nernst | Walther Hermann Nernst (; 25 June 1864 – 18 November 1941) was a German physical chemist known for his work in thermodynamics, physical chemistry, electrochemistry, and solid-state physics. His formulation of the Nernst heat theorem helped pave the way for the third law of thermodynamics, for which he won the 1920 Nobel Prize in Chemistry. He is also known for developing the Nernst equation in 1887.
He studied physics and mathematics at the universities of Zürich, Berlin, Graz and Würzburg, where he received his doctorate in 1887. In 1889, he completed his habilitation at the University of Leipzig.
Life and career
Early years
Nernst was born in Briesen, Germany (now Wąbrzeźno, Poland) to Gustav Nernst (1827–1888) and Ottilie Nerger (1833–1876). His father was a country judge. Nernst had three older sisters and one younger brother. His third sister died of cholera. Nernst went to elementary school at Graudenz, Germany (now Grudziądz, Poland).
Studies
Nernst started undergraduate studies at the University of Zürich in 1883, then, after an interlude in Berlin, returned to Zürich. He wrote his thesis at the University of Graz, where Ludwig Boltzmann was professor, though he worked under the direction of Albert von Ettingshausen. Together they discovered the Ettingshausen and Nernst effects: a magnetic field applied perpendicular to a metallic conductor in a temperature gradient gives rise to an electrical potential difference and, reciprocally, an electric potential difference produces a thermal gradient. Next, he moved to the University of Würzburg under Friedrich Kohlrausch, where he submitted and defended his thesis.
Professional career
Wilhelm Ostwald recruited him to the first department of physical chemistry, at Leipzig University. Nernst moved there as an assistant, working on the thermodynamics of electrical currents in solutions. Promoted to lecturer, he taught briefly at Heidelberg University and then moved to the University of Göttingen. Three years later, he was offered a professorship in Munich; to keep him in Prussia, the government created a chair for him at Göttingen. There, he wrote a celebrated textbook, Theoretical Chemistry, which was translated into English, French, and Russian. He also derived the Nernst equation for the electrical potential generated by unequal concentrations of an ion separated by a membrane that is permeable to the ion. His equation is widely used in cell physiology and neurobiology.
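In the membrane form used in cell physiology, the Nernst equation gives the equilibrium potential of an ion as (standard modern notation rather than Nernst's original formulation):

```latex
E_{\text{ion}} = \frac{RT}{zF}\,\ln\frac{[\text{ion}]_{\text{outside}}}{[\text{ion}]_{\text{inside}}}
```

where R is the gas constant, T the absolute temperature, z the charge number of the ion, and F the Faraday constant.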
The carbon-filament electric lamp then in use was dim and expensive because it required a vacuum in its bulb. Nernst invented a solid-body radiator with a filament of rare-earth oxides, known as the Nernst glower, which remains important in the field of infrared spectroscopy. Once warm, the filament conducts, and continuous ohmic heating keeps it conducting. The glower operates best at wavelengths from 2 to 14 micrometers. It gives a bright light, but only after a warm-up period. Nernst sold the patent for one million marks, wisely not opting for royalties, because the tungsten-filament lamp filled with inert gas was soon introduced. With his riches, Nernst in 1898 bought the first of the eighteen automobiles he owned during his lifetime, as well as a country estate of more than five hundred hectares for hunting. He increased the power of his early automobiles by carrying a cylinder of nitrous oxide that he could inject into the carburetor. After eighteen productive years at Göttingen, investigating osmotic pressure and electrochemistry and presenting a theory of how nerves conduct, he moved to Berlin and was awarded the title of Geheimrat.
In 1905, he proposed his "New Heat Theorem", later known as the third law of thermodynamics. He showed that as the temperature approaches absolute zero, the entropy approaches zero, while the free energy remains above zero. This is the work for which he is best remembered, as it enabled chemists to determine free energies (and therefore equilibrium points) of chemical reactions from heat measurements. Theodore Richards claimed that Nernst had stolen his idea, but Nernst is almost universally credited with the discovery. Nernst became friendly with Kaiser Wilhelm, whom he persuaded to found the Kaiser Wilhelm Gesellschaft for the Advancement of the Sciences with an initial capital of eleven million marks. Nernst's laboratory discovered that at low temperatures specific heats fall markedly and would probably disappear at absolute zero. This fall had been predicted for liquids and solids in a 1909 paper of Albert Einstein's on the quantum mechanics of specific heats at cryogenic temperatures. Nernst was so impressed that he traveled all the way to Zürich to visit Einstein, who was then relatively unknown, so that people said: "Einstein must be a clever fellow if the great Nernst comes all the way from Berlin to Zürich to talk to him." Nernst and Planck lobbied to establish a special professorship in Berlin, and Nernst donated to its endowment. In 1913 they traveled to Switzerland to persuade Einstein to accept it, a dream job: a named professorship at the top university in Germany, without teaching duties, leaving him free for research.
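In modern symbols, the heat theorem states that the entropy change of a reaction between pure condensed phases vanishes at absolute zero, so that the free-energy change approaches the enthalpy change; this is what allows equilibria to be computed from heat measurements:

```latex
\lim_{T \to 0} \Delta S = 0,
\qquad
\Delta G = \Delta H - T\,\Delta S
\;\;\Longrightarrow\;\;
\Delta G \to \Delta H \quad (T \to 0)
```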
In 1911, Nernst and Max Planck organized the first Solvay Conference in Brussels. In the following year, the impressionist painter Max Liebermann painted his portrait.
World War I
In 1914, the Nernsts were entertaining co-workers and students they had brought to their country estate in a private railway car when they learned that war had been declared. Their two older sons entered the army, while their father enlisted in the voluntary drivers' corps. He supported the German army against its opponents' charges of barbarism by signing the Manifesto of the Ninety-Three. On 21 August 1914, he drove documents from Berlin to the commander of the German right wing in France, advancing with the troops for two weeks until he could see the glow of the Paris lights at night. The tide turned at the Battle of the Marne. When the stalemate in the trenches began, he returned home. He contacted Colonel Max Bauer, the staff officer responsible for munitions, with the idea of driving the defenders out of their trenches with shells releasing tear gas. When his idea was tried, one of the observers was Fritz Haber, who argued that too many shells would be needed and that it would be better to release a cloud of heavier-than-air poisonous gas. The first chlorine cloud attack, on 22 April 1915, was not supported by a strong infantry thrust, so the chance that gas would break the stalemate was irrevocably gone. Nernst was awarded the Iron Cross, second class. As a staff scientific advisor in the Imperial German Army, he directed research on explosives, much of which was done in his laboratory, where guanidine perchlorate was developed. He then worked on the development of trench mortars. He was awarded the Iron Cross, first class, and later the Pour le Mérite. When the high command was considering unleashing unrestricted submarine warfare, he asked the Kaiser for an opportunity to warn about the enormous potential of the United States as an adversary. They would not listen; General Erich Ludendorff shouted him down for "incompetent nonsense."
Return to research
Nernst published his book The Foundations of the New Heat Theorem. In 1918, after studying photochemistry, he proposed the atomic chain-reaction theory: when a reaction forms free atoms that can themselves decompose target molecules into more free atoms, the result is a self-sustaining chain reaction. The concept is closely related to the chain reactions later found in nuclear fission.
In 1920, he and his family briefly fled abroad because he was one of the scientists on the Allied list of war criminals. Later that year he received the Nobel Prize in Chemistry in recognition of his work on thermochemistry. He was elected Rector of Berlin University for 1921–1922. He set up an agency to channel government and private funds to young scientists and declined an offer to become Ambassador to the United States. For two unhappy years, he was president of the Physikalisch-Technische Reichsanstalt (National Physical Laboratory), where he could not cope with the "mixture of mediocrity and red tape". In 1924, he became director of the Institute of Physical Chemistry at Berlin.
In 1927, the decrease in specific heat at low temperatures was extended to gases. He studied the theories of cosmic rays and cosmology.
Although a press release described him as "completely unmusical", Nernst developed an electric piano, the Neo-Bechstein-Flügel in 1930 in association with the Bechstein and Siemens companies, replacing the sounding board with vacuum tube amplifiers. The piano used electromagnetic pickups to produce electronically modified and amplified sound in the same way as an electric guitar. In fact, he was a pianist, sometimes accompanying Einstein's violin.
Rejection of antisemitism
In 1933, Nernst learned that a colleague with whom he had hoped to collaborate had been dismissed from the department because he was a Jew. Nernst immediately took a taxi to see Haber to request a position in Haber's institute, which was not controlled by the government, only to learn that Haber was moving to England. Soon, Nernst was himself in trouble for declining to fill out a government form on his racial origins. He retired from his professorship but was also removed from the board of the Kaiser Wilhelm Institute. He lived quietly in the country; in 1937 he traveled to the University of Oxford to receive an honorary degree, also visiting his eldest daughter, her husband, and his three grandchildren.
Death
Nernst had a severe heart attack in 1939. He died in 1941 at Zibelle, Germany (now Niwica, Poland). He was buried three times: first near the place of his death; then his remains were moved to Berlin, where he was buried a second time; finally, they were moved again and buried near the graves of Max Planck, Otto Hahn and Max von Laue in Göttingen, Germany.
Personal life and family
Nernst married Emma Lohmeyer in 1892 with whom he had two sons and three daughters. Both of Nernst's sons died fighting in World War I.
With his colleagues at the University of Leipzig, Jacobus Henricus van't Hoff and Svante Arrhenius, Nernst helped establish the foundations of a new theoretical and experimental field of inquiry within chemistry. He once suggested setting fire to unused coal seams to increase the global temperature. He was a vocal critic of Adolf Hitler and of Nazism, and two of his three daughters married Jewish men. After Hitler came to power, both couples emigrated, one to England and the other to Brazil.
Personality
Nernst was mechanically minded, always thinking of ways to apply new discoveries to industry. His hobbies included hunting and fishing. His friend Albert Einstein was amused by "his childlike vanity and self-complacency". "His own study and laboratory always presented aspects of extreme chaos which his coworkers termed appropriately 'the state of maximum entropy'".
Honours
In 1923, the botanist Ignatz Urban published Nernstia, a genus of flowering plants from Mexico in the family Rubiaceae, named in Nernst's honour.
Publications
Walther Nernst, "Reasoning of theoretical chemistry: Nine papers (1889–1921)" (Ger., Begründung der Theoretischen Chemie : Neun Abhandlungen, 1889–1921). Frankfurt am Main : Verlag Harri Deutsch, c. 2003.
Walther Nernst, "The theoretical and experimental bases of the New Heat Theorem" (Ger., Die theoretischen und experimentellen Grundlagen des neuen Wärmesatzes). Halle [Ger.] W. Knapp, 1918 [tr. 1926]. [ed., this is a list of thermodynamical papers from the physico-chemical institute of the University of Berlin (1906–1916); Translation available by Guy Barr
Walther Nernst, "Theoretical chemistry from the standpoint of Avogadro's law and thermodynamics" (Ger., Theoretische Chemie vom Standpunkte der Avogadroschen Regel und der Thermodynamik). Stuttgart, F. Enke, 1893 [5th edition, 1923].
See also
German inventors and discoverers
Chain reaction
Cosmological constant problem
Cosmic background radiation
History of electrophoresis
Nernst–Einstein equation
Nernst–Lewis–Latimer convention
Solid state ionics
Stochastic electrodynamics
Threshold potential
Tired light
Zero-point energy
References
Cited sources
Stone, A. Douglas (2013) Einstein and the Quantum. Princeton University Press.
Further reading
External links
– Review of Diana Barkan's Walther Nernst and the Transition to Modern Physical Science
"Hermann Walther Nernst, Nobel Prize in Chemistry 1920 : Prize Presentation". Presentation Speech by Professor Gerard De Geer, President of the Royal Swedish Academy of Sciences.
Schmitt, Ulrich, "Walther Nernst". Physicochemical institute, Göttingen
including the Nobel Lecture, 12 December 1921 Studies in Chemical Thermodynamics
1864 births
1941 deaths
People from Wąbrzeźno
Scientists from the Province of Prussia
German physical chemists
Thermodynamicists
Inventors of musical instruments
19th-century German chemists
20th-century German chemists
Rare earth scientists
Humboldt University of Berlin alumni
University of Würzburg alumni
University of Graz alumni
People associated with the University of Zurich
Academic staff of the University of Göttingen
Foreign members of the Royal Society
German Nobel laureates
Nobel laureates in Chemistry
German people of World War I
Recipients of the Iron Cross (1914), 1st class
Recipients of the Pour le Mérite (civil class)
Recipients of Franklin Medal | Walther Nernst | Physics,Chemistry | 2,862 |
55,849,011 | https://en.wikipedia.org/wiki/NGC%204519 | NGC 4519 is a barred spiral galaxy located about 70 million light-years away in the constellation Virgo. NGC 4519 was discovered by astronomer William Herschel on April 15, 1784. It has a companion galaxy known as PGC 41706 and is a member of the Virgo Cluster.
Physical characteristics
NGC 4519 has an asymmetric structure that contains a well-defined bar.
See also
List of NGC objects (4001–5000)
NGC 4498
References
External links
Virgo (constellation)
Barred spiral galaxies
4519
41719
7709
Astronomical objects discovered in 1784
Virgo Cluster
Discoveries by William Herschel | NGC 4519 | Astronomy | 127 |
52,001,960 | https://en.wikipedia.org/wiki/Contact%20guidance | Contact guidance refers to a phenomenon in which the orientation of cells and stress fibers is influenced by geometrical patterns, such as nano/microgrooves on substrates or collagen fibers in gels and soft tissues. The phenomenon was discovered in 1912, and the terminology was introduced in 1945, but it was with the development of tissue engineering that researchers paid increasing attention to the topic, seeing the potential of contact guidance to influence the morphology and organization of cells. Nevertheless, the biological processes underlying contact guidance are still unclear.
Contact guidance on two-dimensional substrates
When cells are seeded onto flat substrates, they normally adopt random orientations. However, substrates with topographical patterns influence the orientation of cells cultured on these surfaces through their geometrical cues. For example, if a substrate has nano/microgrooves running parallel to each other, cells orient along the direction of these nano/microgrooves. Based on this, cells seem able to sense the structural characteristics of their surroundings and respond by adopting the orientation of the topographical stimuli. A similar effect can be obtained when cells are cultured on flat surfaces with lines of proteins printed on top (to which cells can adhere), interspersed with repellent lines; in that case, cells also align along the patterns.
It has also been observed that the phenomenon of contact guidance on microgrooved surfaces is influenced by the groove width. For instance, osteoblast-like cells align along nanogrooves only when the grooves are wider than 75 nm. A similar behavior has been observed with other cell types, such as fibroblasts, which align along these topographical patterns when the grooves are wider than 150 nm. On the other hand, grooves that are too wide can decrease the effects of contact guidance.
Contact guidance in three-dimensional structures
Cells can orient in response to contact guidance when located inside three-dimensional structures, such as collagen gels, scaffolds, and soft tissues. Under those conditions, the geometrical cues provided by collagen or scaffold fibers can influence the orientation of cells. For example, it has been observed that endothelial colony-forming cells align along the direction of the fibers present in electrospun scaffolds. Similarly, the collagen fibers present in collagen gels and soft tissues can influence cell alignment, providing the most important stimulus in terms of cell orientation.
Potential of contact guidance for tissue engineering
Recent research has highlighted the importance of cellular alignment for the mechanical properties and functionality of prostheses developed using the principles of tissue engineering. Currently, scientists are investigating the mechanisms and potential of contact guidance to control cellular alignment, which would ultimately allow control of cellular forces and of certain aspects of collagen remodeling.
Biological mechanisms determining contact guidance
Many researchers have formulated hypotheses on the biological mechanisms determining contact guidance. In general, cellular contraction, stress fibers and focal adhesions seem to play an important role. Recently, a computational model has been developed that is able to simulate the re-alignment of cells and stress fibers on top of grooved surfaces. Briefly, it has been supposed that cells, once seeded, form focal adhesions on top of the ridges and not above the grooves.
Once formed, the focal adhesions produce a signal that diffuses into the cell, inducing stress-fiber assembly. At this point, there are two possibilities, depending on the groove size. When the groove size is small, the intracellular signal produced by the focal adhesions on the ridges can reach all locations in the cell homogeneously. In that case, stress-fiber assembly is isotropic, the fibers pull on their surroundings isotropically, and the resulting cell shape is consequently isotropic (without a preferred alignment).
When the groove size is relatively large, on the other hand, the intracellular signal cannot reach the parts of the cell situated over the grooves, as diffusion is limited. As a result, stress fibers form only close to the ridges, and these acto-myosin bundles pull on their surroundings anisotropically. Due to this anisotropic cellular contraction, stress fibers and cells align along the direction of the microgrooves. Further experiments are necessary to validate this theory.
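A toy calculation of this diffusion argument (the numbers, and the threshold rule itself, are illustrative assumptions rather than values from the published model): with diffusion and first-order decay, a signal released at a ridge decays over a characteristic length, and alignment is predicted when the groove is wide compared with that length.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double D = 1.0;   /* signal diffusivity, um^2/s (assumed) */
    const double k = 0.25;  /* signal decay rate, 1/s (assumed)     */
    /* Steady-state diffusion with first-order decay spreads the
     * signal over a characteristic length sqrt(D/k).               */
    double decay_len = sqrt(D / k);  /* = 2 um with these numbers   */

    double groove_widths_um[] = {0.5, 1.0, 2.0, 5.0, 10.0};
    for (int i = 0; i < 5; i++) {
        double w = groove_widths_um[i];
        printf("groove %4.1f um -> %s\n", w,
               w > decay_len ? "aligned (anisotropic contraction)"
                             : "no alignment (isotropic)");
    }
    return 0;
}
```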
References
Cell biology
Cells
Fibers | Contact guidance | Biology | 893 |
62,955,446 | https://en.wikipedia.org/wiki/Kurt%20Koffka%20Medal | The Kurt-Koffka Medal, Kurt Koffka Medal, Kurt Koffka Award, or Koffka Prize is an annual, international award bestowed by Giessen University's Department of Psychology for "advancing the fields of perception or developmental psychology to an extraordinary extent". The prize commemorates the German psychologist Kurt Koffka, a pioneer of Gestalt Psychology, in particular in the fields of perception and developmental psychology. Koffka worked at Giessen University for 16 years, from 1911 to 1927. The medal was first awarded in 2007.
The medal is notable among psychologists.
History
Kurt Koffka (18 March 1886 – 22 November 1941) was a German psychologist. He was born and educated in Berlin. Along with Max Wertheimer and his close associate Wolfgang Köhler they established Gestalt psychology. Koffka's wide-ranging interests included perception, hearing impairments in brain-damaged patients, interpretation, learning, and the extension of Gestalt theory to developmental psychology.
Each year since 2006, a committee of Giessen University Department of Psychology has sought nominations and decided on the recipient(s) of the award. The first medal was awarded in 2007 to Martin "Marty" Banks. The one exception was 2020, when the award ceremony was deferred to 2021 because of the COVID-19 pandemic.
Description of the medal
The medal is bronze. The front (obverse) side is shown in the info box. A recipient's name is engraved on the outer ring at the bottom. The other side is an embossed version of the seal of the university.
Nominations
Nomination forms are sent by the members of the Committee to large numbers of individuals, usually in September the year before the award is made. These individuals are generally prominent academics working in a relevant area.
Selection
The members of the Committee prepare a report reflecting the advice of experts in the relevant fields.
Prizewinners
Source: Justus Liebig University, Giessen
2007: Martin Banks
2008: Claes van Hofsten
2009: Janette Atkinson and Oliver Braddick
2010: Roberta Klatzky
2011: Concetta Morrone and David Burr
2012: Sandra Trehub
2013: Stuart Anstis
2014: Elizabeth Spelke
2015: Roland S. Johansson
2016: Andrew N. Meltzoff
2017: Jan J. Koenderink and Andrea J. van Doorn
2018: Karen E. Adolph
2019: Dan Kersten
2020: No award
2021: Linda B. Smith
2022: Ted Adelson
2023: Richard Aslin
2024: Mary Hayhoe
Gender balance of recipients
Unlike some science awards, such as the Nobel Prize, the Kurt-Koffka medal has a good gender balance of recipients (by 2023, 11 men and 9 women).
See also
Nobel Prize
Fields Medal
Ig Nobel Prize
List of prizes known as the Nobel of a field
Lists of science and technology awards
List of psychology awards
References
Academic awards
International awards
Science and technology awards
German science and technology awards
University of Giessen
Perception
Psychology awards | Kurt Koffka Medal | Technology | 626 |
12,046,704 | https://en.wikipedia.org/wiki/Phosphatidylglycerol | Phosphatidylglycerol is a glycerophospholipid found in pulmonary surfactant and in the plasma membrane where it directly activates lipid-gated ion channels.
The general structure of phosphatidylglycerol consists of an L-glycerol 3-phosphate backbone ester-bonded to either saturated or unsaturated fatty acids on carbons 1 and 2. The head-group substituent, glycerol, is bonded through a phosphomonoester. Phosphatidylglycerol is a precursor of surfactant, and its presence (>0.3) in amniotic fluid indicates fetal lung maturity.
Approximately 98% of the alveolar wall surface area is covered by type I cells, with the type II cells that produce pulmonary surfactant covering around 2% of the alveolar walls. Once surfactant is secreted by the type II cells, it must spread over the remaining type I cell surface area, and phosphatidylglycerol is thought to be important in this spreading. The major surfactant deficiency in premature infants relates to the lack of phosphatidylglycerol, even though it comprises less than 5% of pulmonary surfactant phospholipids. It is synthesized by head-group exchange on a phosphatidylcholine-enriched phospholipid by the enzyme phospholipase D.
Biosynthesis
Phosphatidylglycerol (PG) is formed via a complex sequential pathway whereby phosphatidic acid (PA) is first converted to CDP-diacylglyceride by the enzyme CDP-diacylglyceride synthase. A PGP synthase enzyme then exchanges glycerol-3-phosphate (G3P) for cytidine monophosphate (CMP), forming the temporary intermediate phosphatidylglycerolphosphate (PGP). PG is finally synthesized when a PGP phosphatase enzyme catalyzes the immediate dephosphorylation of the PGP intermediate to form PG. In bacteria, another membrane phospholipid known as cardiolipin can be synthesized by condensing two molecules of phosphatidylglycerol, a reaction catalyzed by the enzyme cardiolipin synthase. In eukaryotic mitochondria, phosphatidylglycerol is converted to cardiolipin by reacting with a molecule of cytidine diphosphate diglyceride in a reaction catalyzed by cardiolipin synthase.
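The three steps can be summarized as follows (a simplified sketch; the CTP substrate and pyrophosphate by-product of the first step are standard for this enzyme family but are not named in the text above):

```latex
\begin{aligned}
\text{PA} + \text{CTP} &\xrightarrow{\ \text{CDP-DAG synthase}\ } \text{CDP-DAG} + \mathrm{PP_i} \\
\text{CDP-DAG} + \text{G3P} &\xrightarrow{\ \text{PGP synthase}\ } \text{PGP} + \text{CMP} \\
\text{PGP} &\xrightarrow{\ \text{PGP phosphatase}\ } \text{PG} + \mathrm{P_i}
\end{aligned}
```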
See also
Glycerol
Cardiolipin
Lipid-gated ion channels
References
Hostetler KY, van den Bosch H, van Deenen LL. The mechanism of cardiolipin biosynthesis in liver mitochondria. Biochim Biophys Acta. 1972 Mar 23;260(3):507-13. . PMID 4556770.
External links
Phospholipids
Membrane biology | Phosphatidylglycerol | Chemistry | 663 |
16,848,644 | https://en.wikipedia.org/wiki/TXN2 | Thioredoxin, mitochondrial also known as thioredoxin-2 is a protein that in humans is encoded by the TXN2 gene on chromosome 22. This nuclear gene encodes a mitochondrial member of the thioredoxin family, a group of small multifunctional redox-active proteins. The encoded protein may play important roles in the regulation of the mitochondrial membrane potential and in protection against oxidant-induced apoptosis.
Structure
As a thioredoxin, TXN2 is a 12-kDa protein characterized by the redox-active site Trp-Cys-Gly-Pro-Cys. In its oxidized (inactive) form, the two cysteines form a disulfide bond. This bond is reduced by thioredoxin reductase and NADPH to a dithiol, which then serves as a disulfide reductase. In contrast to TXN1, TXN2 contains a putative N-terminal mitochondrial targeting sequence, responsible for its mitochondrial localization, and lacks structural cysteines. Two mRNA transcripts of the TXN2 gene differ by ~330 bp in the length of the 3′-untranslated region, and both are believed to exist in vivo.
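The regeneration step described above is the standard thioredoxin-system reaction:

```latex
\text{TXN2-S}_2 + \text{NADPH} + \mathrm{H^{+}}
\xrightarrow{\ \text{thioredoxin reductase}\ }
\text{TXN2-(SH)}_2 + \mathrm{NADP^{+}}
```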
Function
This nuclear gene encodes a mitochondrial member of the thioredoxin family, a group of small multifunctional redox-active proteins. The encoded protein is ubiquitously expressed in all prokaryotic and eukaryotic organisms, but demonstrates especially high expression in tissues with heavy metabolic activity, including the stomach, testis, ovary, liver, heart, neurons, and adrenal gland. It may play important roles in the regulation of the mitochondrial membrane potential and in protection against oxidant-induced apoptosis. Specifically, the ability of TXN2 to reduce disulfide bonds enables the protein to regulate mitochondrial redox and, thus, the production of reactive oxygen species (ROS). By extension, downregulation of TXN2 can lead to increased ROS generation and cell death. The antiapoptotic function of TXN2 is attributed to its involvement in GSH-dependent mechanisms to scavenge ROS, or its interaction with, and thus regulation of, thiols in the mitochondrial permeability transition pore component adenine nucleotide translocator (ANT).
Overexpression of TXN2 has been shown to attenuate hypoxia-induced HIF-1alpha accumulation, in direct opposition to the cytosolic TXN1, which enhances HIF-1alpha levels. Moreover, although both TXN2 and TXN1 are able to reduce insulin, TXN2 does not depend on the oxidative status of the protein for this activity, a quality which may contribute to their difference in function.
Clinical significance
It has been demonstrated that genetic polymorphisms in the TXN2 gene may be associated with the risk of spina bifida.
TXN2 is known to inhibit transforming growth factor (TGF)-β-stimulated ROS generation independent of Smad signaling. TGF-β is a pro-oncogenic cytokine that induces epithelial–mesenchymal transition (EMT), which is a crucial event in metastatic progression. In particular, TXN2 inhibits TGF-β-mediated induction of HMGA2, a central EMT mediator, and fibronectin, an EMT marker.
Interactions
TXN2 has been shown to interact with ANT.
References
Further reading
Proteins
Genes | TXN2 | Chemistry | 750 |
23,155,814 | https://en.wikipedia.org/wiki/Jacketed%20vessel | In chemical engineering, a jacketed vessel is a container that is designed for controlling temperature of its contents, by using a cooling or heating "jacket" around the vessel through which a cooling or heating fluid is circulated.
A jacket is a cavity external to the vessel that permits the uniform exchange of heat between the fluid circulating in it and the walls of the vessel. There are several types of jackets, depending on the design:
Conventional Jackets. A second shell is installed over a portion of the vessel, creating an annular space within which the cooling or heating medium flows. A simple conventional jacket, with no internal components, is generally very inefficient for heat transfer because the medium flows at an extremely low velocity, resulting in a low heat-transfer coefficient. Condensing media, such as steam or Dowtherm A, are an exception, because in that case the heat-transfer coefficient does not depend on velocity or turbulence but is instead related to the surface area upon which the medium condenses and the efficiency of condensate removal. Internals include baffles that direct flow in a spiral pattern around the jacket, and agitating nozzles that cause high turbulence at the point where the fluid is introduced into the jacket.
Half-Pipe Coil Jackets. Pipes are split lengthwise, usually with an included angle of 180 degrees (split evenly down the middle) or 120 degrees, then wound around the vessel and welded in place.
Dimple Jackets. A thin external shell is affixed to the vessel shell with spot welds located in a regular pattern, often about 50 mm on center both horizontally and vertically. These so-called dimples impart turbulence to the heating or cooling media as it flows through the jacket (see Pillow plate heat exchanger).
Plate Coils. Often very similar to dimple jackets, but fabricated separately as fully contained jackets that are then strapped to a vessel. They are slightly less efficient than dimple jackets because the heat must traverse a double layer of metal (the plate-coil inside surface and the vessel shell). They also require good bonding to the vessel shell, to prevent an insulating gap between the plate coil and the vessel.
Jackets can be applied to the entire surface of a vessel or just a portion of it. For a vertical vessel, the top head is typically left unjacketed. Jackets can be divided into zones, to divide the flow of the heating or cooling medium. Advantages include: ability to direct flow to certain portions of the jacket, such as only the bottom head when minimal heating or cooling is needed and the entire jacket when maximum heating or cooling is required; ability to provide a higher volume of flow overall (zones are piped in parallel) because the pressure drop through a zone is lower than if the entire jacket is a single zone.
Jacketed vessels can be employed as chemical reactors (to remove the elevated heat of reaction) or to reduce the viscosity of highly viscous fluids (such as tar).
Agitation can also be used in jacketed vessels to improve the homogeneity of the fluid properties (such as temperature or concentration).
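As a back-of-the-envelope sizing sketch (illustrative numbers only; the overall coefficient U in particular depends strongly on the jacket type, as discussed above), the jacket duty follows from Q = U · A · ΔTlm:

```c
#include <math.h>
#include <stdio.h>

/* Log-mean temperature difference between a jacket medium cooling from
 * t_in to t_out and a well-mixed (isothermal) vessel at t_vessel.     */
static double lmtd(double t_in, double t_out, double t_vessel)
{
    double dt1 = t_in - t_vessel;
    double dt2 = t_out - t_vessel;
    if (fabs(dt1 - dt2) < 1e-9)
        return dt1;                       /* avoid division by log(1) */
    return (dt1 - dt2) / log(dt1 / dt2);
}

int main(void)
{
    double U = 400.0;  /* overall coefficient, W/(m^2 K), assumed     */
    double A = 5.0;    /* jacketed heat-transfer area, m^2, assumed   */
    double Q = U * A * lmtd(150.0, 120.0, 80.0);
    printf("jacket duty: %.1f kW\n", Q / 1000.0);  /* about 107 kW    */
    return 0;
}
```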
See also
Heat exchanger
Water jacket
Mixing (process engineering)
References
Bibliography
"API Chemical Synthesis: Trends in Reactor Heat Transfer Design", Stephen Hall and Andy Stoker. Pharmaceutical Engineering, January/February 2004
"Estimate Heat Transfer and Friction in Dimple Jackets", John Garvin, CEP Magazine, April 2001
"Heat Transfer in Agitated Jacketed Vessels", Robert Dream, Chemical Engineering, January 1999
External links
Efficiency of the Heat Transfer Process in a Jacketed Agitated Vessel Equipped with an Eccentrically Located Impeller
Spreadsheet software calculates heat transfer for jacketed vessels
Heat exchangers | Jacketed vessel | Chemistry,Engineering | 752 |
6,796,295 | https://en.wikipedia.org/wiki/Paecilomyces | Paecilomyces is a genus of fungi. A number of species in this genus are plant pathogens.
Several of the entomopathogenic species, such as "Paecilomyces fumosoroseus" have now been placed in the genus Isaria, in the order Hypocreales and family Cordycipitaceae.
In 1974, R.A. Samson transferred the nematicidal species Paecilomyces lilacinus to this genus. However, publications in the 2000s indicated that the genus Paecilomyces was not monophyletic, and the new genus Purpureocillium was created to hold the taxon that includes P. lilacinum, with both parts of the new name referring to the purple conidia produced by the fungus.
Species
Traditionally, Paecilomyces held strictly asexual species and later housed numerous anamorphic forms of fungi whose teleomorphs were described in separate genera. From 41 described species in 2000 and over 100 known combinations in this genus, the number of species in Paecilomyces was reduced to 10 described species in 2020, following the adoption of the "one fungus, one name" rule.
Paecilomyces brunneolus
Paecilomyces clematidis Spetik, Eichmeier, Gramaje & Berraf-Tebbal (2022)
Paecilomyces dactylethromorphus
Paecilomyces divaricatus
Paecilomyces formosus Urquhart (2023)
Paecilomyces fulvus
Paecilomyces lagunculariae
Paecilomyces niveus
Paecilomyces paravariotii Urquhart (2023)
Paecilomyces penicilliformis Jurjević & Hubka (2020)
Paecilomyces tabacinus
Paecilomyces variotii
Paecilomyces zollerniae
Species combinations not listed by Houbraken et al. 2020
Paecilomyces albus
Paecilomyces andoi
Paecilomyces aspergilloides
Paecilomyces atrovirens
Paecilomyces austriacus
Paecilomyces borysthenicus
Paecilomyces breviramosus
Paecilomyces byssochlamydoides
Paecilomyces cinnamomeus
Paecilomyces clavisporus
Paecilomyces crassipes
Paecilomyces cremeoroseus
Paecilomyces cylindricosporus
Paecilomyces echinosporus
Paecilomyces erectus
Paecilomyces griseiviridis
Paecilomyces guaensis
Paecilomyces hawkesii
Paecilomyces huaxiensis
Paecilomyces indicus
Paecilomyces laeensis
Paecilomyces lecythidis
Paecilomyces marinum
Paecilomyces maximus
Paecilomyces militaris
Paecilomyces musicola
Paecilomyces neomarinum
Paecilomyces niphetodes
Paecilomyces odonatae
Paecilomyces parvisporus
Paecilomyces pascuus
Paecilomyces persimplex
Paecilomyces puntonii
Paecilomyces ramosus
Paecilomyces rariramus
Paecilomyces simplex
Paecilomyces smilanensis
Paecilomyces spectabilis
Paecilomyces stipitatus
Paecilomyces subflavus
Paecilomyces subglobosus
Paecilomyces suffultus
Paecilomyces tenuis
Paecilomyces victoriae
Paecilomyces vinaceus
Paecilomyces viridulus
Paecilomyces zollerniae
Species to be ranked in Clavicipitaceae
Paecilomyces antarcticus
Paecilomyces niphetodes
Paecilomyces penicillatus
Paecilomyces purpureus
Paecilomyces verticillatus
Paecilomyces wawuensis
Species formerly described in Paecilomyces
Class Eurotiomycetes
Order Eurotiales
Family Aspergillaceae
Genus Evansstolkia: P. leycettanus
Genus Penicillium: P. burci, P. ehrlichii, P. lineatum, P. mandshuricus
Family Thermoascaceae
Genus Thermoascus: P. aegyptiacus, P. crustaceus, P. taitungiacus, P. verrucosus
Family Trichocomaceae
Genus Talaromyces: P. aerugineus, P. cinnabarinus
Genus Sagenomella: P. griseoviridis, P. humicola, P. striatisporus, P. variabilis,
Class Leotiomycetes
Order Helotiales
Family Pleuroascaceae
Genus Venustampulla: P. parvus
Class Sordariomycetes
Order incertae sedis
Genus Chlorocillium: P. griseus
Genus Sphaeronaemella: P. betae
Order Cephalothecales
Family Cephalothecaceae
Genus Phialemonium: P. flavescens and P. inflatus
Order Hypocreales
Genus Acremonium: P. terricola
Family Bionectriaceae
Genus Sesquicillium: P. buxi
Genus Waltergamsia: P. fusidioides
Genus Verruciconidia: P. persicinus
Family Clavicipiteae
Genus Keithomyces: P. carneus
Genus Marquandomyces: P. marquandii
Genus Metarhizium: P. paranaensis, P. reniformis, P. viridis
Family Cordycipitaceae
Genus Cordyceps: P. amoene-roseus, P. cateniannulatus, P. cateniobliquus, P. coleopterorum, P. farinosus, P. fumosoroseus, P. hibernicum and P. isarioides, P. ghanensis, P. gunnii, P. heliothis and P. tenuipes, P. javanicus, P. loushanensis
Genus Isaria: P. cicadae, P. xylariiformis
Genus Samsoniella: P. hepiali
Family Nectriaceae
Genus Mariannaea: P. elegans
Genus Neocosmospora: P. fuscus
Family Ophiocordycipitaceae
Genus Purpureocillium: P. lilacinus as well as P. nostocoides
Genus Pleurocordyceps: P. sinensis
Family Sarocladiaceae
Genus Sarocladium: P. bacillisporus, P. ochraceus
Family Tilachlidiaceae
Genus Septofusidium: P. berolinensis, P. herbarum
Family Valsonectriaceae
Genus Valsonectria: P. roseolus
Order Microascales
Family Microascaceae
Genus Microascus: P. fuscatus
Order Sordariales
Family Chaetomiaceae
Genus Acrophialophora: P. ampullaris, P. ampulliphoris, P. biformis, P. cinereus, P. curticatenatus, P. furcatus, P. fusisporus, P. major,
Order Trichosphaeriales
Family Trichosphaeriaceae
Genus Verticillium: P. baarnensis, P. coccosporus, P. eriophytis, P. insectorum, P. sulfurellum
incertae sedis Ascomycota
Genus Sagrahamata: P. iriomoteanus
Genus Spicaria: P. canadensis, P. cossus, P. fimetarius, P. longipes
See also
Cordyceps
Tarsonemidae
References
External links
University of Adelaide
Mold-Help
Carnivorous fungi
Fungal pest control agents
Eurotiomycetes genera
Taxa described in 1907 | Paecilomyces | Biology | 1,739 |
457,210 | https://en.wikipedia.org/wiki/Pure%20mathematics | Pure mathematics is the study of mathematical concepts independently of any application outside mathematics. These concepts may originate in real-world concerns, and the results obtained may later turn out to be useful for practical applications, but pure mathematicians are not primarily motivated by such applications. Instead, the appeal is attributed to the intellectual challenge and aesthetic beauty of working out the logical consequences of basic principles.
While pure mathematics has existed as an activity since at least ancient Greece, the concept was elaborated upon around the year 1900, after the introduction of theories with counter-intuitive properties (such as non-Euclidean geometries and Cantor's theory of infinite sets), and the discovery of apparent paradoxes (such as continuous functions that are nowhere differentiable, and Russell's paradox). This introduced the need to renew the concept of mathematical rigor and rewrite all mathematics accordingly, with a systematic use of axiomatic methods. This led many mathematicians to focus on mathematics for its own sake, that is, pure mathematics.
Nevertheless, almost all mathematical theories remained motivated by problems coming from the real world or from less abstract mathematical theories. Also, many mathematical theories, which had seemed to be totally pure mathematics, were eventually used in applied areas, mainly physics and computer science. A famous early example is Isaac Newton's demonstration that his law of universal gravitation implied that planets move in orbits that are conic sections, geometrical curves that had been studied in antiquity by Apollonius. Another example is the problem of factoring large integers, which is the basis of the RSA cryptosystem, widely used to secure internet communications.
It follows that, at present, the distinction between pure and applied mathematics is more a philosophical point of view or a mathematician's preference than a rigid subdivision of mathematics.
History
Ancient Greece
Ancient Greek mathematicians were among the earliest to make a distinction between pure and applied mathematics. Plato helped to create the gap between "arithmetic", now called number theory, and "logistic", now called arithmetic. Plato regarded logistic (arithmetic) as appropriate for businessmen and men of war who "must learn the art of numbers or [they] will not know how to array [their] troops" and arithmetic (number theory) as appropriate for philosophers "because [they have] to arise out of the sea of change and lay hold of true being." Euclid of Alexandria, when asked by one of his students of what use was the study of geometry, asked his slave to give the student threepence, "since he must make gain of what he learns." The Greek mathematician Apollonius of Perga was asked about the usefulness of some of his theorems in Book IV of Conics to which he proudly asserted,
They are worthy of acceptance for the sake of the demonstrations themselves, in the same way as we accept many other things in mathematics for this and for no other reason.
And since many of his results were not applicable to the science or engineering of his day, Apollonius further argued in the preface of the fifth book of Conics that the subject is one of those that "...seem worthy of study for their own sake."
19th century
The term itself is enshrined in the full title of the Sadleirian Chair, "Sadleirian Professor of Pure Mathematics", founded (as a professorship) in the mid-nineteenth century. The idea of a separate discipline of pure mathematics may have emerged at that time. The generation of Gauss made no sweeping distinction of the kind between pure and applied. In the following years, specialisation and professionalisation (particularly in the Weierstrass approach to mathematical analysis) started to make a rift more apparent.
20th century
At the start of the twentieth century mathematicians took up the axiomatic method, strongly influenced by David Hilbert's example. The logical formulation of pure mathematics suggested by Bertrand Russell in terms of a quantifier structure of propositions seemed more and more plausible, as large parts of mathematics became axiomatised and thus subject to the simple criteria of rigorous proof.
Pure mathematics, according to a view that can be ascribed to the Bourbaki group, is what is proved. "Pure mathematician" became a recognized vocation, achievable through training.
The case was made that pure mathematics is useful in engineering education:
There is a training in habits of thought, points of view, and intellectual comprehension of ordinary engineering problems, which only the study of higher mathematics can give.
Generality and abstraction
One central concept in pure mathematics is the idea of generality; pure mathematics often exhibits a trend towards increased generality. Uses and advantages of generality include the following:
Generalizing theorems or mathematical structures can lead to deeper understanding of the original theorems or structures
Generality can simplify the presentation of material, resulting in shorter proofs or arguments that are easier to follow.
One can use generality to avoid duplication of effort, proving a general result instead of having to prove separate cases independently, or using results from other areas of mathematics.
Generality can facilitate connections between different branches of mathematics. Category theory is one area of mathematics dedicated to exploring this commonality of structure as it plays out in some areas of math.
Generality's impact on intuition is both dependent on the subject and a matter of personal preference or learning style. Often generality is seen as a hindrance to intuition, although it can certainly function as an aid to it, especially when it provides analogies to material for which one already has good intuition.
As a prime example of generality, the Erlangen program involved an expansion of geometry to accommodate non-Euclidean geometries as well as the field of topology, and other forms of geometry, by viewing geometry as the study of a space together with a group of transformations. The study of numbers, called algebra at the beginning undergraduate level, extends to abstract algebra at a more advanced level; and the study of functions, called calculus at the college freshman level, becomes mathematical analysis and functional analysis at a more advanced level. Each of these branches of more abstract mathematics has many sub-specialties, and there are in fact many connections between pure mathematics and applied mathematics disciplines. A steep rise in abstraction was seen in the mid-20th century.
In practice, however, these developments led to a sharp divergence from physics, particularly from 1950 to 1983. Later this was criticised, for example by Vladimir Arnold, as too much Hilbert, not enough Poincaré. The point does not yet seem to be settled, in that string theory pulls one way, while discrete mathematics pulls back towards proof as central.
Pure vs. applied mathematics
Mathematicians have always had differing opinions regarding the distinction between pure and applied mathematics. One of the most famous (but perhaps misunderstood) modern examples of this debate can be found in G.H. Hardy's 1940 essay A Mathematician's Apology.
It is widely believed that Hardy considered applied mathematics to be ugly and dull. Although it is true that Hardy preferred pure mathematics, which he often compared to painting and poetry, Hardy saw the distinction between pure and applied mathematics to be simply that applied mathematics sought to express physical truth in a mathematical framework, whereas pure mathematics expressed truths that were independent of the physical world. Hardy made a separate distinction in mathematics between what he called "real" mathematics, "which has permanent aesthetic value", and "the dull and elementary parts of mathematics" that have practical use.
Hardy considered some physicists, such as Einstein and Dirac, to be among the "real" mathematicians, but at the time that he was writing his Apology, he considered general relativity and quantum mechanics to be "useless", which allowed him to hold the opinion that only "dull" mathematics was useful. Moreover, Hardy briefly admitted that—just as the application of matrix theory and group theory to physics had come unexpectedly—the time may come where some kinds of beautiful, "real" mathematics may be useful as well.
Another insightful view is offered by American mathematician Andy Magid:
Friedrich Engels argued in his 1878 book Anti-Dühring that "it is not at all true that in pure mathematics the mind deals only with its own creations and imaginations. The concepts of number and figure have not been invented from any source other than the world of reality". He further argued that "Before one came upon the idea of deducing the form of a cylinder from the rotation of a rectangle about one of its sides, a number of real rectangles and cylinders, however imperfect in form, must have been examined. Like all other sciences, mathematics arose out of the needs of men...But, as in every department of thought, at a certain stage of development the laws, which were abstracted from the real world, become divorced from the real world, and are set up against it as something independent, as laws coming from outside, to which the world has to conform."
See also
Applied mathematics
Logic
Metalogic
Metamathematics
References
External links
What is Pure Mathematics? – Department of Pure Mathematics, University of Waterloo
The Principles of Mathematics by Bertrand Russell
Fields of mathematics
Abstraction | Pure mathematics | Mathematics | 1,847 |
26,285 | https://en.wikipedia.org/wiki/Restriction%20fragment%20length%20polymorphism | In molecular biology, restriction fragment length polymorphism (RFLP) is a technique that exploits variations in homologous DNA sequences, known as polymorphisms, in order to distinguish individuals, populations, or species, or to pinpoint the locations of genes within a sequence. The term may refer to a polymorphism itself, as detected through the differing locations of restriction enzyme sites, or to a related laboratory technique by which such differences can be illustrated. In RFLP analysis, a DNA sample is digested into fragments by one or more restriction enzymes, and the resulting restriction fragments are then separated by gel electrophoresis according to their size.
RFLP analysis is now largely obsolete due to the emergence of inexpensive DNA sequencing technologies, but it was the first DNA profiling technique inexpensive enough to see widespread application. RFLP analysis was an important early tool in genome mapping, localization of genes for genetic disorders, determination of risk for disease, and paternity testing.
RFLP analysis
The basic technique for the detection of RFLPs involves fragmenting a sample of DNA with the application of a restriction enzyme, which can selectively cleave a DNA molecule wherever a short, specific sequence is recognized in a process known as a restriction digest. The DNA fragments produced by the digest are then separated by length through a process known as agarose gel electrophoresis and transferred to a membrane via the Southern blot procedure. Hybridization of the membrane to a labeled DNA probe then determines the length of the fragments which are complementary to the probe. A restriction fragment length polymorphism is said to occur when the length of a detected fragment varies between individuals, indicating non-identical sequence homologies. Each fragment length is considered an allele, whether it actually contains a coding region or not, and can be used in subsequent genetic analysis.
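Computationally, a restriction digest reduces to locating every occurrence of the enzyme's recognition site and taking differences between successive cut positions. The following minimal Python sketch illustrates this bookkeeping; EcoRI's GAATTC recognition site and its cut after the first G are real, but the toy sequences and alleles are invented for illustration.

def digest(sequence, site="GAATTC", cut_offset=1):
    """Return fragment lengths after cutting a linear sequence at every
    occurrence of `site`; cut_offset=1 models EcoRI's G^AATTC cut."""
    cuts = []
    start = 0
    while True:
        i = sequence.find(site, start)
        if i == -1:
            break
        cuts.append(i + cut_offset)
        start = i + 1
    edges = [0] + cuts + [len(sequence)]
    return [edges[k + 1] - edges[k] for k in range(len(edges) - 1)]

# A point mutation destroying the middle site fuses two fragments,
# exactly the kind of length polymorphism RFLP detects on a blot.
allele_A = "ATGAATTCAAGGAATTCTTGAATTCAT"
allele_a = allele_A.replace("GGAATTCT", "GGAACTCT")  # middle site lost
print(digest(allele_A))  # [3, 9, 8, 7]
print(digest(allele_a))  # [3, 17, 7]: the 9 and 8 bp fragments fuse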
Examples
There are two common mechanisms by which the size of a particular restriction fragment can vary. In the first schematic, a small segment of the genome is being detected by a DNA probe (thicker line). In allele A, the genome is cleaved by a restriction enzyme at three nearby sites (triangles), but only the rightmost fragment will be detected by the probe. In allele a, restriction site 2 has been lost by a mutation, so the probe now detects the larger fused fragment running from sites 1 to 3. The second diagram shows how this fragment size variation would look on a Southern blot, and how each allele (two per individual) might be inherited in members of a family.
In the third schematic, the probe and restriction enzyme are chosen to detect a region of the genome that includes a variable number tandem repeat (VNTR) segment (boxes in schematic diagram). In allele c, there are five repeats in the VNTR, and the probe detects a longer fragment between the two restriction sites. In allele d, there are only two repeats in the VNTR, so the probe detects a shorter fragment between the same two restriction sites. Other genetic processes, such as insertions, deletions, translocations, and inversions, can also lead to polymorphisms. RFLP tests require much larger samples of DNA than do short tandem repeat (STR) tests.
Applications
Analysis of RFLP variation in genomes was formerly a vital tool in genome mapping and genetic disease analysis. If researchers were initially trying to determine the chromosomal location of a particular disease gene, they would analyze the DNA of members of a family afflicted by the disease, and look for RFLP alleles that show a similar pattern of inheritance as that of the disease (see genetic linkage). Once a disease gene was localized, RFLP analysis of other families could reveal who was at risk for the disease, or who was likely to be a carrier of the mutant genes. RFLP testing is also used in the identification and differentiation of organisms by analyzing unique patterns in their genomes, and in estimating the recombination rate at loci between restriction sites.
RFLP analysis was also the basis for early methods of genetic fingerprinting, useful in the identification of samples retrieved from crime scenes, in the determination of paternity, and in the characterization of genetic diversity or breeding patterns in animal populations.
Alternatives
The technique for RFLP analysis is, however, slow and cumbersome. It requires a large amount of sample DNA, and the combined process of probe labeling, DNA fragmentation, electrophoresis, blotting, hybridization, washing, and autoradiography can take up to a month to complete. A limited version of the RFLP method that used oligonucleotide probes was reported in 1985. The results of the Human Genome Project have largely replaced the need for RFLP mapping, and the identification of many single-nucleotide polymorphisms (SNPs) in that project (as well as the direct identification of many disease genes and mutations) has replaced the need for RFLP disease linkage analysis (see SNP genotyping). The analysis of VNTR alleles continues, but is now usually performed by polymerase chain reaction (PCR) methods. For example, the standard protocols for DNA fingerprinting involve PCR analysis of panels of more than a dozen VNTRs.
RFLP is still used in marker-assisted selection. Terminal restriction fragment length polymorphism (TRFLP or sometimes T-RFLP) is a technique initially developed for characterizing bacterial communities in mixed-species samples. The technique has also been applied to other groups including soil fungi. TRFLP works by PCR amplification of DNA using primer pairs that have been labeled with fluorescent tags. The PCR products are then digested using RFLP enzymes and the resulting patterns visualized using a DNA sequencer. The results are analyzed either by simply counting and comparing bands or peaks in the TRFLP profile, or by matching bands from one or more TRFLP runs to a database of known species. A number of different software tools have been developed to automate the process of band matching, comparison and databasing of TRFLP profiles.
The technique is similar in some aspects to temperature gradient or denaturing gradient gel electrophoresis (TGGE and DGGE).
The sequence changes directly involved with an RFLP can also be analyzed more quickly by PCR. Amplification can be directed across the altered restriction site, and the products digested with the restriction enzyme. This method has been called Cleaved Amplified Polymorphic Sequence (CAPS). Alternatively, the amplified segment can be analyzed by allele-specific oligonucleotide (ASO) probes, a process that can often be done by a simple dot blot.
See also
Amplified fragment length polymorphism (AFLP)
RAPD
STR analysis
References
External links
https://www.ncbi.nlm.nih.gov/projects/genome/probe/doc/TechRFLP.shtml
Biochemistry detection methods
Genomics techniques
Molecular biology | Restriction fragment length polymorphism | Chemistry,Biology | 1,437 |
76,566,332 | https://en.wikipedia.org/wiki/Lak%C3%A1skult%C3%BAra | Lakáskultúra (, ) is a monthly interior design magazine which has been in circulation since 1964. The magazine is headquartered in Budapest, Hungary.
History and profile
Lakáskultúra was established by the Hungarian Architects’ Association in Budapest in 1964. The Association was also one of the sponsors of the magazine. Its publication became regular from 1967. The magazine was part of the Ministry of Domestic Trade between 1964 and 1987. Then it was owned by Pallas until 1989 when Axel Springer Budapest KFT acquired it. The magazine appears monthly.
During the Communist era Lakáskultúra contributed to the state ideology of socialist consumerism. It was one of the most read and significant publications in this period. Its popularity was partly due to its content on distinctive interior furnishings, room arrangements, decorative trends and the homes of Hungarian families. Therefore, it hardly ever featured accessories exhibited in trade shows and fairs or professionally designed interiors. The magazine has also published apartment layouts and floor plans.
References
External links
1964 establishments in Hungary
Communist magazines
Hungarian-language magazines
Design magazines
Former state media
Magazines established in 1964
Magazines published in Budapest
Architecture magazines
Axel Springer SE | Lakáskultúra | Engineering | 229 |
20,126,705 | https://en.wikipedia.org/wiki/FERET%20%28facial%20recognition%20technology%29 | The Facial Recognition Technology (FERET) program was a government-sponsored project that aimed to create a large, automatic face-recognition system for intelligence, security, and law enforcement purposes. The program began in 1993 under the combined leadership of Dr. Harry Wechsler at George Mason University (GMU) and Dr. Jonathon Phillips at the Army Research Laboratory (ARL) in Adelphi, Maryland and resulted in the development of the Facial Recognition Technology (FERET) database. The goal of the FERET program was to advance the field of face recognition technology by establishing a common database of facial imagery for researchers to use and setting a performance baseline for face-recognition algorithms.
Potential areas where this face-recognition technology could be used include:
Automated searching of mug books using surveillance photos
Controlling access to restricted facilities or equipment
Checking the credentials of personnel for background and security clearances
Monitoring airports, border crossings, and secure manufacturing facilities for particular individuals
Finding and logging multiple appearances of individuals over time in surveillance videos
Verifying identities at ATM machines
Searching photo ID records for fraud detection
The FERET database has been used by more than 460 research groups and is currently managed by the National Institute of Standards and Technology (NIST). By 2017, the FERET database had been used to train artificial intelligence programs and computer vision algorithms to identify and sort faces.
History
The origin of facial recognition technology is largely attributed to Woodrow Wilson Bledsoe and his work in the 1960s, when he developed a system to identify faces from a database of thousands of photographs. The FERET program first began as a way to unify a large body of face-recognition technology research under a standard database. Before the program's inception, most researchers created their own facial imagery database that was attuned to their own specific area of study. These personal databases were small and usually consisted of images from less than 50 individuals. The only notable exceptions were the following:
Alex Pentland’s database of around 7500 facial images at the Massachusetts Institute of Technology (MIT)
Joseph Wilder's database of around 250 individuals at Rutgers University
Christoph von der Malsburg’s database of around 100 facial images at the University of Southern California (USC)
The lack of a common database made it difficult to compare the results of face recognition studies in the scientific literature because each report involved different assumptions, scoring methods, and images. Most of the papers that were published did not use images from a common database nor follow a standard testing protocol. As a result, researchers were unable to make informed comparisons between the performances of different face-recognition algorithms.
In September 1993, the FERET program was spearheaded by Dr. Harry Wechsler and Dr. Jonathon Phillips under the sponsorship of the U.S. Department of Defense Counterdrug Technology Development Program through DARPA with ARL serving as technical agent.
Phase I
The first facial images for the FERET database were collected from August 1993 to December 1994, a time period known as Phase I. The pictures were initially taken with a 35-mm camera at both GMU and ARL facilities, and the same physical setup was used in each photography session to keep the images consistent. For each individual, the pictures were taken in sets, including two frontal views, a right and left profile, a right and left quarter profile, a right and left half profile, and sometimes at five extra locations. Therefore, a set of images consisted of 5 to 11 images per person. At the end of Phase I, the FERET database had collected 673 sets of images, resulting in over 5000 total images.
At the end of Phase I, five organizations were given the opportunity to test their face-recognition algorithms on the newly created FERET database in order to compare how they performed against each other. The five principal investigators were:
MIT, led by Alex Pentland
Rutgers University, led by Joseph Wilder
The Analytic Science Company (TASC), led by Gale Gordon
The University of Illinois at Chicago (UIC) and the University of Illinois at Urbana-Champaign, led by Lewis Sadler and Thomas Huang
USC, led by Christoph von der Malsburg
During this evaluation, three different automatic tests were given to the principal investigators without human intervention:
The large gallery test, which served to baseline how algorithms performed against a database when it had not been properly tuned.
The false-alarm test, which tested how well the algorithm monitored an airport for suspected terrorists.
The rotation test, which measured how well the algorithm performed when the images of an individual in the gallery had different poses compared to those in the probe set.
For most of the test trials, the algorithms developed by USC and MIT managed to outperform the other three algorithms for the Phase I evaluation.
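In effect, the large gallery test is closed-set identification: each probe image is scored against every gallery image, and the algorithm is credited when the true identity appears near the top of the ranked list. A minimal sketch of that scoring follows; the similarity function here is a toy stand-in, not any matcher used in the actual FERET evaluations.

def similarity(a, b):
    # Toy stand-in for a real face matcher: dot product of feature vectors.
    return sum(x * y for x, y in zip(a, b))

def match_rank(probe_vec, probe_id, gallery):
    """Rank (1 = best) at which the probe's true identity is retrieved.
    `gallery` is a list of (identity, feature_vector) pairs."""
    ordered = sorted(gallery, key=lambda g: similarity(probe_vec, g[1]),
                     reverse=True)
    return [gid for gid, _ in ordered].index(probe_id) + 1

def rank_k_accuracy(probes, gallery, k=1):
    """Fraction of (identity, vector) probes whose true match is in the top k."""
    return sum(match_rank(vec, pid, gallery) <= k
               for pid, vec in probes) / len(probes)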
Phase II
Phase II began after Phase I, and during this time, the FERET database acquired more sets of facial images. By the start of the Phase II evaluation in March 1995, the database contained 1109 sets of images for a total of 8525 images of 884 individuals. During the second evaluation, the same algorithms from the Phase I evaluation were given a single test. However, the database now contained significantly more duplicate images (463, compared to the previous 60), making the test more challenging.
Phase III
Afterwards, the FERET program entered Phase III where another 456 sets of facial images were added to the database. The Phase III evaluation, which took place in September 1996, aimed to not only gauge the progress of the algorithms since the Phase I assessment but also identify the strengths and weaknesses of each algorithm and determine future objectives for research. By the end of 1996, the FERET database had accumulated a total of 14,126 facial images pertaining to 1199 different individuals as well as 365 duplicate sets of images.
As a result of the FERET program, researchers were able to establish a common baseline for comparing different face-recognition algorithms and create a large standard database of facial images that is open for research.
In 2003, DARPA released a high-resolution, 24-bit color version of the images in the FERET database.
References
External links
FERET NIST Website
Color FERET Database
FERET NIST Documents
Facial recognition software
Datasets in computer vision
Biometrics
Biometric databases
Machine learning task
Automatic identification and data capture
Surveillance | FERET (facial recognition technology) | Technology | 1,266 |
5,766,547 | https://en.wikipedia.org/wiki/Lebesgue%20differentiation%20theorem | In mathematics, the Lebesgue differentiation theorem is a theorem of real analysis, which states that for almost every point, the value of an integrable function is the limiting average taken around the point. The theorem is named for Henri Lebesgue.
Statement
For a Lebesgue integrable real or complex-valued function f on Rn, the indefinite integral is a set function which maps a measurable set A to the Lebesgue integral of f·1_A, where 1_A denotes the characteristic function of the set A. It is usually written
A ↦ ∫_A f dλ,
with λ the n–dimensional Lebesgue measure.
The derivative of this integral at x is defined to be
lim_{B → x} (1/|B|) ∫_B f dλ,
where |B| denotes the volume (i.e., the Lebesgue measure) of a ball B centered at x, and B → x means that the diameter of B tends to 0.
The Lebesgue differentiation theorem states that this derivative exists and is equal to f(x) at almost every point x ∈ Rn. In fact a slightly stronger statement is true. Note that:
|(1/|B|) ∫_B f(y) dλ(y) − f(x)| = |(1/|B|) ∫_B (f(y) − f(x)) dλ(y)| ≤ (1/|B|) ∫_B |f(y) − f(x)| dλ(y).
The stronger assertion is that the right hand side tends to zero for almost every point x. The points x for which this is true are called the Lebesgue points of f.
A more general version also holds. One may replace the balls B by a family 𝒱 of sets U of bounded eccentricity. This means that there exists some fixed c > 0 such that each set U from the family is contained in a ball B with |U| ≥ c|B|. It is also assumed that every point x ∈ Rn is contained in arbitrarily small sets from 𝒱. When these sets shrink to x, the same result holds: for almost every point x,
f(x) = lim_{U → x, U ∈ 𝒱} (1/|U|) ∫_U f dλ.
The family of cubes is an example of such a family 𝒱, as is the family 𝒱(m) of rectangles in R2 such that the ratio of sides stays between 1/m and m, for some fixed m ≥ 1. If an arbitrary norm is given on Rn, the family of balls for the metric associated to the norm is another example.
The one-dimensional case was proved earlier by Lebesgue (1904). If f is integrable on the real line, the function
F(x) = ∫_(−∞, x] f(t) dt
is almost everywhere differentiable, with F′(x) = f(x). Were F defined by a Riemann integral this would be essentially the fundamental theorem of calculus, but Lebesgue proved that it remains true when using the Lebesgue integral.
Proof
The theorem in its stronger form—that almost every point is a Lebesgue point of a locally integrable function f—can be proved as a consequence of the weak–L1 estimates for the Hardy–Littlewood maximal function. The proof below follows the standard treatment found in most graduate texts on real analysis.
Since the statement is local in character, f can be assumed to be zero outside some ball of finite radius and hence integrable. It is then sufficient to prove that the set
E_α = {x ∈ Rn : limsup_{|B| → 0, x ∈ B} |(1/|B|) ∫_B f dλ − f(x)| > 2α}
has measure 0 for all α > 0.
Let ε > 0 be given. Using the density of continuous functions of compact support in L1(Rn), one can find such a function g satisfying
‖f − g‖_{L1} = ∫_{Rn} |f − g| dλ < ε.
It is then helpful to rewrite the main difference as
(1/|B|) ∫_B f dλ − f(x) = (1/|B|) ∫_B (f − g) dλ + ((1/|B|) ∫_B g dλ − g(x)) + (g(x) − f(x)).
The first term can be bounded by the value at x of the maximal function for f − g, denoted here by (f − g)∗(x):
|(1/|B|) ∫_B (f − g) dλ| ≤ (1/|B|) ∫_B |f − g| dλ ≤ (f − g)∗(x).
The second term disappears in the limit since g is a continuous function, and the third term is bounded by |f(x) − g(x)|. For the absolute value of the original difference to be greater than 2α in the limit, at least one of the first or third terms must be greater than α in absolute value. However, the estimate on the Hardy–Littlewood function says that
λ({x : (f − g)∗(x) > α}) ≤ (A_n/α) ‖f − g‖_{L1} < (A_n/α) ε,
for some constant A_n depending only upon the dimension n. The Markov inequality (also called Tchebyshev's inequality) says that
λ({x : |f(x) − g(x)| > α}) ≤ (1/α) ‖f − g‖_{L1} < ε/α,
thus
λ(E_α) ≤ ((A_n + 1)/α) ε.
Since ε was arbitrary, it can be taken to be arbitrarily small, and the theorem follows.
Discussion of proof
The Vitali covering lemma is vital to the proof of this theorem; its role lies in proving the estimate for the Hardy–Littlewood maximal function.
The theorem also holds if balls are replaced, in the definition of the derivative, by families of sets with diameter tending to zero satisfying the Lebesgue's regularity condition, defined above as family of sets with bounded eccentricity. This follows since the same substitution can be made in the statement of the Vitali covering lemma.
Discussion
This is an analogue, and a generalization, of the fundamental theorem of calculus, which equates a Riemann integrable function and the derivative of its (indefinite) integral. It is also possible to show a converse – that every differentiable function is equal to the integral of its derivative, but this requires a Henstock–Kurzweil integral in order to be able to integrate an arbitrary derivative.
A special case of the Lebesgue differentiation theorem is the Lebesgue density theorem, which is equivalent to the differentiation theorem for characteristic functions of measurable sets. The density theorem is usually proved using a simpler method (e.g. see Measure and Category).
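Written out, the density theorem is the differentiation theorem applied to the characteristic function f = 1_E of a measurable set E; in LaTeX notation:

% Lebesgue density theorem, obtained by taking f = 1_E above
\lim_{r \to 0} \frac{\lambda\bigl(E \cap B_r(x)\bigr)}{\lambda\bigl(B_r(x)\bigr)}
  = \mathbf{1}_E(x) =
  \begin{cases}
    1 & \text{for almost every } x \in E,\\
    0 & \text{for almost every } x \notin E.
  \end{cases}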
This theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure. More generally, it is true of any finite Borel measure on a separable metric space such that at least one of the following holds:
the metric space is a Riemannian manifold,
the metric space is a locally compact ultrametric space,
the measure is doubling.
A proof of these results can be found in sections 2.8–2.9 of (Federer 1969).
See also
Lebesgue's density theorem
References
Theorems in real analysis
Theorems in measure theory | Lebesgue differentiation theorem | Mathematics | 1,149 |
35,862,964 | https://en.wikipedia.org/wiki/Geomagic | Geomagic is the professional engineering software brand of Hexagon AB. The products are focused on computer-aided design, with an emphasis on 3D scanning and other non-traditional design methodologies, such as voxel-based modeling with haptic input.
History
Geomagic was founded in 1997 by Ping Fu and Herbert Edelsbrunner in Morrisville, North Carolina. It was acquired by 3D Systems in February 2013 and combined with that company's other software businesses (namely Rapidform, acquired by 3D Systems in October 2012, and Alibre, acquired in July 2011). Geomagic had previously acquired SensAble Technologies. On 12 December 2024, it was acquired by Hexagon AB. It will be part of their Manufacturing Intelligence Division.
Products
Geomagic Capture is an integrated system consisting of a blue LED structured-light 3D scanner and one of several pieces of application-specific software. The systems are marketed for use as scan-based design tools, wherein a physical object is 3D scanned and then converted into a 3D CAD model, or inspection tools, wherein a physical object is scanned and then dimensionally verified by comparing to a nominal 3D CAD model.
Geomagic Design is mechanical computer-aided design (CAD) software, with an emphasis on the design of mechanical systems and assemblies. Geomagic Freeform and Sculpt software are voxel-based modeling software packages, and are bundled with 3D Systems' Touch haptic devices, which interface with the software to deliver force feedback to the user. Geomagic Design X, Design Direct, Studio and Wrap are software products that offer different workflows for creating manufacturing-ready 3D models, including solid modeling and NURBS surfacing.
Geomagic Qualify and Geomagic Verify focus on delivering measurement, comparison and reporting software tools for first-article and automated inspection processes. Data from point cloud and 3D scanners and Coordinate Measuring Machines (CMMs) can be used, as well as CAD data imported into the system.
References
External links
3D Systems website
Software companies based in North Carolina
Manufacturing software
Computer-aided design
Defunct software companies of the United States | Geomagic | Engineering | 425 |
19,467,882 | https://en.wikipedia.org/wiki/Urbilaterian | The urbilaterian (from German ur- 'original') is the hypothetical last common ancestor of the bilaterian clade, i.e., all animals having a bilateral symmetry.
Appearance
Its appearance is a matter of debate, for no representative has been identified in the fossil record (and none may ever be). Two reconstructed urbilaterian morphologies can be considered: first, the less complex ancestral form forming the common ancestor to Xenacoelomorpha and Nephrozoa; and second, the more complex (coelomate) urbilaterian ancestral to both protostomes and deuterostomes, sometimes referred to as the "urnephrozoan". Since most protostomes and deuterostomes share features — e.g. nephridia (and the derived kidneys), through-guts, blood vessels and nerve ganglia — that are useful only in relatively large (macroscopic) organisms, their common ancestor ought also to have been macroscopic. However, such large animals should have left traces in the sediment in which they moved, and evidence of such traces first appears relatively late in the fossil record — long after the urbilaterian would have lived. This leads to suggestions of a small urbilaterian (around 1 mm), which is the supposed state of the ancestor of protostomes, deuterostomes and acoelomorphs.
Dating the urbilaterian
The first evidence of bilateria in the fossil record comes from trace fossils in sediments towards the end of the Ediacaran period, and the first fully accepted fossil of a bilaterian organism is Kimberella, dating to the late Ediacaran. There are earlier, controversial fossils: Vernanimalcula has been interpreted as a bilaterian, but may simply represent a fortuitously infilled bubble. Fossil embryos are known from around the time of Vernanimalcula, but none of these have bilaterian affinities. This may reflect a genuine absence of bilateria, but it may equally be that bilateria did not lay their eggs in sediment, where they would be more likely to fossilise.
Molecular techniques can generate expected dates of the divergence between the bilaterian clades, and thus an assessment of when the urbilaterian lived. These dates have huge margins of error, though they are becoming more accurate with time. More recent estimates are compatible with an Ediacaran bilaterian, although it is possible, especially if early bilaterians were small, that the bilateria had a long cryptic history before they left any evidence in the fossil record.
Characteristics of the urbilaterian
Eyes
Light detection (photosensitivity) is present in organisms as simple as seaweeds; the definition of a true eye varies, but in general eyes must have directional sensitivity, and thus have screening pigments so only light from the target direction is detected. Thus defined, they need not consist of more than one photoreceptor cell.
The presence of genetic machinery (the Pax6 and Six genes) common to eye formation in all bilaterians suggests that this machinery - and hence eyes - was present in the urbilaterian. The most likely candidate eye type is the simple pigment-cup eye, which is the most widespread among the bilateria.
Since two types of opsin, the c-type and r-type, are found in all bilaterians, the urbilaterian must have possessed both types - although they may not have been found in a centralised eye, but used to synchronise the body clock to daily or lunar variations in lighting.
Complexity
Proponents of a complex urbilaterian point to the shared features and genetic machinery common to all bilateria. They argue that (1) since these are similar in so many respects, they could have evolved only once; and (2) since they are common to all bilateria, they must have been present in the ancestral bilaterian animal.
However, as biologists' understanding of the major bilaterian lineages increases, it is beginning to appear that some of these features may have evolved independently in each lineage. Further, the bilaterian clade has recently been expanded to include the acoelomorphs — a group of relatively simple flatworms. This lineage lacks key bilaterian features, and if it truly does reside within the bilaterian "family", many of the features listed above are no longer common to all bilateria. Instead, some features — such as segmentation and possession of a heart — are restricted to a sub-set of the bilateria, the deuterostomes and protostomes. Their last common ancestor would still have to be large and complex, but the bilaterian ancestor could be much simpler. However, some scientists stop short of including the acoelomorph clade in the bilateria. This shifts the position of the cladistic node which is being discussed; consequently the urbilaterian in this context is farther out the evolutionary tree and is more derived than the common ancestor of deuterostomes, protostomes and acoelomorphs.
Genetic reconstructions are unfortunately not much help. They work by considering the genes common to all bilateria, but problems arise because very similar genes can be co-opted for different functions. For instance, the gene Pax6 has a function in eye development, but is absent in some animals with eyes; some cnidaria have genes which in bilateria control the development of a layer of cells that the cnidaria do not have. This means that even if a gene can be identified as present in the urbilaterian, we cannot necessarily tell what the gene's function was. Before this was realised, genetic reconstructions implied an implausibly complex urbilaterian.
The evolutionary developmental biologist Lewis Held notes that both centipedes and snakes use the oscillating mechanism based on the Notch signaling pathway to produce segments from the growing tip at the rear of the embryo. Further, both groups make use of "the obtuse process of 'resegmentation', whereby the phase of their metameres shifts by half a unit of wavelength, i.e. somites splitting to make vertebrae or parasegments splitting to form segments." Held comments that all this makes it difficult to imagine that their urbilaterian common ancestor was not segmented.
Reconstructing the urbilaterian
The absence of a fossil record gives a starting point for the reconstruction — the urbilaterian must have been small enough not to leave any traces as it moved over or lived in the sediment surface. This means it must have been well below a centimetre in length. As all Cambrian animals are marine, one can reasonably assume that the urbilaterian was too.
Furthermore, a reconstruction of the urbilateria must rest on identifying morphological similarities between all bilateria. While some bilateria live attached to a substrate, this appears to be a secondary adaptation, and the urbilaterian was probably mobile. Its nervous system was probably dispersed, but with a small central "brain". Since acoelomorphs lack a heart, coelom or organs, the urbilaterian probably did too — it would presumably have been small enough for diffusion to do the job of transporting compounds through the body. A small, narrow gut was probably present, which would have had only one opening — a combined mouth and anus.
Functional considerations suggest that the surface of the bilaterian was probably covered with cilia, which it could have used for locomotion or feeding.
There is still no consensus on whether the characteristics of the deuterostomes and protostomes evolved once or many times. Features such as a heart and a blood-circulation system may therefore not have been present even in the deuterostome-protostome ancestor, which would mean that this too could have been small (hence explaining the lack of fossil record).
Possible models of the Urbilaterian
It is possible that the common ancestor of all bilaterals looked similar to:
Colonial-pennatulacean hypothesis (colonial fusion of cnidarian-like zooids)
The proposal that bilaterians arose from the fusion of pennatulacean-like cnidarian zooids, advanced by Dewel, implies that the body plans of bilaterians originated from a colonial ancestor.
This proposal has little or no support in the existing data, and has been commonly used as a justification against the sedentary/semi-sedentary models of urbilaterians as a whole.
Larval hypothesis (pelagic larva and adult ancestor)
Panarticulata hypothesis (segmented annelid-like ancestor)
Cloudinomorpha hypothesis (biphasic: sedentary sessile adult and pelagic larva)
The recent model by Alexander V. Martynov and Tatiana A. Korshunova revives the idea of a sessile sedentary biphasic ancestor.
It considers the urbilaterian to be an organism with a sessile, sedentary adult phase and a free-swimming pelagic larval phase. This hypothesis derives from Nielsen's larval hypothesis, but now also considers the homology of the adult forms of choanozoans (except Ctenophora). Drawing on phylogenetic, paleontological and molecular data, it relates the adult, ancestral form of anthozoans (from which jellyfish, placozoans, nephrozoans, and perhaps proarticulates are derived) to an ancestral organization shared by choanoflagellates, sponges and parahoxozoans.
The current strong bias towards a mobile urbilaterian is considered to cause problems with palaeontological and morphological data in relation to groups within and outside Bilateria.
On this view, members of Proarticulata are an evolutionary dead end rather than the ancestors of nephrozoans. It is possible that the cloudinids (Cloudina, Conotubus and Multiconotubus) are basal (and therefore bilaterian) nephrozoans, because, taking their ontogeny into account, they show considerable similarity to the tubaria of sedentary pterobranchs, as well as to the shells of semi-mobile hyoliths and mobile mollusks.
This implies that Cloudinomorpha is not a polyphyletic group, as had been proposed, but rather a paraphyletic grade from which several taxa derive, taxa that may or may not conserve the ancestral clonality of basal metazoans. Instead of an annelid-type gut, cloudinids would have had a U-shaped digestive tube; indeed, a close relationship between Cloudina and annelids is rejected.
The hypothesis of an annelid-like ancestor is rejected, owing to the independent evolution of segmentation and complete metamerism in several groups of bilaterians (annelids, panarthropods, chordates and proarticulates). On this model the urbilaterian would instead be an animal with a U-shaped gut, with deuterostome-like characteristics that hemichordates and lophophorates, among other groups, conserve; a stolon holding the organism inside a tube secreted from the embryonic form as a dome or protoconch; and a semi-metamerism derived from the formation of mesoderm from the gastrovascular cavity of an anthozoan-like animal.
This form of urbilaterian:
Smooths the transition between anthozoan-like polypoids and various groups of bilaterians.
Accommodates the paraphyly of Cycloneuralia, Lophophorata and potentially Deuterostomia.
Accounts for the basal position of priapulids among ecdysozoans, and for the lack of similarity between priapulids and the cephalozoans once proposed as ancestors of the arthropods.
Revives the hastily rejected possible homology of ambulacrarian, bryozoan and brachiozoan tentacles.
Fits the reconstruction of the common ancestor of mollusks as an animal with a single shell rather than a chiton-like animal.
Accounts for the position of basal polychaetes such as Oweniidae, which still conserve deuterostome characteristics.
Accounts for the similarities between hyoliths and mollusks.
Implies a derived, rather than ancestral, position for the annelids, flatworms and perhaps the xenacoelomorphs.
The common ancestor of modern bilaterians would then be most similar to modern pterobranchs, although not completely identical to them.
The phylogenetic position of Ctenophora (the Myriazoa hypothesis) should not change this model, since Ctenophora is set aside and only the molecular and morphological development of Choanoflagellatea, Porifera and Cnidaria is taken into account.
See also
References
External links
Solène Song, Viktor Starunov, Xavier Bailly, Christine Ruta, Pierre Kerner, Annemiek J. M. Cornelissen, Guillaume Balavoine: Globins in the marine annelid Platynereis dumerilii shed new light on hemoglobin evolution in bilaterians. In: BMC Evolutionary Biology Vol. 20, Issue 165. 29 December 2020. doi:10.1186/s12862-020-01714-4. See also:
A single gene 'invented' haemoglobin several times . On: EurekAlert! 29 December 2020. Source: CNRS
Evolution of animals
Ediacaran life
Evolutionary biology
Most recent common ancestors | Urbilaterian | Biology | 2,859 |
43,672,606 | https://en.wikipedia.org/wiki/Openpass | OpenPass is a method for recording data on RFID cards in integrated access control systems whose components may run proprietary software from different providers.
OpenPass is released under the GPL license.
The OpenPass system
The OpenPass system consists of:
a contactless ISO 15693 smartcard, without proprietary restrictions;
ticket counters and gates compatible with the standard;
a central server and open platform for data collection;
a web-service connection and data transmission in real time or in batch.
To allow the exchange of information between heterogeneous access control systems, OpenPass defines an interchange format based on the XML metalanguage. Access credentials are stored on the smartcard and organized through the use of markers, represented in XML. The XML representations are made public by the OpenPass server.
With a single RFID card, the user can access all sites in the system: each company is able to issue tickets itself, and the card is recognized by every member of the system independently.
The OpenPass standard defines a distributed network of data centers that are connected to the server via web services. The data centers collect information on sales and gate passages and send it to the central server. The exchange format between the server and the collection centers is XML.
The OpenPass server receives the data and stores them in a centralized SQL database, where each data item is related to a UID and any personal details of the customer.
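The sources do not publish the OpenPass XML schema, so the element and attribute names in the sketch below (pass, uid, domain, validity, gate) are hypothetical; the sketch only illustrates the general pattern of a credential record exchanged between a data center and the server, parsed here with Python's standard library.

import xml.etree.ElementTree as ET

# Hypothetical pass record: every tag and attribute name is invented,
# since the real OpenPass schema is not given in the text.
record = """
<pass uid="E0040100123ABC99">
  <domain>Skipass Lombardia</domain>
  <validity from="2024-01-10" to="2024-01-12"/>
  <passages>
    <gate id="G-17" time="2024-01-10T09:14:02"/>
  </passages>
</pass>
"""

root = ET.fromstring(record)
uid = root.get("uid")                 # card UID stored on the RFID chip
domain = root.findtext("domain")      # issuing company or ski area
gates = [(g.get("id"), g.get("time"))
         for g in root.iter("gate")]  # passages reported to the server
print(uid, domain, gates)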
Highlights of OpenPass
The OpenPass methodology is characterized by:
data accessible and understandable by users;
fast, "hands-free" passage through the gates;
online recharge and presale of passes;
security: access credentials are encrypted at issuance;
multifunctionality;
division of the profits from sales of tickets valid across multiple domains;
the possibility of batch operation of the control system, in hostile contexts where the connection is not continuous;
reduced cost of cards and equipment, since the system is not bound to proprietary software.
OpenPass projects
So far, OpenPass has been applied to access control systems for ski lifts in Italy and France:
Italy: Skipass Lombardia
Skipass Lombardia was the first example in Europe of an open standard for integrated access control across heterogeneous, proprietary systems. OpenPass created an integrated system for all the ski-lift companies of the Lombardy Region: 310 ski lifts and 46 companies in 30 ski areas.
France: Nordic Pass Rhône Alpes
The Federation of Nordic Skiing of the French region of Rhône-Alpes (FRAN) has promoted the Nordic Pass Rhône Alpes: a project of integrated access to 5000 km of ski runs at 83 Nordic skiing stations, based on the OpenPass standard.
Italy: SkiArea VCO
In the Alps of Piedmont, Neveazzurra ski resort has implemented SkiArea VCO: a project of integrated access to the stations of Neveazzurra resort: Alpe Devero, Antrona Cheggio, Ceppo Morelli, Domobianca (Domodossola), Druogno, Formazza, Macugnaga, Mottarone, Piana di Vigezio, Pian di Sole, San Domenico (Varzo).
References
Radio-frequency identification
Software using the GNU General Public License | Openpass | Engineering | 651 |
46,259,599 | https://en.wikipedia.org/wiki/List%20of%20computers%20with%20on-board%20BASIC | This is a list of computers with on-board BASIC. They shipped standard with a version of BASIC that was installed in the computer. The computers can access the BASIC language without the user inserting cartridges or loading software from external media.
BASICs with Bitwise Ops use -1 as true and the AND and OR operators perform a bitwise operation on the arguments.
FOR/NEXT skip means that the body of the loop is skipped if the initial value of the loop times the sign of the step exceeds the final value times the sign of the step (such as 2 TO 1 STEP 1 or 1 TO 2 STEP -1). The statements inside the FOR/NEXT loop will not be executed at all.
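The skip rule reduces to a single comparison, shown here in Python rather than BASIC purely to make the arithmetic explicit:

def sign(x):
    return (x > 0) - (x < 0)

def loop_body_skipped(initial, final, step):
    # Body runs zero times when initial*sign(step) exceeds final*sign(step).
    return initial * sign(step) > final * sign(step)

print(loop_body_skipped(2, 1, 1))   # True:  FOR I = 2 TO 1 STEP 1
print(loop_body_skipped(1, 2, -1))  # True:  FOR I = 1 TO 2 STEP -1
print(loop_body_skipped(1, 2, 1))   # False: the body executes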
Numeric support indicates if a BASIC supports Integers and/or Floating Point.
Variable Name Length is how many characters of a variable name are used to determine uniqueness.
Full tokenization means that all keywords are converted to tokens and all extra space characters are removed. Partial tokenization leaves extra space characters in the source. None means that no tokenization is done. How to test for full tokenization:
10   PRINT   "HELLO"
LIST
If it is fully tokenized it should return 10 PRINT "HELLO" without all the extra spaces that were entered.
References
On-Board Basic | List of computers with on-board BASIC | Technology | 257 |
15,343 | https://en.wikipedia.org/wiki/Intron | An intron is any nucleotide sequence within a gene that is not expressed or operative in the final RNA product. The word intron is derived from the term intragenic region, i.e., a region inside a gene. The term intron refers to both the DNA sequence within a gene and the corresponding RNA sequence in RNA transcripts. Introns are removed by RNA splicing as the RNA matures. The non-intron sequences that become joined by this RNA processing to form the mature RNA are called exons.
Introns are found in the genes of most eukaryotes and many eukaryotic viruses and they can be located in both protein-coding genes and genes that function as RNA (noncoding genes). There are four main types of introns: tRNA introns, group I introns, group II introns, and spliceosomal introns (see below). Introns are rare in Bacteria and Archaea (prokaryotes).
Discovery and etymology
Introns were first discovered in protein-coding genes of adenovirus, and were subsequently identified in transfer RNA and ribosomal RNA genes. Introns are now known to occur within a wide variety of genes in organisms and viruses within all of the biological kingdoms.
The fact that genes were split or interrupted by introns was discovered independently in 1977 by Phillip Allen Sharp and Richard J. Roberts, for which they shared the Nobel Prize in Physiology or Medicine in 1993, though the researchers and collaborators in their labs who performed the experiments leading to the discovery, Susan Berget and Louise Chow, were excluded from the credit.
"The notion of the cistron [i.e., gene] ... must be replaced by that of a transcription unit containing regions which will be lost from the mature messenger – which I suggest we call introns (for intragenic regions) – alternating with regions which will be expressed – exons." (Gilbert 1978)
The term intron also refers to intracistron, i.e., an additional piece of DNA that arises within a cistron.
Although introns are sometimes called intervening sequences, the term "intervening sequence" can refer to any of several families of internal nucleic acid sequences that are not present in the final gene product, including inteins, untranslated regions (UTR), and nucleotides removed by RNA editing, in addition to introns.
Distribution
The frequency of introns within different genomes is observed to vary widely across the spectrum of biological organisms. For example, introns are extremely common within the nuclear genome of jawed vertebrates (e.g. humans, mice, and pufferfish (fugu)), where protein-coding genes almost always contain multiple introns, while introns are rare within the nuclear genes of some eukaryotic microorganisms, for example baker's/brewer's yeast (Saccharomyces cerevisiae). In contrast, the mitochondrial genomes of vertebrates are entirely devoid of introns, while those of eukaryotic microorganisms may contain many introns.
A particularly extreme case is the Drosophila dhc7 gene containing a ≥3.6 megabase (Mb) intron, which takes roughly three days to transcribe. On the other extreme, a 2015 study suggests that the shortest known metazoan intron length is 30 base pairs (bp) belonging to the human MST1L gene. The shortest known introns belong to the heterotrich ciliates, such as Stentor coeruleus, in which most (> 95%) introns are 15 or 16 bp long.
Classification
Splicing of all intron-containing RNA molecules is superficially similar, as described above. However, different types of introns were identified through the examination of intron structure by DNA sequence analysis, together with genetic and biochemical analysis of RNA splicing reactions. At least four distinct classes of introns have been identified:
Introns in nuclear protein-coding genes that are removed by spliceosomes (spliceosomal introns)
Introns in nuclear and archaeal transfer RNA genes that are removed by proteins (tRNA introns)
Self-splicing group I introns that are removed by RNA catalysis
Self-splicing group II introns that are removed by RNA catalysis
Group III introns are proposed to be a fifth family, but little is known about the biochemical apparatus that mediates their splicing. They appear to be related to group II introns, and possibly to spliceosomal introns.
Spliceosomal introns
Nuclear pre-mRNA introns (spliceosomal introns) are characterized by specific intron sequences located at the boundaries between introns and exons. These sequences are recognized by spliceosomal RNA molecules when the splicing reactions are initiated. In addition, they contain a branch point, a particular nucleotide sequence near the 3' end of the intron that becomes covalently linked to the 5' end of the intron during the splicing process, generating a branched intron. Apart from these three short conserved elements, nuclear pre-mRNA intron sequences are highly variable. Nuclear pre-mRNA introns are often much longer than their surrounding exons.
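Nearly all spliceosomal introns begin with GT (GU in the RNA) and end with AG, the so-called GT-AG rule (a minor class uses AT-AC boundaries). The naive scan sketched below, on an invented sequence, shows why these short conserved elements alone cannot define an intron: many overlapping spans satisfy them, which is why the spliceosome also relies on the branch point and surrounding context.

def candidate_introns(dna, min_len=30):
    """Naively list (start, end) spans that begin with GT and end with AG.
    Real splice-site recognition needs far more than these dinucleotides."""
    donors = [i for i in range(len(dna) - 1) if dna[i:i + 2] == "GT"]
    acceptors = [j for j in range(len(dna) - 1) if dna[j:j + 2] == "AG"]
    return [(i, j + 2) for i in donors for j in acceptors
            if j + 2 - i >= min_len]

# Invented pre-mRNA: exon "CAG", a 40-nucleotide intron, exon "GCC".
dna = "CAG" + "GTAAGT" + "T" * 25 + "TTTCTGCAG" + "GCC"
print(candidate_introns(dna))  # [(3, 43), (7, 43)]: overlapping candidates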
tRNA introns
Transfer RNA introns that depend upon proteins for removal occur at a specific location within the anticodon loop of unspliced tRNA precursors, and are removed by a tRNA splicing endonuclease. The exons are then linked together by a second protein, the tRNA splicing ligase. Note that self-splicing introns are also sometimes found within tRNA genes.
Group I and group II introns
Group I and group II introns are found in genes encoding proteins (messenger RNA), transfer RNA and ribosomal RNA in a very wide range of living organisms. Following transcription into RNA, group I and group II introns also make extensive internal interactions that allow them to fold into a specific, complex three-dimensional architecture. These complex architectures allow some group I and group II introns to be self-splicing, that is, the intron-containing RNA molecule can rearrange its own covalent structure so as to precisely remove the intron and link the exons together in the correct order. In some cases, particular intron-binding proteins are involved in splicing, acting in such a way that they assist the intron in folding into the three-dimensional structure that is necessary for self-splicing activity. Group I and group II introns are distinguished by different sets of internal conserved sequences and folded structures, and by the fact that splicing of RNA molecules containing group II introns generates branched introns (like those of spliceosomal RNAs), while group I introns use a non-encoded guanosine nucleotide (typically GTP) to initiate splicing, adding it on to the 5'-end of the excised intron.
On the accuracy of splicing
The spliceosome is a very complex structure containing up to one hundred proteins and five different RNAs. The substrate of the reaction is a long RNA molecule and the transesterification reactions catalyzed by the spliceosome require the bringing together of sites that may be thousands of nucleotides apart. All biochemical reactions are associated with known error rates and the more complicated the reaction the higher the error rate. Therefore, it is not surprising that the splicing reaction catalyzed by the spliceosome has a significant error rate even though there are spliceosome accessory factors that suppress the accidental cleavage of cryptic splice sites.
Under ideal circumstances, the splicing reaction is likely to be 99.999% accurate (an error rate of 10⁻⁵), and the correct exons will be joined and the correct intron deleted. However, these ideal conditions require very close matches to the best splice site sequences and the absence of any competing cryptic splice site sequences within the introns, and those conditions are rarely met in large eukaryotic genes that may cover more than 40 kilobase pairs. Recent studies have shown that the actual error rate can be considerably higher than 10⁻⁵ and may be as high as 2% or 3% errors (an error rate of 2 or 3 × 10⁻²) per gene. Additional studies suggest that the error rate is no less than 0.1% per intron. This relatively high level of splicing errors explains why most splice variants are rapidly degraded by nonsense-mediated decay.
The presence of sloppy binding sites within genes causes splicing errors and it may seem strange that these sites haven't been eliminated by natural selection. The argument for their persistence is similar to the argument for junk DNA.
Although mutations which create or disrupt binding sites may be slightly deleterious, the large number of possible such mutations makes it inevitable that some will reach fixation in a population. This is particularly relevant in species, such as humans, with relatively small long-term effective population sizes. It is plausible, then, that the human genome carries a substantial load of suboptimal sequences which cause the generation of aberrant transcript isoforms. In this study, we present direct evidence that this is indeed the case.
While the catalytic reaction may be accurate enough for effective processing most of the time, the overall error rate may be partly limited by the fidelity of transcription because transcription errors will introduce mutations that create cryptic splice sites. In addition, the transcription error rate of 10⁻⁵–10⁻⁶ is high enough that one in every 25,000 transcribed exons will have an incorporation error in one of the splice sites leading to a skipped intron or a skipped exon. Almost all multi-exon genes will produce incorrectly spliced transcripts but the frequency of this background noise will depend on the size of the genes, the number of introns, and the quality of the splice site sequences.
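The back-of-envelope arithmetic behind the one-in-25,000 figure can be made explicit. In the sketch below, the assumption of roughly four error-sensitive nucleotides across an exon's two splice sites is an illustrative placeholder, not a figure taken from the literature:

```python
# Rough estimate: how often a transcription error corrupts a splice site.
per_nt_error_rate = 1e-5      # upper end of the 1e-5 to 1e-6 range cited above
critical_nt_per_exon = 4      # assumed error-sensitive splice-site positions per exon

p_bad_exon = critical_nt_per_exon * per_nt_error_rate
print(f"about 1 in {1 / p_bad_exon:,.0f} transcribed exons")  # -> about 1 in 25,000
```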
In some cases, splice variants will be produced by mutations in the gene (DNA). These can be SNPs that create a cryptic splice site or mutate a functional site. They can also be somatic cell mutations that affect splicing in a particular tissue or a cell line. When the mutant allele is in a heterozygous state, this will result in production of two abundant splice variants: one functional and one non-functional. In the homozygous state the mutant alleles may cause a genetic disease, such as the hemophilia found in descendants of Queen Victoria, where a mutation in one of the introns in a blood clotting factor gene creates a cryptic 3' splice site resulting in aberrant splicing. A significant fraction of human deaths by disease may be caused by mutations that interfere with normal splicing, mostly by creating cryptic splice sites.
Incorrectly spliced transcripts can easily be detected and their sequences entered into the online databases. They are usually described as "alternatively spliced" transcripts, which can be confusing because the term does not distinguish between real, biologically relevant, alternative splicing and processing noise due to splicing errors. One of the central issues in the field of alternative splicing is working out the differences between these two possibilities. Many scientists have argued that the null hypothesis should be splicing noise, putting the burden of proof on those who claim biologically relevant alternative splicing. According to those scientists, the claim of function must be accompanied by convincing evidence that multiple functional products are produced from the same gene.
Biological functions and evolution
While introns do not encode protein products, they are integral to gene expression regulation. Some introns themselves encode functional RNAs through further processing after splicing to generate noncoding RNA molecules. Alternative splicing is widely used to generate multiple proteins from a single gene. Furthermore, some introns play essential roles in a wide range of gene expression regulatory functions such as nonsense-mediated decay and mRNA export.
After the initial discovery of introns in protein-coding genes of the eukaryotic nucleus, there was significant debate as to whether introns in modern-day organisms were inherited from a common ancient ancestor (termed the introns-early hypothesis), or whether they appeared in genes rather recently in the evolutionary process (termed the introns-late hypothesis). Another theory is that the spliceosome and the intron-exon structure of genes is a relic of the RNA world (the introns-first hypothesis). There is still considerable debate about which of these hypotheses is most correct, but the popular consensus at the moment is that following the formation of the first eukaryotic cell, group II introns from the bacterial endosymbiont invaded the host genome. In the beginning these self-splicing introns excised themselves from the mRNA precursor, but over time some of them lost that ability and their excision had to be aided in trans by other group II introns. Eventually a number of specific trans-acting introns evolved and these became the precursors to the snRNAs of the spliceosome. The efficiency of splicing was improved by association with stabilizing proteins to form the primitive spliceosome.
Early studies of genomic DNA sequences from a wide range of organisms show that the intron-exon structure of homologous genes in different organisms can vary widely. More recent studies of entire eukaryotic genomes have now shown that the lengths and density (introns/gene) of introns vary considerably between related species. For example, while the human genome contains an average of 8.4 introns/gene (139,418 in the genome), the unicellular fungus Encephalitozoon cuniculi contains only 0.0075 introns/gene (15 introns in the genome). Since eukaryotes arose from a common ancestor (common descent), there must have been extensive gain or loss of introns during evolutionary time. This process is thought to be subject to selection, with a tendency towards intron gain in larger species due to their smaller population sizes, and the converse in smaller (particularly unicellular) species. Biological factors also influence which genes in a genome lose or accumulate introns.
Alternative splicing of exons within a gene after intron excision acts to introduce greater variability of protein sequences translated from a single gene, allowing multiple related proteins to be generated from a single gene and a single precursor mRNA transcript. The control of alternative RNA splicing is performed by a complex network of signaling molecules that respond to a wide range of intracellular and extracellular signals.
Introns contain several short sequences that are important for efficient splicing, such as acceptor and donor sites at either end of the intron as well as a branch point site, which are required for proper splicing by the spliceosome. Some introns are known to enhance the expression of the gene that they are contained in by a process known as intron-mediated enhancement (IME).
Actively transcribed regions of DNA frequently form R-loops that are vulnerable to DNA damage. In highly expressed yeast genes, introns inhibit R-loop formation and the occurrence of DNA damage. Genome-wide analysis in both yeast and humans revealed that intron-containing genes have decreased R-loop levels and decreased DNA damage compared to intronless genes of similar expression. Insertion of an intron within an R-loop prone gene can also suppress R-loop formation and recombination. Bonnet et al. (2017) speculated that the function of introns in maintaining genetic stability may explain their evolutionary maintenance at certain locations, particularly in highly expressed genes.
Starvation adaptation
The physical presence of introns promotes cellular resistance to starvation via intron-enhanced repression of ribosomal protein genes of nutrient-sensing pathways.
As mobile genetic elements
Introns may be lost or gained over evolutionary time, as shown by many comparative studies of orthologous genes. Subsequent analyses have identified thousands of examples of intron loss and gain events, and it has been proposed that the emergence of eukaryotes, or the initial stages of eukaryotic evolution, involved an intron invasion. Two definitive mechanisms of intron loss, reverse transcriptase-mediated intron loss (RTMIL) and genomic deletions, have been identified, and are known to occur. The definitive mechanisms of intron gain, however, remain elusive and controversial. At least seven mechanisms of intron gain have been reported thus far: intron transposition, transposon insertion, tandem genomic duplication, intron transfer, intron gain during double-strand break repair (DSBR), insertion of a group II intron, and intronization. In theory it should be easiest to deduce the origin of recently gained introns due to the lack of host-induced mutations, yet even introns gained recently did not arise from any of the aforementioned mechanisms. These findings thus raise the question of whether or not the proposed mechanisms of intron gain fail to describe the mechanistic origin of many novel introns because they are not accurate mechanisms of intron gain, or if there are other, yet to be discovered, processes generating novel introns.
In intron transposition, the most commonly purported intron gain mechanism, a spliced intron is thought to reverse splice into either its own mRNA or another mRNA at a previously intron-less position. This intron-containing mRNA is then reverse transcribed and the resulting intron-containing cDNA may then cause intron gain via complete or partial recombination with its original genomic locus.
Transposon insertions have been shown to generate thousands of new introns across diverse eukaryotic species. Transposon insertions sometimes result in the duplication of a short target sequence on each side of the transposon. Such an insertion can intronize the transposon without disrupting the coding sequence when the transposon inserts into an AGGT sequence or itself encodes the splice sites within the transposon sequence. Where intron-generating transposons do not create target site duplications, the elements include both splice sites, GT (5') and AG (3'), and are therefore spliced precisely without affecting the protein-coding sequence. It is not yet understood why these elements are spliced, whether by chance or by some preferential action by the transposon.
In tandem genomic duplication, due to the similarity between consensus donor and acceptor splice sites, which both closely resemble AGGT, the tandem genomic duplication of an exonic segment harboring an AGGT sequence generates two potential splice sites. When recognized by the spliceosome, the sequence between the original and duplicated AGGT will be spliced, resulting in the creation of an intron without alteration of the coding sequence of the gene, as the sketch below illustrates. Double-strand break repair via non-homologous end joining was recently identified as a source of intron gain when researchers identified short direct repeats flanking 43% of gained introns in Daphnia. These numbers must be compared to the number of conserved introns flanked by repeats in other organisms, though, for statistical relevance. For group II intron insertion, the retrohoming of a group II intron into a nuclear gene was proposed to cause recent spliceosomal intron gain.
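The tandem-duplication mechanism can be verified with a short string manipulation. The following sketch uses arbitrary placeholder sequences; it shows that duplicating an exonic segment harboring AGGT, then splicing from the GT of the first copy to the AG of the second copy, removes a canonical GT...AG intron and restores the original coding sequence exactly:

```python
# Schematic demonstration of intron gain by tandem genomic duplication.
u, v = "CTTCA", "GCAAT"            # arbitrary placeholder fragments
segment = u + "AGGT" + v           # exonic segment harboring AGGT
pre, post = "ATG", "TAA"           # flanking exonic sequence (placeholders)

original = pre + segment + post
duplicated = pre + segment + segment + post   # tandem duplication of the segment

# Splice donor: the GT of the first AGGT; splice acceptor: the AG of the second.
exon_5p = pre + u + "AG"           # exonic sequence kept upstream of the intron
intron = "GT" + v + u + "AG"       # the novel intron, with canonical GT...AG ends
exon_3p = "GT" + v + post          # exonic sequence kept downstream

assert duplicated == exon_5p + intron + exon_3p
spliced_mrna = exon_5p + exon_3p   # transcript after the spliceosome removes the intron
assert spliced_mrna == original    # coding sequence is unchanged
```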
Intron transfer has been hypothesized to result in intron gain when a paralog or pseudogene gains an intron and then transfers this intron via recombination to an intron-absent location in its sister paralog. Intronization is the process by which mutations create novel introns from formerly exonic sequence. Thus, unlike other proposed mechanisms of intron gain, this mechanism does not require the insertion or generation of DNA to create a novel intron.
The only hypothesized mechanism of recent intron gain lacking any direct evidence is that of group II intron insertion, which, when demonstrated in vivo, abolishes gene expression. Group II introns are therefore presumed to be the ancestors of spliceosomal introns, acting as site-specific retroelements, but no longer responsible for intron gain. Tandem genomic duplication is the only proposed mechanism with supporting in vivo experimental evidence: a short intragenic tandem duplication can insert a novel intron into a protein-coding gene, leaving the corresponding peptide sequence unchanged. This mechanism also has extensive indirect evidence lending support to the idea that tandem genomic duplication is a prevalent mechanism for intron gain. The testing of other proposed mechanisms in vivo, particularly intron gain during DSBR, intron transfer, and intronization, is possible, although these mechanisms must be demonstrated in vivo to solidify them as actual mechanisms of intron gain. Further genomic analyses, especially when executed at the population level, may then quantify the relative contribution of each mechanism, possibly identifying species-specific biases that may shed light on varied rates of intron gain amongst different species.
See also
Structure:
Exon
mRNA
Eukaryotic chromosome fine structure
Small t intron
Splicing:
Alternative splicing
Exitron
Minor spliceosome
Outron
Function
MicroRNA
Others:
Exon shuffling
Intein
Noncoding DNA
Noncoding RNA
Selfish DNA
Twintron
Exon-intron database
References
External links
A search engine for exon/intron sequences defined by NCBI
Bruce Alberts, Alexander Johnson, Julian Lewis, Martin Raff, Keith Roberts, and Peter Walter, Molecular Biology of the Cell, 2007. Fourth edition is available online through the NCBI Bookshelf: link
Jeremy M Berg, John L Tymoczko, and Lubert Stryer, Biochemistry 5th edition, 2002, W H Freeman. Available online through the NCBI Bookshelf: link
Intron finding tool for plant genomic sequences
Exon-intron graphic maker
Gene expression
DNA
Spliceosome
RNA splicing
Non-coding DNA | Intron | Chemistry,Biology | 4,566 |
32,895,691 | https://en.wikipedia.org/wiki/Kune%20%28software%29 | Kune was a free/open source distributed social network focused on collaboration rather than just on communication. It centered on online real-time collaborative editing, decentralized social networking and web publishing, addressing workgroups rather than just individuals. It aimed to allow for the creation of online spaces for collaborative work where organizations and individuals can build projects online, coordinate common agendas, set up virtual meetings, publish on the web, and join organizations with similar interests. It had a special focus on the needs of Free Culture and social movements. Kune was a project of the Comunes Collective. The project appears abandoned since 2017, with no new commits, blog entries or site activity.
Technical details
Kune was programmed using the Java-based GWT on the client side, integrating Apache Wave (formerly Google Wave) and mainly using the open protocols XMPP and the Wave Federation Protocol. The client-side GWT Java sources generate obfuscated, deeply optimized JavaScript forming a single-page application. Wave extensions (gadgets, bots) run on top of Kune (as apps do on Facebook) and can be programmed in Java+GWT, JavaScript or Python.
The last version was under development from 2007 until 2017. The code was hosted in a Git repository on Gitorious, with a development site and its main node maintained by the Comunes Collective.
Kune was 100% free software and was built using only free software. Its software was licensed under the AGPL license, while the art was under a Creative Commons BY-SA license.
Philosophy
Kune was born in order to face a growing concern from the community behind it. Nowadays, groups (a group of friends, activists, an NGO, a small start-up) that need to work together typically use multiple gratis ("free as in beer") commercial, centralized, for-profit services (e.g. Google Docs, Google Groups, Facebook, Wordpress.com, Dropbox, Flickr, eBay...) in order to communicate and collaborate online. However, "if you're not paying for it, you're the product". To avoid that, such groups of users may ask a technical expert to build them mailing lists, a webpage and maybe to set up an etherpad. However, technicians are then needed for any new list (as the groups cannot configure, e.g., GNU Mailman themselves), for any configuration change, etc., creating a strong dependency and ultimately a bottleneck.
Kune aims to cover all the needs of such groups to communicate and collaborate, in a usable way and thus without depending on technical experts. It aims to be a free/libre web service (and thus in the cloud), but decentralized like email, so users can choose the server they want and still interoperate transparently with the rest.
Unlike most distributed social networks, this software focuses on collaboration and building, not only on communication and sharing. Thus, Kune aims to ultimately replace not just Facebook, but also all the above-mentioned commercial services. Kune has a strong focus on the construction of Free Culture and, eventually, on facilitating Commons-based peer production.
History
The origin of Kune relies on the community behind Ourproject.org. Ourproject aimed to provide for Free Culture (social/cultural projects) what SourceForge and other software forges meant for free software: a collection of communication and collaboration tools that would boost the emergence of community-driven free projects. However, although Ourproject was relatively successful, it was far from the original aims. The analysis of the situation in 2005 concluded that only the groups that had a techie among them (who would manage Mailman or install a CMS) were able to move forward, while the rest would abandon the service. Thus, new free collaborative tools were needed, more usable and suitable for anyone, as the available free tools required a high degree of technical expertise. This is why Kune, whose name means "together" in Esperanto, was developed.
The first prototypes of Kune were developed using Ruby on Rails and Pyjamas (later known as Pyjs). However, with the release of Java and the Google Web Toolkit as free software, the community embraced these technologies from 2007 on. In 2009, with a stable codebase and about to release a major version of Kune, Google announced the Google Wave project and promised it would be released as free software. Wave used the same technologies as Kune (Java + GWT, Guice, the XMPP protocol), so it would be easy to integrate after its release. Besides, Wave offered an open federated protocol, easy extensibility (through gadgets), easy version control, and very good real-time editing of documents. Thus, the community decided to halt the development of Kune and wait for its release, in the meantime developing gadgets that would later be integrated into Kune. In this same period, the community established the Comunes Association (with an acknowledged inspiration in Software in the Public Interest) as a non-profit legal umbrella for free software tools encouraging the Commons and facilitating the work of social movements. The umbrella covered Ourproject, Kune and Move Commons, together with some other minor projects.
In November 2010, the free Apache Wave (previously Wave-in-a-Box) was released, under the umbrella of the Apache Foundation. Since then, the community began integrating its source code within the Kune previous codebase, and with the support of the IEPALA Foundation. Kune released its Beta and moved to production in April 2012.
Since then, Kune has been catalogued as "activism 2.0" and a citizen tool, a tool for NGOs, a general-purpose multi-tool (and, following that, criticized for the risk of falling into the second-system effect) and an example of the new paradigm. It was selected as "open website of the week" by the Open University of Catalonia, and as one of the #Occupy Tech projects. There have also been plans for another federated social network, Lorea (based on Elgg), to connect with Kune.
Feature list
All the functionalities of Apache Wave, that is collaborative federated real-time editing, plus
Communication
Chat and chatrooms compatible with Gmail and Jabber through XMPP (with several XEP extensions), as it integrates Emite
Social networking (federated)
Real-time collaboration for groups in:
Documents: as in Google Docs
Wikis
Lists: as in Google Groups but minimizing emails, through waves
Group Tasks
Group Calendar: as in Google Calendar, with ical export
Group Blogs
Web-creation: aiming to publish contents directly on the web (as in WordPress, with a dashboard and public view) (in development)
Bartering: aiming to decentralize bartering as in eBay
Advanced email
Waves: aims to replace most uses of email
Inbox: as in email, all your conversations and documents in all kunes are controlled from your inbox
Email notifications (Projected: replies from email)
Multimedia & Gadgets
Image or Video galleries integrated in any doc
Maps, mindmaps, Twitter streams, etc.
Polls, voting, events, etc.
and more via Apache Wave extensions, easy to program (as in Facebook apps, they run on top of Kune)
Federation
Distributed Social Networking the same way as e-mail: from one inbox you control all your activity in all kunes, and you can collaborate with anyone or any group regardless of the kune where they were registered.
Interoperable with any Kune server or Wave-based system
Chat interoperable with any XMPP server
Usability
Strong focus on usability for any user
Animated tutorials for each tool
Drag&Drop for sharing contents, add users to a doc, change roles, delete contents, etc.
Shortcuts
Free culture
Developed using free software and released under AGPL
Easy assistant for choosing content licenses for groups. Default license is Creative Commons BY-SA.
Developer-friendly
Debian/Ubuntu package for easy installation
Wave Gadgets can be programmed in Java+GWT, JavaScript or Python
Supporters and adopters
Kune has the active support of several organizations and institutions:
Comunes Association, whose community is behind Kune development. It used to host a Kune server for free projects: https://kune.cc
IEPALA Foundation, which supported the development with economic and technical resources. It used to host a Kune server for non-governmental organizations: "Social Gloobal"
Grasia Software Agent Research Group of the Complutense University of Madrid provided technical resources.
Interns from the Master of Free Software from the King Juan Carlos University participated in the development.
Trainees from the American University of Science and Technology (Lebanon) participated in the system administration.
Paulo Freire Institute in Brazil participated in the early design and prototypes.
The Kune workgroup of the Medialab Prado participated in the beta-testing.
See also
Apache Wave
Comunes Collective
Distributed social network
Comparison of software and protocols for distributed social networking
Ourproject.org
Wave Federation Protocol
References
External links
Kune.cc main site
Project hosting websites
Creative Commons-licensed websites
Free project management software
Social information processing
Groupware
Collaborative real-time editors
2012 software
Free software programmed in Java (programming language)
Internet properties established in 2007
Software using the GNU Affero General Public License | Kune (software) | Technology | 1,939 |
10,366,709 | https://en.wikipedia.org/wiki/Co-chaperone | Co-chaperones are proteins that assist chaperones in protein folding and other functions. Co-chaperones are the non-client binding molecules that assist in protein folding mediated by Hsp70 and Hsp90. They are particularly essential in stimulating the ATPase activity of these chaperone proteins. There are a great number of different co-chaperones; however, based on their domain structure, most of them fall into two groups: J-domain proteins and tetratricopeptide repeat (TPR) proteins.
Co-chaperones assist heat shock proteins in the protein folding process, and they can function in a number of ways. Primarily, co-chaperones are involved in the ATPase functionality of their associated heat shock proteins. Co-chaperones catalyze the hydrolysis of ATP to ADP on their respective chaperones, which then allows the chaperones to undergo a large conformational change that either lets them bind their substrates with higher affinity or aids in the release of the substrate following protein folding, as in the case of the co-chaperone p23.
J-proteins, also known as DnaJ or Hsp40, are important co-chaperones for Hsp70: they have the ability to bind to polypeptides, recruit the chaperone protein DnaK, and pass the polypeptide along to this chaperone by catalyzing the ATP hydrolysis that allows DnaK to bind the unfolded polypeptide with high affinity. Another co-chaperone, GrpE, comes in following the folding of this protein to cause a conformational change in DnaK that allows it to release the folded protein. The mechanism of TPR proteins is less well studied; these domains have been shown to interact with Hsp90 and Hsp70 and may be involved in the creation of an Hsp70-Hsp90 multi-chaperone complex.
Co-chaperones may also play an important role in misfolding diseases such as cystic fibrosis. An interaction between Hsp90 and its co-chaperone, Aha1, is essential to the proper folding of the cystic fibrosis transmembrane conductance regulator (CFTR). Other examples of co-chaperones' roles in illness include neurodegenerative diseases. Alzheimer's and Parkinson's disease involve a number of proteins that can aggregate if not properly chaperoned. The co-chaperones CSPα (DNAJC5), auxilin (DNAJC6) and RME-8 (DNAJC13) are important for preserving folding and assembly, therefore preventing protein aggregation. Mutations in these proteins have been associated with the early onset of neurodegenerative diseases.
List of co-chaperones
Aha1
auxilin
BAG1
CAIR-1/Bag-3
CDC37/p50
Chp1
Cysteine string protein (CSP)
Cyp40
Djp1
DnaJ
E3/E4-ubiquitin ligase
FKBP4
GAK
GroES
GrpE
Hch1
Hip (Hsc70-interacting protein)/ST13
Hop (Hsp70/Hsp90 organizing protein)/STIP1
Mrj
PP5
Sacsin
SGT
Snl1
SODD/Bag-4
Swa2/Aux1
Tom34
Tom70
UNC-45
WISp39
See also
Chaperone (protein)
Heat shock protein
References
Further reading | Co-chaperone | Chemistry | 713 |
13,649,130 | https://en.wikipedia.org/wiki/ISO/IEC%20JTC%201/SC%2022 | ISO/IEC JTC 1/SC 22 Programming languages, their environments and system software interfaces is a standardization subcommittee of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) that develops and facilitates standards within the fields of programming languages, their environments and system software interfaces. ISO/IEC JTC 1/SC 22 is also sometimes referred to as the "portability subcommittee". The international secretariat of ISO/IEC JTC 1/SC 22 is the American National Standards Institute (ANSI), located in the United States.
History
ISO/IEC JTC 1/SC 22 was created in 1985, with the intention of creating a JTC 1 subcommittee that would address standardization within the field of programming languages, their environments and system software interfaces. Before the creation of ISO/IEC JTC 1/SC 22, programming language standardization was addressed by ISO TC 97/SC 5. Many of the original working groups of ISO/IEC JTC 1/SC 22 were inherited from a number of the working groups of ISO TC 97/SC 5 during its reorganization, including ISO/IEC JTC 1/SC 22/WG 2 – Pascal (originally ISO TC 97/SC 5/WG 4), ISO/IEC JTC 1/SC 22/WG 4 – COBOL (originally ISO TC 97/SC 5/ WG 8), and ISO/IEC JTC 1/SC 22/WG 5 – Fortran (originally ISO TC 97/SC 5/WG 9). Since then, ISO/IEC JTC 1/SC 22 has created and disbanded many of its working groups in response to the changing standardization needs of programming languages, their environments and system software interfaces.
Scope and mission
The scope of ISO/IEC JTC 1/SC 22 is the standardization of programming languages (such as COBOL, Fortran, Ada, C, C++, and Prolog), their environments (such as POSIX and Linux), and systems software interfaces, such as:
Specification techniques
Common facilities and interfaces
ISO/IEC JTC 1/SC 22 also produces common language-independent specifications to facilitate standardized bindings between programming languages and system services, as well as greater interaction between programs written in different languages.
The scope of ISO/IEC JTC 1/SC 22 does not include specialized languages or environments within the program of work of other subcommittees or technical committees.
The mission of ISO/IEC JTC 1/SC 22 is to improve portability of applications, productivity and mobility of programmers, and compatibility of applications over time within high level programming environments. The three main goals of ISO/IEC JTC 1/SC 22 are:
To support the current global investment in software applications through programming languages standardization
To improve programming language standardization based on previous specification experience in the field
To respond to emerging technological opportunities
Structure
Although ISO/IEC JTC 1/SC 22 has had a total of 24 working groups (WGs), many have been disbanded when their focus was no longer applicable to current standardization needs. ISO/IEC JTC 1/SC 22 is currently made up of eight (8) active working groups, each of which carries out specific tasks in standards development within the field of programming languages, their environments and system software interfaces. The focus of each working group is described in the group's terms of reference.
Collaborations
ISO/IEC JTC 1/SC 22 works in close collaboration with a number of other organizations or subcommittees, some internal to ISO, and others external to it. Organizations in liaison with ISO/IEC JTC 1/SC 22, internal to ISO are:
ISO/IEC JTC 1/SC 2, Coded character sets
ISO/IEC JTC 1/SC 7, Software and systems engineering
ISO/IEC JTC 1/SC 27, IT Security techniques
ISO/TC 37, Terminology and other language and content resources
ISO/TC 215, Health informatics
Organizations in liaison to ISO/IEC JTC 1/SC 22 that are external to ISO are:
Ecma International
Linux Foundation
Association for Computing Machinery Special Interest Group on Ada (ACM SIGAda)
Ada-Europe
MISRA
Member countries
Countries pay a fee to ISO to be members of subcommittees.
The 23 "P" (participating) members of ISO/IEC JTC 1/SC 22 are: Austria, Bulgaria, Canada, China, Czech Republic, Denmark, Finland, France, Germany, Israel, Italy, Japan, Kazakhstan, Republic of Korea, Netherlands, Poland, Russian Federation, Slovenia, Spain, Switzerland, Ukraine, United Kingdom, and United States of America.
The 21 "O" (observing) members of ISO/IEC JTC 1/SC 22 are: Argentina, Belgium, Bosnia and Herzegovina, Cuba, Egypt, Ghana, Greece, Hungary, Iceland, India, Indonesia, Islamic Republic of Iran, Ireland, Democratic People’s Republic of Korea, Malaysia, New Zealand, Norway, Portugal, Romania, Serbia, and Thailand.
Published standards and technical reports
ISO/IEC JTC 1/SC 22 currently has 98 published standards in programming languages, their environments and system software interfaces.
See also
ISO/IEC JTC 1
List of ISO standards
American National Standards Institute
International Organization for Standardization
International Electrotechnical Commission
References
External links
ISO/IEC JTC 1/SC 22 page at ISO
022
Programming language standards | ISO/IEC JTC 1/SC 22 | Technology | 1,130 |
60,934,902 | https://en.wikipedia.org/wiki/Trillion | Trillion is a number with two distinct definitions:
1,000,000,000,000, i.e. one million million, or 10¹² (ten to the twelfth power), as defined on the short scale. This is now the meaning in both American and British English.
1,000,000,000,000,000,000, i.e. 10¹⁸ (ten to the eighteenth power), as defined on the long scale. This is one million times larger than the short scale trillion. This is the historical meaning in English and the current use in many non-English-speaking countries where trillion and billion (10¹² on the long scale) maintain their long scale definitions.
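The two naming systems follow a simple pattern: on the short scale the n-th "-illion" equals 10^(3n+3), while on the long scale it equals 10^(6n). A minimal sketch of both rules:

```python
# Value of the n-th "-illion": n = 1 million, n = 2 billion, n = 3 trillion, ...
def short_scale(n):
    return 10 ** (3 * n + 3)

def long_scale(n):
    return 10 ** (6 * n)

assert short_scale(3) == 10**12   # short-scale trillion: one million million
assert long_scale(3) == 10**18    # long-scale trillion
assert long_scale(2) == 10**12    # a long-scale billion equals a short-scale trillion
```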
Usage
Originally, the United Kingdom used the long scale trillion. However, since 1974, official UK statistics have used the short scale. Since the 1950s, the short scale has been increasingly used in technical writing and journalism, although the long scale definition still has some limited usage.
American English has always used the short scale definition.
Other countries use the word trillion (or words cognate to it) to denote either the long scale or short scale trillion. For details, see current usage.
During the height of hyperinflation in Zimbabwe in 2008, people became accustomed to speaking about their daily expenses in terms of trillions.
When Italy used the lira as its currency, eventually converted at about 2,000 lire to the euro, it was found that Italians were more comfortable than British people with words for large numbers such as trillion.
Etymology
Whilst the words billion and trillion, or variations thereof, were first used by French mathematicians in the 15th century, the word trillion was first used in English in the 1680s and comes from the Italian word trilione.
The word originally meant the third power of one million. As a result, it was mainly used to express the concept of an enormous number, similar to the words zillion and gazillion. However, it was more commonly used in the US.
See also
Names of large numbers
Billion, another ambiguous numerical word
References
Large numbers
English words | Trillion | Mathematics | 418 |
77,276,707 | https://en.wikipedia.org/wiki/Transport%20ecology | Transport ecology is the science of the human-transport-environment system. There are two chairs of transport ecology in Germany, in Dresden and Karlsruhe.
Vocabulary
Mobility is about satisfying the need to travel; to achieve mobility, means of transport are needed. Mobility corresponds to the human need to travel - recognised by Article 13 of the Universal Declaration of Human Rights - while transport is a means of achieving mobility.
In public debate, mobility is often confused with transport. The "Dresden Declaration" calls for people's mobility needs to be met in a cost-effective and environmentally friendly way.
Suggested measures
Proposed measures (whether they involve transport modes, the concept of "traffic avoidance, change of transport mode, technical improvements", the tautology of transport ecology or the "4 E", i.e. Enforcement, Education, Engineering, Economy/Encouragement) are then scrutinised for transparency, fairness (polluters pay), unwanted side-effects and the applicability of the measure ("are there other examples of application elsewhere?").
Traffic avoidance, modal shift and finally technical improvements
The concept of « traffic avoidance, modal shift and technical improvements » involves firstly reducing the volume of transport, then promoting intermodality, and finally making technical improvements to vehicles while making the energy they consume sustainable.
In effect, this means implementing the Kaya identity applied to transport (see below).
Enforcement, Education, Engineering, Economy/Encouragement
These methods are also known as the "4 E". Enforcement refers to measures of order, whether obligations or prohibitions. Education refers to training and communication. Engineering is of a purely technical nature, whereas Economy/Encouragement refers to incentive systems, which may well be financial.
Tautology of transport ecology
As long as pollution is proportional to the distance travelled, Udo Becker defines the tautology of transport ecology (in German « verkehrsökologische Tautologie ») as follows:
$U = D \cdot \frac{V}{D} \cdot \frac{U}{V}$
with:
$U$: pollution;
$D$: transportation demand (in passenger-km);
$V$: vehicle traffic (in vehicle-km);
$\frac{V}{D}$: inverse of vehicle occupancy (in vehicle-km per passenger-km);
$\frac{U}{V}$: pollution per vehicle-km.
Demand can be decomposed according to:
$D = P \cdot j \cdot \bar{d}$
with:
$P$: population;
$j$: number of journeys per person;
$\bar{d}$: mean distance of a journey.
Pollution can therefore be expressed as the sum of pollution over the modes of transport:
$U = P \cdot j \cdot \bar{d} \cdot \sum_m s_m \cdot \frac{V_m}{D_m} \cdot \frac{U_m}{V_m}$
with:
$s_m$: modal share of mode $m$ (dimensionless quantity);
$\frac{V_m}{D_m}$: inverse of occupancy for mode $m$ (in vehicle-km per passenger-km);
$\frac{U_m}{V_m}$: pollution per vehicle-km for mode $m$.
Kaya identity applied to transport
The general formulation takes on a more specific form when it comes to decarbonising transport, following the Kaya identity.
With pollution identified with CO2 emissions, the factor $\frac{U_m}{V_m}$ is replaced by
$\frac{E_m}{V_m} \cdot \frac{C_m}{E_m}$
with:
$\frac{E_m}{V_m}$: inverse of energy efficiency for mode $m$ (for instance in kWh/100 km per vehicle);
$\frac{C_m}{E_m}$: carbon intensity of the energy for mode $m$ (for instance in g CO2 eq./kWh).
CO2 emissions can therefore be decomposed according to:
$C = P \cdot j \cdot \bar{d} \cdot \sum_m s_m \cdot \frac{V_m}{D_m} \cdot \frac{E_m}{V_m} \cdot \frac{C_m}{E_m}$
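The decomposition lends itself to a direct calculation. The following sketch computes total CO2 emissions from the factors of the identity; all numbers are illustrative placeholders, not measured values:

```python
# CO2 = P * j * d * sum_m [ s_m * (V_m/D_m) * (E_m/V_m) * (C_m/E_m) ]
P = 1_000_000   # population (illustrative)
j = 1_000       # journeys per person per year (illustrative)
d = 10.0        # mean journey distance in km (illustrative)

modes = {
    #        share  occupancy  kWh per vehicle-km  g CO2 eq. per kWh
    "car":  (0.60,    1.5,       0.6,                250.0),
    "bus":  (0.15,   20.0,       1.5,                250.0),
    "rail": (0.25,  100.0,       3.0,                 50.0),
}

total_g = P * j * d * sum(
    share * (1.0 / occupancy) * kwh_per_vkm * g_per_kwh
    for share, occupancy, kwh_per_vkm, g_per_kwh in modes.values()
)
print(f"{total_g / 1e12:.2f} Mt CO2 eq. per year")
```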
See also
Green transport hierarchy
Health and environmental impact of transport
References
External links
Chair of transport ecology, Dresden University of Technology
Institute for Transport Systems & Infrastructure, Karlsruhe University of Applied Sciences
Sustainable transport | Transport ecology | Physics | 675 |
2,898,771 | https://en.wikipedia.org/wiki/Lambda%20Aurigae | Lambda Aurigae, Latinized from λ Aurigae, is the Bayer designation for a solar analog star in the northern constellation of Auriga. It is visible to the naked eye with an apparent visual magnitude of 4.71. Its distance from the Earth is known from parallax measurements. The star is drifting further away with a high radial velocity of +66.5 km/s, having made its closest approach to the Sun some 117,300 years ago. It also has a high proper motion, traversing the celestial sphere at a relatively rapid rate.
Properties
This is a G-type main sequence star with a stellar classification of G1 V. It is sometimes listed with a class of G1.5 IV-V Fe-1, which indicates the spectrum shows some features of a more evolved subgiant star along with a noticeable underabundance of iron. In terms of composition it is similar to the Sun, while the mass and radius are slightly larger. It is 73% more luminous than the Sun and radiates this energy from its outer atmosphere at an effective temperature slightly higher than the Sun's. At this heat, the star glows with the yellow hue of a G-type star. It has a low level of surface activity and is a candidate Maunder minimum analog.
Lambda Aurigae has been examined for excess infrared emission that might indicate a circumstellar disk of dust, but no significant surplus has been observed. It is a possible member of the Epsilon Indi Moving Group of stars that share a common motion through space.
Name
This star may have been called by the name Al Hurr, meaning "the fawn" in Arabic. Lambda Aurigae, along with μ Aur and σ Aur, were Kazwini's Al Ḣibāʽ (ألحباع), the Tent. According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al Ḣibāʽ was the title for three stars: λ Aur as Al Ḣibāʽ I, μ Aur as Al Ḣibāʽ II and σ Aur as Al Ḣibāʽ III.
In Chinese, a name meaning Pool of Harmony refers to an asterism consisting of λ Aurigae, ρ Aurigae and HD 36041. Consequently, the Chinese name for λ Aurigae itself derives from this asterism.
Observation
From Earth, Lambda Aurigae has an apparent magnitude of 4.71. The closest large neighboring star to Lambda Aurigae is Capella. Hypothetically viewed from Lambda Aurigae, Capella's quadruple star system would have an apparent magnitude of approximately -5.48, about 40 times brighter than Sirius appears at maximum brightness from Earth.
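Figures like these follow from the distance modulus m = M + 5·log10(d / 10 pc) and the magnitude-flux relation. The sketch below reproduces the quoted numbers approximately; the absolute magnitude of the Capella system and its separation from Lambda Aurigae used here are assumed round values for illustration, not catalogue data:

```python
import math

def apparent_magnitude(M, d_pc):
    """Distance modulus: m = M + 5 * log10(d / 10 pc)."""
    return M + 5 * math.log10(d_pc / 10.0)

def brightness_ratio(m1, m2):
    """How many times brighter magnitude m1 is than m2 (smaller = brighter)."""
    return 10 ** (0.4 * (m2 - m1))

M_capella = -0.5   # assumed combined absolute magnitude of the Capella system
separation = 1.0   # assumed Lambda Aurigae - Capella distance in parsecs

m = apparent_magnitude(M_capella, separation)
print(round(m, 1))                          # about -5.5
print(round(brightness_ratio(m, -1.46)))    # vs. Sirius at -1.46: about 40x brighter
```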
References
External links
Image Lambda Aurigae
G-type main-sequence stars
G-type subgiants
Aurigae, Lambda
Maunder Minimum
Auriga
Aurigae, Lambda
BD+39 1248
Aurigae, 15
0197
034411
024813
1729 | Lambda Aurigae | Astronomy | 646 |
12,821 | https://en.wikipedia.org/wiki/Gate | A gate or gateway is a point of entry to or from a space enclosed by walls. The word derives from the Old Norse gat, meaning road or path; other terms include yett and port. The concept originally referred to the gap or hole in the wall or fence, rather than to the barrier that closed it. Gates may prevent or control the entry or exit of individuals, or they may be merely decorative. The moving part or parts of a gateway may be considered "doors", as they are fixed at one side whilst opening and closing like one.
A gate may have a latch that can be raised and lowered to open the gate or to prevent it from swinging. Gate operation can be either automated or manual. Locks are also used on gates to increase security.
Larger gates can be used for a whole building, such as a castle or fortified town. Doors can also be considered gates when they are used to block entry, as is prevalent within a gatehouse.
Purpose-specific types of gate
Baby gate: a safety gate to protect babies and toddlers
Badger gate: gate to allow badgers to pass through rabbit-proof fencing
City gate of a walled city
Hampshire gate (a.k.a. New Zealand gate, wire gate, etc.)
Kissing gate on a footpath
Lychgate with a roof
Mon: Japanese gate. The religious torii is comparable to the Chinese pailou (paifang), Indian torana, Indonesian paduraksa and Korean hongsalmun. Mon are widespread in Japanese gardens.
Portcullis of a castle
Race gate used for checkpoints on race tracks.
Slip gate on footpaths
Turnstile
Watergate of a castle by navigable water
Slalom skiing gates
Wicket gate
Image gallery
See also
City Gate
Barricade
Boom barrier (a.k.a. boom gate)
Border
Gate tower
Gopuram
Leave the gate as you found it
Portal (architecture)
Portcullis
Threshold (disambiguation)
Triumphal arch
List of scandals with "-gate" suffix
Watergate, as used in politics
References
External links
Doors
Fortification (architectural elements)
Garden features
Buildings and structures by type | Gate | Engineering | 430 |
3,057,614 | https://en.wikipedia.org/wiki/P-form%20electrodynamics | In theoretical physics, $p$-form electrodynamics is a generalization of Maxwell's theory of electromagnetism.
Ordinary (viz. one-form) Abelian electrodynamics
We have a one-form $A$ and a gauge symmetry
$A \to A + d\alpha$,
where $\alpha$ is any arbitrary fixed 0-form and $d$ is the exterior derivative, and a gauge-invariant vector current $J$ with density 1 satisfying the continuity equation
$d{\star}J = 0$,
where ${\star}$ is the Hodge star operator.
Alternatively, we may express $J$ as a closed $(n-1)$-form, but we do not consider that case here.
$F$ is a gauge-invariant 2-form defined as the exterior derivative $F = dA$.
$F$ satisfies the equation of motion
$d{\star}F = {\star}J$
(this equation obviously implies the continuity equation).
This can be derived from the action
$S = \int_M \left[ \frac{1}{2} F \wedge {\star}F - A \wedge {\star}J \right],$
where $M$ is the spacetime manifold.
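In index notation on a flat spacetime, these coordinate-free statements take the familiar Maxwell form; the sketch below assumes a metric signature and sign conventions consistent with the action above (other conventions flip signs):

```latex
% Components of F = dA: the electromagnetic field strength tensor
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
% Gauge symmetry A -> A + d\alpha in components
A_\mu \to A_\mu + \partial_\mu \alpha
% Equation of motion d\star F = \star J in components
\partial_\mu F^{\mu\nu} = J^\nu
% which immediately implies current conservation
\partial_\nu J^\nu = 0
```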
p-form Abelian electrodynamics
We have a $p$-form $B$ and a gauge symmetry
$B \to B + d\alpha$,
where $\alpha$ is any arbitrary fixed $(p-1)$-form and $d$ is the exterior derivative, and a gauge-invariant $p$-vector current $J$ with density 1 satisfying the continuity equation
$d{\star}J = 0$,
where ${\star}$ is the Hodge star operator.
Alternatively, we may express $J$ as a closed $(n-p)$-form.
$G$ is a gauge-invariant $(p+1)$-form defined as the exterior derivative $G = dB$.
$G$ satisfies the equation of motion
$d{\star}G = {\star}J$
(this equation obviously implies the continuity equation).
This can be derived from the action
$S = \int_M \left[ \frac{1}{2} G \wedge {\star}G - B \wedge {\star}J \right],$
where $M$ is the spacetime manifold.
Other sign conventions do exist.
The Kalb–Ramond field is an example with $p = 2$ in string theory; the Ramond–Ramond fields whose charged sources are D-branes are examples for all values of $p$. In eleven-dimensional supergravity or M-theory, we have a 3-form electrodynamics.
Non-abelian generalization
Just as we have non-abelian generalizations of electrodynamics, leading to Yang–Mills theories, we also have non-abelian generalizations of $p$-form electrodynamics. They typically require the use of gerbes.
References
Henneaux; Teitelboim (1986), "-Form electrodynamics", Foundations of Physics 16 (7): 593-617,
Navarro; Sancho (2012), "Energy and electromagnetism of a differential -form ", J. Math. Phys. 53, 102501 (2012)
Electrodynamics
String theory | P-form electrodynamics | Astronomy,Mathematics | 459 |
20,635,292 | https://en.wikipedia.org/wiki/SOAtest | Parasoft SOAtest is a testing and analysis tool suite for testing and validating APIs and API-driven applications (e.g., cloud, mobile apps, SOA). Basic testing functionality includes functional unit testing, integration testing, regression testing, system testing, security testing, simulation and mocking, runtime error detection, web UI testing, interoperability testing, WS-* compliance testing, and load testing.
Supported technologies include Web services, REST, JSON, MQ, JMS, TIBCO, HTTP, XML, EDI, mainframes, and custom message formats.
Parasoft SOAtest introduced service virtualization via server emulation and stubs in 2002; by 2007, it provided an intelligent stub platform that emulated the behavior of dependent services that were otherwise difficult to access or configure during development and testing. Extended service virtualization functionality is now in Parasoft Virtualize, while SOAtest provides intelligent stubbing.
References
External links
Parasoft SOAtest.
API Testing (solution featuring Parasoft SOAtest)
Service Virtualization (related service virtualization product)
Computer security software
Load testing tools
Security testing tools
Software testing tools
Static program analysis tools
Unit testing frameworks
Web service development tools | SOAtest | Engineering | 261 |
23,433,902 | https://en.wikipedia.org/wiki/C14H18N4O3 | The molecular formula C14H18N4O3 (molar mass: 290.318 g/mol, exact mass: 290.1379 u) may refer to:
Benomyl
Trimethoprim (TMP)
Molecular formulas | C14H18N4O3 | Physics,Chemistry | 68 |
1,406,322 | https://en.wikipedia.org/wiki/List%20of%20garden%20features | Garden features are physical elements, both natural and manmade, used in garden design.
Artificial waterfall
Avenue
Aviary
Bog garden
Borrowed scenery
Bosquet
Broderie
Belvedere
Chashitsu (tea house)
Chōzubachi (basin)
Deck
Dirty kitchen
Exedra
Fish pond
Folly
Footbridge
Fountain
Garden pond
Garden railway
Garden room
Gazebo
Gloriette
Greenhouse
Green wall
Grotto
Shell grotto
Ha-ha
Hedge
Hedge maze
Herbaceous border
Herb garden
Jeux d'eau
Kitchen garden
Knot garden
Koi pond
Lawn
Tapestry lawn
Moss lawn
Monopteros
Moon bridge
Moon gate
Mound
Nine-turn bridge
Nymphaeum
Orangery
Pagoda
Parterre
Patio
Pavilion
Pergola
Reflecting pool
Rockery
Scandinavian grillhouse
Scholar's rock
Stepping stones
Stumpery
Sylvan theater
Summerhouse
Terrace
Topiary
Tōrō (lantern)
Trellis
Turf maze
Water feature
Water garden
Woodland garden
Zig-zag bridge
Gallery
See also
Eyecatchers
Garden ornament
Lawn ornament
:Category:Types of garden
:Category:Garden ornaments
Garden features
Garden features
Gardening aids | List of garden features | Engineering | 213 |
49,248,459 | https://en.wikipedia.org/wiki/Transactivation%20domain | The transactivation domain or trans-activating domain (TAD) is a transcription factor scaffold domain which contains binding sites for other proteins such as transcription coregulators. These binding sites are frequently referred to as activation functions (AFs). TADs are named after their amino acid composition. These amino acids are either essential for the activity or simply the most abundant in the TAD. Transactivation by the Gal4 transcription factor is mediated by acidic amino acids, whereas hydrophobic residues in Gcn4 play a similar role. Hence, the TADs in Gal4 and Gcn4 are referred to as acidic or hydrophobic, respectively.
In general we can distinguish four classes of TADs:
acidic domains (also called "acid blobs" or "negative noodles"; rich in D and E amino acids; present in Gal4, Gcn4 and VP16).
glutamine-rich domains (contains multiple repetitions like "QQQXXXQQQ", present in SP1)
proline-rich domains (contains repetitions like "PPPXXXPPP" present in c-jun, AP2 and Oct-2)
isoleucine-rich domains (repetitions "IIXXII", present in NTF-1)
Alternatively, since similar amino acid compositions does not necessarily mean similar activation pathways, TADs can be grouped by the process they stimulate, either initiation or elongation.
Acidic/9aaTAD
The nine-amino-acid transactivation domain (9aaTAD) defines a domain common to a large superfamily of eukaryotic transcription factors represented by Gal4, Oaf1, Leu3, Rtg3, Pho4, Gln3, Gcn4 in yeast, and by p53, NFAT, NF-κB and VP16 in mammals. The definition largely overlaps with an "acidic" family definition. A 9aaTAD prediction tool is available. 9aaTADs tend to have an associated 3-aa hydrophobic (usually Leu-rich) region immediately to their N-terminal side.
9aaTAD transcription factors p53, VP16, MLL, E2A, HSF1, NF-IL6, NFAT1 and NF-κB interact directly with the general coactivators TAF9 and CBP/p300. p53 9aaTADs interact with TAF9, GCN5 and with multiple domains of CBP/p300 (KIX, TAZ1, TAZ2 and IBiD).
The KIX domain of general coactivators Med15(Gal11) interacts with 9aaTAD transcription factors Gal4, Pdr1, Oaf1, Gcn4, VP16, Pho4, Msn2, Ino2 and P201. Positions 1, 3-4, and 7 of the 9aaTAD are the main residues that interact with KIX. Interactions of Gal4, Pdr1 and Gcn4 with Taf9 have been observed. 9aaTAD is a common transactivation domain which recruits multiple general coactivators TAF9, MED15, CBP/p300 and GCN5.
Glutamine-rich
Glutamine (Q)-rich TADs are found in POU2F1 (Oct1), POU2F2 (Oct2), and Sp1 (see also Sp/KLF family). Although such is not the case for every Q-rich TAD, Sp1 is shown to interact with TAF4 (TAFII 130), a part of the TFIID assembly.
See also
DNA-binding protein
Transcription factor
References
External links
9aaTAD prediction tool
Transcription factors
Protein domains | Transactivation domain | Chemistry,Biology | 790 |
35,728,290 | https://en.wikipedia.org/wiki/Algebraic%20semantics%20%28computer%20science%29 | In computer science, algebraic semantics is a form of axiomatic semantics based on algebraic laws for describing and reasoning about program specifications in a formal manner.
Syntax
The syntax of an algebraic specification is formulated in two steps: (1) defining a formal signature of data types and operation symbols, and (2) interpreting the signature through sets and functions.
Definition of a signature
The signature of an algebraic specification defines its formal syntax. The word "signature" is used like the concept of "key signature" in musical notation.
A signature consists of a set of data types, known as sorts, together with a family of sets, each set containing operation symbols (or simply symbols) that relate the sorts.
We use $\Sigma_{s_1 \dots s_n,\, s}$ to denote the set of operation symbols relating the sorts $s_1, \dots, s_n$ to the sort $s$.
For example, for the signature of integer stacks, we define two sorts, namely, $int$ and $stack$, and the following family of operation symbols:
$\Sigma_{\Lambda,\, int} = \{\dots, -2, -1, 0, 1, 2, \dots\}$ (the integer constants)
$\Sigma_{\Lambda,\, stack} = \{new\}$
$\Sigma_{int\ stack,\, stack} = \{push\}$
$\Sigma_{stack,\, stack} = \{pop\}$
$\Sigma_{stack,\, int} = \{top, depth\}$
where $\Lambda$ denotes the empty string.
Set-theoretic interpretation of signature
An algebra interprets the sorts and operation symbols as sets and functions.
Each sort $s$ is interpreted as a set $A_s$, which is called the carrier of the algebra of sort $s$, and each symbol $\sigma$ in $\Sigma_{s_1 \dots s_n,\, s}$ is mapped to a function $f_\sigma : A_{s_1} \times \dots \times A_{s_n} \to A_s$, which is called an operation of the algebra.
With respect to the signature of integer stacks, we interpret the sort $int$ as the set of integers, and interpret the sort $stack$ as the set of integer stacks (finite sequences of integers). We further interpret the family of operation symbols as the following functions: each integer constant denotes the corresponding integer; $new$ denotes the empty stack; $push$ places an integer on top of a stack; $pop$ removes the top element of a stack; $top$ returns the top element of a stack; and $depth$ returns the number of elements in a stack.
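This set-theoretic interpretation can be made concrete in code. The sketch below is one possible algebra for the signature (the argument order of push and the behavior of pop and top on the empty stack are choices made here for illustration, not fixed by the specification):

```python
# One concrete algebra for the integer-stack signature.
# Carrier of sort int: Python ints; carrier of sort stack: tuples of ints.

NOT_FOUND = -1  # assumed designated error value for top(new)

def new():
    """Interpretation of the constant `new`: the empty stack."""
    return ()

def push(n, s):
    """Interpretation of `push`: place integer n on top of stack s."""
    return (n,) + s

def pop(s):
    """Interpretation of `pop`: drop the top element (identity on the empty stack)."""
    return s[1:] if s else ()

def top(s):
    """Interpretation of `top`: the top element, or the error value if empty."""
    return s[0] if s else NOT_FOUND

def depth(s):
    """Interpretation of `depth`: the number of elements on the stack."""
    return len(s)

# The equational axioms hold in this algebra:
assert top(push(7, new())) == 7
assert pop(push(7, new())) == new()
assert depth(push(1, push(2, new()))) == 2
```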
Semantics
Semantics refers to the meaning or behavior. An algebraic specification provides both the meaning and behavior of the object in question.
Equational axioms
The semantics of an algebraic specifications is defined by axioms in the form of conditional equations.
With respect to the signature of integer stacks, we have the following axioms:
For any $n \in int$ and $s \in stack$:
$top(push(n, s)) = n$
$pop(push(n, s)) = s$
$depth(new) = 0$
$depth(push(n, s)) = depth(s) + 1$
$top(new) = err$
where $err$ is a designated integer value indicating "not found".
Mathematical semantics
The mathematical semantics (also known as denotational semantics) of a specification refers to its mathematical meaning.
The mathematical semantics of an algebraic specification is the class of all algebras that satisfy the specification.
In particular, the classic approach by Goguen et al. takes the initial algebra (unique up to isomorphism) as the "most representative" model of the algebraic specification.
Operational semantics
The operational semantics of a specification means how to interpret it as a sequence of computational steps.
We define a ground term as an algebraic expression without variables. The operational semantics of an algebraic specification refers to how ground terms can be transformed using the given equational axioms as left-to-right rewrite rules, until such terms reach their normal forms, where no more rewriting is possible.
Consider the axioms for integer stacks. Let "⇒" denote "rewrites to". For instance:
$depth(push(1, push(2, new)))$ ⇒ $depth(push(2, new)) + 1$ ⇒ $(depth(new) + 1) + 1$ ⇒ $(0 + 1) + 1$ ⇒ $2$
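The rewriting process can be sketched as a small program. The following is an illustrative rewriter, not the article's formal machinery: ground terms are nested tuples, the axioms above are applied left-to-right after subterms are normalized, and every well-sorted ground term reaches a normal form:

```python
# Ground terms: ('new',), ('push', n, s), ('pop', s), ('top', s), ('depth', s);
# plain Python ints serve as normal forms of sort int.

def rewrite(t):
    """Normalize a ground term by applying the stack axioms left-to-right."""
    if isinstance(t, int):
        return t
    op, *args = t
    args = [rewrite(a) for a in args]            # normalize subterms first
    if op == 'top' and args[0][0] == 'push':     # top(push(n, s))  =>  n
        return args[0][1]
    if op == 'pop' and args[0][0] == 'push':     # pop(push(n, s))  =>  s
        return args[0][2]
    if op == 'depth' and args[0][0] == 'new':    # depth(new)  =>  0
        return 0
    if op == 'depth' and args[0][0] == 'push':   # depth(push(n, s)) => depth(s) + 1
        return rewrite(('depth', args[0][2])) + 1
    return (op, *args)                           # no rule applies: a normal form

term = ('depth', ('push', 1, ('push', 2, ('new',))))
assert rewrite(term) == 2                        # unique normal form
```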
Canonical property
An algebraic specification is said to be confluent (also known as Church-Rosser) if the rewriting of any ground term leads to the same normal form. It is said to be terminating if the rewriting of any ground term will lead to a normal form after a finite number of steps. The algebraic specification is said to be canonical (also known as convergent) if it is both confluent and terminating. In other words, it is canonical if the rewriting of any ground term leads to a unique normal form after a finite number of steps.
Given any canonical algebraic specification, the mathematical semantics agrees with the operational semantics.
As a result, canonical algebraic specifications have been widely applied to address program correctness issues. For example, numerous researchers have applied such specifications to the testing of observational equivalence of objects in object-oriented programming. See Chen and Tse as a secondary source that provides a historical review of prominent research from 1981 to 2013.
See also
Algebraic semantics (mathematical logic)
OBJ (programming language)
Joseph Goguen
References
Formal methods
Logic in computer science
Formal specification languages
Programming language semantics | Algebraic semantics (computer science) | Mathematics,Engineering | 770 |
20,740,416 | https://en.wikipedia.org/wiki/Human-centered%20design | Human-centered design (HCD, also human-centred design, as used in ISO standards) is an approach to problem-solving commonly used in process, product, service and system design, management, and engineering frameworks that develops solutions to problems by involving the human perspective in all steps of the problem-solving process. Human involvement typically takes place in initially observing the problem within context, brainstorming, conceptualizing, developing concepts and implementing the solution.
Human-centered design builds upon participatory action research by moving beyond participants' involvement and producing solutions to problems rather than solely documenting them. Initial stages usually revolve around immersion, observation, and contextual framing, in which innovators immerse themselves in the problem and community. Subsequent stages may then focus on community brainstorming, modeling and prototyping, and implementation in community spaces.
Development
Human-centered design has its origins at the intersection of numerous fields including engineering, psychology, anthropology and the arts. As an approach to creative problem-solving in technical and business fields, its origins are often traced to the founding of the Stanford University design program in 1958 by Professor John E. Arnold, who first proposed the idea that engineering design should be human-centered. This work coincided with the rise of creativity techniques and the subsequent design methods movement in the 1960s. Since then, as creative design processes and methods have been increasingly popularized for business purposes, the standardized and defined human-centered design has been mistakenly equated with the vaguely outlined "design thinking".
In Architect or Bee?, Mike Cooley coined the term "human-centered systems" in the context of the transition in his profession from traditional drafting at a drawing board to computer-aided design. Human-centered systems, as used in economics, computing and design, aim to preserve or enhance human skills, in both manual and office work, in environments in which technology tends to undermine the skills that people use in their work.
User participation
The user-oriented framework relies heavily on user participation and user feedback in the planning process. Users are able to provide new perspective and ideas, which can be considered in a new round of improvements and changes. It is said that increased user participation in the design process can garner a more comprehensive understanding of the design issues, due to more contextual and emotional transparency between researcher and participant. A key element of human centered design is applied ethnography, which is a research method adopted from cultural anthropology. This research method requires researchers to be fully immersed in the observation so that implicit details are also recorded.
Rationale for adoption
Even after decades of thought on human-centered design, management and finance systems still assume that "another's liability is one's asset" could be true of porous human bodies, embedded in nature and inseparable from each other. On the contrary, our biological and ecological interconnections ensure that "another's liability is our liability". Sustainable business systems can only emerge if these biological and ecological interconnections are accepted and accounted for.
Using a human-centered approach to design and development has substantial economic and social benefits for users, employers and suppliers. Highly usable systems and products tend to be more successful both technically and commercially. In some areas, such as consumer products, purchasers will pay a premium for well-designed products and systems. Support and help-desk costs are reduced when users can understand and use products without additional assistance. In most countries, employers and suppliers have legal obligations to protect users from risks to their health and safety, and human-centered methods can reduce these risks (e.g. musculoskeletal risks). Systems designed using human-centered methods improve quality, for example, by:
increasing the productivity of users and the operational efficiency of organizations
being easier to understand and use, thus reducing training and support costs
increasing usability for people with a wider range of capabilities and thus increasing accessibility
improving user experience
reducing discomfort and stress
providing a competitive advantage, for example by improving brand image
contributing towards sustainability objectives
Human-centered design may be utilized in multiple fields, including sociological sciences and technology. It has been noted for its ability to consider human dignity, access, and ability roles when developing solutions. Because of this, human-centered design may more fully incorporate culturally sound, human-informed, and appropriate solutions to problems in a variety of fields rather than solely product and technology-based fields. Because human-centered design focuses on the human experience, researchers and designers can address "issues of social justice and inclusion and encourage ethical, reflexive design."
Human-centered design arises from the underlying principles of human factors. The two concepts are closely interconnected: human factors is the study of the attributes of human cognition and behavior that are important for making technology work for people, and it is what allows humans as a species to innovate over time. Human-centered design has been used, for example, to show that the BlackBerry offered less usability than the iPhone, and that important controls on a panel that look too similar are easily confused and may increase the risk of human error.
An important distinction between human-centered design and other forms of design is that human-centered design is not just about aesthetics, nor is it always about designing interfaces. It may address controls in the world, tasks in the world, hardware, decision-making, or cognition. For instance, if a nurse is too tired after a long shift, they might confuse the pumps through which a bag of penicillin is administered to a patient. In this case, human-centered design would encompass a task redesign, a possible institutional policy redesign, and an equipment redesign.
Typically, human-centered design is more focused on "methodologies and techniques for interacting with people in such a manner as to facilitate the detection of meanings, desires and needs, either by verbal or non-verbal means." In contrast, user-centered design is another approach and framework of processes which considers the human role in product use, but focuses largely on the production of interactive technology designed around the user's physical attributes rather than social problem-solving.
Human-centered design approach in health
In the context of health-seeking behaviors, human-centered design can be used to understand why people do or do not seek out health services, even when those services are available and affordable, making it a powerful tool for improving health-seeking behaviors. This understanding can then be used to develop interventions that address the barriers and promote desired behaviors. Demand-related challenges associated with the acceptability, responsiveness, and quality of services can be addressed by working directly with users to understand their needs and perspectives; in this way, HCD can help in designing interventions that are more likely to be effective.
Critiques
Human-centered design has been both lauded and criticized for its ability to actively solve problems with affected communities. Criticisms include its inability to push the boundaries of available technology, because it tailors solutions to present-day demands rather than focusing on possible future solutions. In addition, human-centered design often considers context but does not offer tailored approaches for very specific groups of people. New research on innovative approaches includes youth-centered health design, which treats youth as the central users, with particular needs and limitations not always addressed by human-centered design approaches. Nevertheless, human-centered design that does not reflect very specific groups of users and their needs is human-centered design poorly executed, since the principles of human-system interaction require the reflection of those specified needs.
While users are very important for some types of innovation (namely incremental innovation), focusing too much on the user may result in an outdated or no longer necessary product or service. This is because insights gained from studying the user today reflect today's users and the environment they live in today. If a solution will be available only two or three years from now, users may have developed new preferences, wants and needs by then.
Modern advances in human-centered design
Human-centered design with artificial intelligence
Human-Centered AI (HCAI) is a methodical approach to AI system design that prioritizes human values and requirements. This method places a strong emphasis on boosting human self-efficacy, encouraging innovation, guaranteeing accountability, and promoting social interaction. By putting these human goals first, HCAI also tackles important concerns like privacy, security, environmental preservation, social justice, and human rights. This represents a dramatic change from an algorithmic approach to a human-centered system design, which has been compared to a second Copernican Revolution.
HCAI introduces a two-dimensional framework that demonstrates the possibility of combining high levels of human control with high levels of automation. This framework suggests a move away from viewing AI as autonomous teammates, instead positioning AI as powerful tools and tele-operated devices that empower users.
Furthermore, HCAI proposes a three-level governance structure to enhance the reliability and trustworthiness of AI systems. At the first level, software engineering teams are encouraged to develop robust and dependable systems. At the second level, managers are urged to cultivate a safety culture across their organizations. At the third level, industry-wide certification can help establish standards that promote trustworthy HCAI systems.
These concepts are designed to be dynamic, inviting challenge, refinement, and extension to accommodate new technologies. They aim to reframe design discussions for AI products and services, offering an opportunity to restart and reshape these conversations. The ultimate goal is to deliver greater benefits to individuals, families, communities, businesses, and society, ensuring that AI developments align with human values and societal goals.
Integration of human-centered design and community-based participatory research
By joining two people-centered approaches, Human-Centered Design (HCD) and Community-Based Participatory Research (CBPR) offer a fresh way to tackle challenging real-world issues. While CBPR has been used in academic and community partnerships to address health inequities through social action and empowerment, HCD has historically been used in the business sector to guide the creation of products and services. Although the public sector has only recently started using HCD concepts to inform public policy, more research is still needed to fully understand its cycle and how it might be strategically applied to health promotion. By combining CBPR's emphasis on community trust and collaboration with HCD's emphasis on user-centric design, this integration provides a complementary approach. The potential of these approaches to improve public health outcomes is demonstrated by CBPR initiatives such as those aiming to lower the spread of STIs and improve handwashing among farmworkers. The combined strategy can result in more lasting and successful health interventions by addressing pertinent concerns, establishing partnerships, and involving community members.
Human-Centered Design in SEIPS 3.0
The Systems Engineering Initiative for Patient Safety (SEIPS) models integrate Human Factors and Ergonomics (HFE) to improve quality and safety in healthcare. These models are based on a human-centered design approach, which gives top priority to the wants and experiences of patients and healthcare practitioners when designing systems. SEIPS 3.0 builds upon this by extending the "process" component to handle the intricacies of contemporary healthcare delivery.
As healthcare becomes more dispersed across different locations and over time, the SEIPS 3.0 model introduces the idea of the patient journey. This journey-centric approach takes a comprehensive view of patients' experiences over time by mapping their contacts with various care venues. By emphasizing the patient journey, SEIPS 3.0 highlights how crucial it is to create systems that can adapt to patients' changing demands in order to provide seamless, safe, and supportive care.
To implement human-centered design in SEIPS 3.0, HFE professionals must take into account a variety of viewpoints and encourage sincere involvement from all parties, including patients, caregivers, and medical professionals. Capturing the intricacies of patient experiences and improving interactions across various healthcare settings calls for creative techniques. By putting people first, SEIPS 3.0 seeks to develop healthcare systems that not only prevent harm but also improve the general happiness and well-being of both patients and caregivers.
See also
User-centered design
Design thinking
Human-Centered Artificial Intelligence
Humanistic economics
References
External links
International Standards
ISO 13407:1999 Human-centred design processes for interactive systems – Now Withdrawn
ISO 9241-210:2010 Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems – Now Withdrawn
ISO 9241-210:2019 Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems – current edition, published 2019
Design | Human-centered design | Engineering | 2,601 |
55,708,307 | https://en.wikipedia.org/wiki/Clean%20Boating%20Act%20of%202008 | The Clean Boating Act of 2008 (CBA) is a United States law that requires recreational vessels to implement best management practices to control pollution discharges. The law exempts these vessels from requirements to obtain a discharge permit under the Clean Water Act (i.e. they are exempt from coverage under the EPA Vessels General Permit).
The CBA amended the Clean Water Act (CWA) and directs the U.S. Environmental Protection Agency (EPA) to develop performance standard regulations. The regulations will not apply to sewage discharges from recreational vessels, which are already regulated under the CWA. (See Marine sanitation device.) The CBA designated the U.S. Coast Guard as the enforcing agency.
In 2011 EPA conducted public meetings to obtain public comment about developing CBA regulations. As of 2020, EPA has not announced a schedule for issuing the regulations.
See also
Regulation of ship pollution in the United States
References
Acts of the 110th United States Congress
Environmental impact of shipping
Maritime history of the United States
Ocean pollution
Pollution in the United States
Water pollution in the United States
United States admiralty law
United States federal environmental legislation | Clean Boating Act of 2008 | Chemistry,Environmental_science | 226 |
1,813,588 | https://en.wikipedia.org/wiki/Digital%20distribution | Digital distribution, also referred to as content delivery, online distribution, or electronic software distribution, among others, is the delivery or distribution of digital media content such as audio, video, e-books, video games, and other software.
The term is generally used to describe distribution over an online delivery medium, such as the Internet, thus bypassing physical distribution methods, such as paper, optical discs, and VHS videocassettes. The term online distribution is typically applied to freestanding products; downloadable add-ons for other products are more commonly known as downloadable content. As network bandwidth capabilities advanced, online distribution became prominent in the 21st century, with platforms such as Amazon Video and Netflix's streaming service, the latter starting in 2007.
Content distributed online may be streamed or downloaded, and often consists of books, films and television programs, music, software, and video games. Streaming involves downloading and using content at a user's request, or "on-demand", rather than allowing a user to store it permanently. In contrast, fully downloading content to a hard drive or other forms of storage media may allow offline access in the future.
Specialist networks known as content delivery networks help distribute content over the Internet by ensuring both high availability and high performance. Alternative technologies for content delivery include peer-to-peer file sharing technologies. Alternatively, content delivery platforms create and syndicate content remotely, acting like hosted content management systems.
Unrelated to the above, the term "digital distribution" is also used in film distribution to describe the distribution of content through physical digital media, as opposed to distribution on analog media such as photographic film and magnetic tape (see: digital cinema).
Impact on traditional retail
The rise of online distribution has created controversy around traditional business models, posing challenges as well as new opportunities for traditional retailers and publishers. Online distribution affects all of the traditional media markets, including music, press, and broadcasting. In Britain, the iPlayer, a software application for streaming television and radio, accounts for 5% of all bandwidth used in the United Kingdom.
Music
The move towards online distribution led to a dip in sales in the 2000s; CD sales were nearly cut in half around this time. One example of online distribution taking its toll on a retailer is the Canadian music chain Sam the Record Man, which blamed online distribution for the closure of a number of its traditional retail venues in 2007–08. One main reason sales took such a hit was that unlicensed downloads of music were very accessible. With copyright infringement affecting sales, the music industry realized it needed to change its business model to keep up with rapidly changing technology. The industry's move into the online space succeeded for several reasons. Lossy audio compression formats such as MP3 meant that a typical 3-minute song, which might require 30–40 megabytes of storage on a CD, could be brought down to about 3 MB without any serious loss of quality; lossless FLAC files, in comparison, can be up to six times larger than an MP3. The smaller file size allows much faster transfer over the Internet.
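As a rough sanity check on the figures above, the following sketch computes the size of a 3-minute song both as uncompressed CD audio and as a 128 kbit/s MP3. The sample rate, bit depth, and bitrate are common defaults assumed here for illustration; they are not taken from this article's sources.

```python
# Back-of-the-envelope audio sizes; all parameters are assumed defaults.

def audio_size_mb(seconds: float, bits_per_second: float) -> float:
    """Return the audio stream size in megabytes (1 MB = 10**6 bytes)."""
    return seconds * bits_per_second / 8 / 1_000_000

song_seconds = 3 * 60                # a typical 3-minute song
cd_bps = 44_100 * 16 * 2             # CD audio: 44.1 kHz, 16-bit, stereo
mp3_bps = 128_000                    # a common lossy MP3 bitrate

print(f"CD audio: {audio_size_mb(song_seconds, cd_bps):.1f} MB")   # ~31.8 MB
print(f"MP3:      {audio_size_mb(song_seconds, mp3_bps):.1f} MB")  # ~2.9 MB
```

The roughly 30 MB and 3 MB results line up with the tenfold reduction described above.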
The transition into the online space has boosted sales and profit for some artists. It has also lowered expenses, such as coordination and distribution costs, and opened the possibility of redistributing total profits. These lower costs have aided new artists in breaking onto the scene and gaining recognition. In the past, some emerging artists struggled to find a way to market themselves and compete in the various distribution channels. The Internet may give artists more control over their music in terms of ownership, rights, creative process, pricing, and more. In addition to providing global users with easier access to content, online stores allow users to choose the songs they want instead of having to purchase an entire album from which there may be only one or two titles the buyer enjoys.
The number of downloaded single tracks rose from 160 million in 2004 to 795 million in 2006, boosting revenue from US$397 million to US$2 billion. Downloading peaked in the US in 2012, after which it declined due to the rise of music streaming services. In 2017, physical formats overtook downloads again for the first time in six years, but despite the vinyl revival and CDs holding their own, physical formats accounted for only 11% of US industry revenue as of 2023, while streaming services dominated with 84%.
Videos
Many traditional network television shows, movies, and other video content are now available online, either from the content owner directly or from third-party services. YouTube, Netflix, Hulu, Vudu, Amazon Prime Video, DirecTV, SlingTV and other Internet-based video services allow content owners to let users access their content on computers, smartphones, tablets, or via appliances such as video game consoles, set-top boxes, or smart TVs.
Many film distributors also include a digital copy, also called Digital HD, with Blu-ray, Ultra HD Blu-ray, 3D Blu-ray, or DVD releases.
Books
Some companies, such as Bookmasters Distribution, which invested US$4.5 million in upgrading its equipment and operating systems, have had to direct capital toward keeping up with the changes in technology. The shift of books to digital form has given users the ability to read on handheld digital book readers. One benefit of electronic book readers is that they allow users to access additional content via hypertext links. They also make books portable, since a single reader can hold many books, depending on its storage capacity. Companies that have been able to adapt and capitalize on the digital media market have seen sales surge. The vice president of Perseus Books Group stated that since shifting to electronic books (e-books), the company saw sales rise by 68%. Independent Publishers Group experienced a sales boost of 23% in the first quarter of 2012 alone.
Tor Books, a major publisher of science fiction and fantasy books, started to sell e-books DRM-free in July 2012. One year later, the publisher stated that it would keep this model, as removing DRM had not hurt its digital e-book business. Smaller e-book publishers such as O'Reilly Media, Carina Press and Baen Books had already forgone DRM previously.
Video games
Online distribution is changing the structure of the video game industry. Gabe Newell, creator of the digital distribution service Steam, has outlined the advantages of digital distribution over physical retail distribution.
Since the 2000s, an increasing number of smaller and niche titles, such as remakes of classic games, have become available and commercially successful. The new possibilities of digital distribution also stimulated the creation of game titles from very small producers, such as independent game developers and modders (e.g. Garry's Mod), which were previously not commercially feasible.
The years after 2004 saw the rise of many digital distribution services on the PC, such as Amazon Services, Desura, GameStop, Games for Windows – Live, Impulse, Steam, Origin, Battle.net, Direct2Drive, GOG.com, Epic Games Store and GamersGate. The services differ significantly in what they offer: while most of these digital distributors do not allow the reselling of bought games, Green Man Gaming does. Another example is GOG.com, which has a strict DRM-free policy, while most other services allow various (stricter or less strict) forms of DRM.
Digital distribution is also more environmentally friendly than physical distribution. Optical discs are made of polycarbonate plastic and aluminum; the creation of 30 of them requires 300 cubic feet of natural gas, two cups of oil and 24 gallons of water. The protective cases for optical discs are made from polyvinyl chloride (PVC), whose production involves the known carcinogen vinyl chloride.
Challenges
A general issue is the large number of incompatible data formats in which content is delivered, possibly restricting the devices that may be used, or making data conversion necessary. Streaming services can have several drawbacks: requiring a constant Internet connection to use content; the restriction of some content to never be stored locally; the restriction of content from being transferred to physical media; and the enabling of greater censorship at the discretion of owners of content, infrastructure, and consumer devices.
Decades after the launch of the World Wide Web, in 2019 businesses were still adapting to the evolving world of distributing content digitally—even regarding the definition and understanding of basic terminology.
See also
App store
Digital ecosystem
Online shopping
Cloud gaming
Comparison of digital music stores
Content delivery network
Digital distribution of video games
Ebook
Electronic publishing
E-commerce
Film distribution
Film distributor
Internet pornography
List of mobile app distribution platforms
Streaming media
Uberisation
Video on demand
References
Distribution
Film distribution
Non-store retailing
Software delivery methods
24,955,243 | https://en.wikipedia.org/wiki/OpenWire%20%28library%29 | OpenWire is an open-source dataflow programming library that extends the functionality of Embarcadero Delphi and C++ Builder by providing pin type component properties. The properties can be connected to each other. The connections can be used to deliver data or state information between the pins, simulating the functionality of LabVIEW, Agilent VEE and Simulink. OpenWire is available for Visual Component Library (VCL) and FireMonkey (FMX).
History
The project started in 1997 as an attempt at the visual design of text parsers. It was later used for designing signal processing libraries and was expanded to support any data type.
Pins
Pins form the connections between the components.
OpenWire defines four types of pins; their connection rules are sketched in the example after this list:
SourcePin usually provides data. Can connect to one or more SinkPins and to one StatePin.
SinkPin usually receives data. Can be connected to one SourcePin.
MultiSinkPin usually receives data. Can be connected to one or more SourcePins.
StatePin is usually used to share state between components. Can be connected to one or more StatePins or SinkPins, and to one SourcePin.
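The following is a minimal Python model of the connection rules in the list above. It is an illustration only: OpenWire itself is a Delphi/C++Builder library, so these class names and the `connect` helper are hypothetical rather than its actual API.

```python
# Hypothetical model of OpenWire's pin connection rules.

class SourcePin:
    def __init__(self):
        self.sinks = []        # may feed any number of SinkPins
        self.state = None      # and at most one StatePin

class SinkPin:
    def __init__(self):
        self.source = None     # may be driven by at most one SourcePin

class MultiSinkPin:
    def __init__(self):
        self.sources = []      # may be driven by any number of SourcePins

class StatePin:
    def __init__(self):
        self.peers = []        # other StatePins or SinkPins sharing state
        self.source = None     # and at most one SourcePin

def connect(source: SourcePin, sink: SinkPin) -> None:
    """Enforce the one-SourcePin-per-SinkPin rule from the list above."""
    if sink.source is not None:
        raise ValueError("a SinkPin can be connected to only one SourcePin")
    source.sinks.append(sink)
    sink.source = source

src, snk = SourcePin(), SinkPin()
connect(src, snk)                      # first connection succeeds
try:
    connect(SourcePin(), snk)          # a second source is rejected
except ValueError as err:
    print(err)
```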
Pin lists
Pin lists can contain and group pins.
OpenWire defines 2 types of pin lists:
PinList contains pins but is not responsible for creating or destroying them.
PinListOwner contains pins and is responsible for creating and destroying them.
Data types
Two pins in OpenWire can connect and exchange data only if they support compatible data types. Each pin can support one or more data types, and each data type is identified by a unique GUID.
Format converters
The latest version of OpenWire supports automatic data conversion. If two pins cannot connect directly due to incompatible data types, a data format converter can be used automatically to convert the data between the pins. Developers can create and register format converters associated with different data types, as in the sketch below.
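A converter lookup of this kind might work as in the following sketch, again modeled in Python for illustration; the registry layout and function names are assumptions, not OpenWire's actual interface.

```python
import uuid

# Each data type is identified by a unique GUID (created ad hoc here).
INT_TYPE = uuid.uuid4()
FLOAT_TYPE = uuid.uuid4()

# Registry mapping (source type, sink type) -> converter function.
CONVERTERS = {
    (INT_TYPE, FLOAT_TYPE): float,
}

def deliver(source_type, sink_type, value):
    """Pass the value through directly when the types match; otherwise
    fall back to a registered format converter, if one exists."""
    if source_type == sink_type:
        return value
    converter = CONVERTERS.get((source_type, sink_type))
    if converter is None:
        raise TypeError("incompatible pins and no registered converter")
    return converter(value)

print(deliver(INT_TYPE, FLOAT_TYPE, 42))   # -> 42.0 via the converter
```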
Multi-threading
OpenWire is designed to be thread-safe and is well suited for multi-threaded VCL and FireMonkey component development; the sketch below illustrates the kind of serialization this implies.
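To show what thread safety means for a dataflow library of this kind, the hypothetical Python sketch below serializes data delivery from a source pin so that concurrent producers cannot interleave partial updates; the real OpenWire implementation in Delphi is not shown here.

```python
import threading

class Sink:
    def __init__(self):
        self.values = []

    def receive(self, value):
        self.values.append(value)

class ThreadSafeSourcePin:
    def __init__(self):
        self._lock = threading.Lock()
        self.sinks = []

    def send(self, value):
        # Serialize deliveries so that data from concurrent threads
        # reaches every connected sink in a consistent order.
        with self._lock:
            for sink in self.sinks:
                sink.receive(value)

pin, sink = ThreadSafeSourcePin(), Sink()
pin.sinks.append(sink)
threads = [threading.Thread(target=pin.send, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sink.values)   # all four values delivered, none lost or interleaved
```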
Version history
The following is a rough outline of product release information.
Future development
A graphical OpenWire editor is under development. The latest version of the editor is available from the OpenWire Homepage.
References
External links
Free computer libraries
Free software programmed in Delphi
Pascal (programming language) libraries
Computer libraries
Pascal (programming language) software | OpenWire (library) | Technology | 480 |
1,966,125 | https://en.wikipedia.org/wiki/Christer%20Fuglesang | Arne Christer Fuglesang (born 18 March 1957) is a Swedish physicist and an ESA astronaut. He was first launched aboard the STS-116 Space Shuttle mission on 10 December 2006, making him the first Swedish citizen in space.
Married with three children, he was a Fellow at CERN and taught mathematics at the Royal Institute of Technology before being selected to join the European Astronaut Corps in 1992. He has participated in two Space Shuttle missions and five spacewalks, and is the first person outside of the United States or Russian space programs to participate in more than three spacewalks.
Early life and education
Fuglesang was born in Stockholm to a Swedish mother and a Norwegian father, who became a Swedish citizen shortly before Fuglesang's birth. Fuglesang graduated from the Bromma Gymnasium, Stockholm in 1975, earned a master's degree in engineering physics from the Royal Institute of Technology (KTH), in Stockholm in 1981, and received a doctorate in experimental particle physics from Stockholm University in 1987. He became an associate professor (docent) of particle physics at Stockholm University in 1991.
He married Elisabeth (Lisa) Fuglesang (née Walldie) in 1983, whom he met at the Royal Institute of Technology (KTH). They have three children.
Fuglesang is a prominent member of the Swedish skeptics association Vetenskap och Folkbildning and identifies strongly with skeptics and atheists.
In 2012, Fuglesang received the Royal Institute of Technology 2012 Alumni of the Year award.
Career
As a graduate student, Fuglesang worked at CERN in Geneva on the UA5 experiment, which studied proton–antiproton collisions. In 1988 he became a Fellow of CERN, where he worked on the CPLEAR experiment studying the subtle CP-violation of kaon particles. After a year he became a Senior Fellow and head of the particle identification subdetector. In November 1990, Fuglesang obtained a position at the Manne Siegbahn Institute of Physics, Stockholm, but remained stationed at CERN for another year working towards the new Large Hadron Collider project. Since 1990, when stationed in Sweden, Fuglesang taught mathematics at the Royal Institute of Technology.
In May 1992, Fuglesang was selected to join the European Astronaut Corps of the European Space Agency (ESA) based at the European Astronaut Centre (EAC) in Cologne, Germany. In 1992 he attended an introductory training programme at EAC and a four-week training program at the Yuri Gagarin Cosmonaut Training Center (TsPK) in Star City, Russia, with a view to future ESA–Russian collaboration on the Mir Space Station. In July 1993, he completed the basic astronaut training course at EAC.
In May 1993, Fuglesang and fellow ESA astronaut Thomas Reiter were selected for the Euromir 95 mission and commenced training at TsPK (Moscow) in preparation for their onboard engineer tasks, extra-vehicular activities (spacewalks) and operation of the Soyuz spacecraft. The Euromir 95 experiment training was organized and mainly carried out at EAC.
On 17 March 1995 he was selected as a member of Crew 2, the backup crew for the Euromir 95 mission, joining Gennadi Manakov and Pavel Vinogradov. During the mission, which lasted 179 days, Fuglesang was the prime crew interface coordinator. From the Russian Mission Control Center (TsUP) in Korolyov, he was the main contact with ESA Astronaut, Thomas Reiter, on Mir, and acted as coordinator between Mir and the Euromir 95 Payloads Operations Control Center, located in Oberpfaffenhofen, Germany, and project management. Between March and June 1996, he underwent specialized training in TsPK on Soyuz operations for de-docking, atmospheric re-entry and landing.
In 1996, ESA selected Fuglesang to train as a mission specialist for NASA Space Shuttle missions. He joined the Mission Specialist Class at NASA Johnson Space Center, Houston, in August 1996, and qualified for flight assignment as a mission specialist in April 1998.
From May to October 1998, he resumed training at TsPK on Soyuz-TM spacecraft operations for de-docking, atmospheric re-entry and landing. He was awarded the Russian Soyuz Return Commander certificate, which qualifies him to command a three-person Soyuz capsule on its return from space.
In October 1998, he returned to NASA and was assigned technical duties in the Station Operations System Branch of the NASA Astronaut Office, working on Russian Soyuz and Progress transfer vehicles. Later he worked as prime Increment Crew Support Astronaut for the second International Space Station expedition crew. Fuglesang also continued with some scientific work and was involved with the SilEye experiment which investigated light flashes in astronauts' eyes on Mir between 1995 and 1999. This work is continuing on the International Space Station (ISS) with the Alteino and ALTEA apparatuses. He has also initiated the DESIRE project to simulate and estimate the radiation environment inside ISS.
Missions
STS-116
Fuglesang's first spaceflight mission was as a mission specialist on STS-116 in 2006, an assembly and crew-rotation mission to the International Space Station. This flight was called the Celsius Mission by ESA in recognition of Anders Celsius, the Swedish 18th-century astronomer who invented the Celsius temperature scale.
Spacewalks during STS-116 Mission
First spacewalk, with the primary task of installing the P5 truss segment; performed together with astronaut Robert Curbeam as EV1, 12 December.
EV2 during the second spacewalk, which included the first part of rewiring the power system of the ISS (channels 2 and 3); also performed together with astronaut Robert Curbeam as EV1, 14 December.
An extra spacewalk (EVA4) that successfully fixed a problem with retracting a solar panel; also performed together with astronaut Robert Curbeam as EV1, 18 December. EVA duration: 6 h 38 min.
Total EVA time during STS-116: 18 hours and 15 minutes.
'Maximum Time Aloft'
Fuglesang, once a Swedish national frisbee champion, held the national title in "maximum time aloft" in 1978 and subsequently competed in the 1981 World Frisbee Championship. Fuglesang took one of his personal frisbees to the International Space Station. On 15 December 2006, he set a new "world record" for time aloft by free-floating a spinning frisbee for 20 seconds in the microgravity environment of the ISS. It was done during a live broadcast interview with a space exhibition in Stockholm, Sweden. The record attempt was recognised by the sport's governing body, the World Flying Disc Federation, and the record was accepted; but since it was set "outside the Earth's atmosphere", it was recorded as a "galactic record".
STS-128
On 15 July 2008 Fuglesang was selected as a mission specialist of the STS-128, launched on 28–29 August 2009. STS-128 (ISS assembly mission "17A") delivered equipment allowing the ISS crew to be expanded from three to six astronauts.
During STS-128, Fuglesang became the first spacewalker from outside the Russian and United States space programs to perform more than three spacewalks. With the completion of two more EVAs, he has performed five spacewalks in total.
EVA
Total EVA time from his five spacewalks adds up to 31 hours and 54 minutes, placing Fuglesang 29th in history as of 14 September 2009.
Awards and honors
Honorary Doctorate from Umeå University, Sweden, 1999
Honorary Doctorate from the University of Nova Gorica, Slovenia, 2007
NASA Space Flight Medal, 2007
H. M. The King's Medal (Stockholm, 2007).
NASA Exceptional Service Medal, 2010
References
External links
ESA profile page
NASA biography
Spacefacts biography
1957 births
Living people
People associated with CERN
Particle physicists
Swedish physicists
Swedish astronauts
ESA astronauts
KTH Royal Institute of Technology alumni
Stockholm University alumni
Swedish atheists
Swedish skeptics
Swedish male bloggers
Swedish people of Norwegian descent
European amateur radio operators
Writers from Stockholm
Space Shuttle program astronauts
Spacewalkers | Christer Fuglesang | Physics | 1,661 |
11,816,342 | https://en.wikipedia.org/wiki/Cercospora%20solani-tuberosi | Cercospora solani-tuberosi is a fungal plant pathogen.
References
solani-tuberosi
Fungal plant pathogens and diseases
Fungus species | Cercospora solani-tuberosi | Biology | 33 |
78,135,250 | https://en.wikipedia.org/wiki/Elisa%20Facio | Elisa Facio became Uruguay's Minister of Industry, Energy and Mining of Uruguay in November 2023.
Life
Facio attended the University of the Republic, where she studied computer engineering. After graduating, she took a master's degree at the same university and became a computer engineer.
She was the general director of the Ministry of Industry, Energy, and Mining before becoming minister in November 2023. She joined President Luis Alberto Lacalle Pou's Cabinet, replacing Omar Paganini, who was promoted to foreign minister. The cabinet reshuffle was required because of the scandal associated with the sudden resignation of Francisco Bustillo.
Facio's responsibilities are reflected in her job title, but she is also responsible for intellectual property and the medical use of nuclear technology. She has spoken about the opportunities to develop alternative energy sources, including wind, solar, and hydrogen. Uruguay is working in these areas but has exploited only a small fraction of what is possible, held back by a lack of finance; she sees other countries, including those in Europe, as potential investors. In November 2023 she went to China.
In May 2024 she was part of a large delegation from Uruguay at the World Hydrogen Summit in Rotterdam. She explained her ambitions in support of decarbonisation by 2050. An Innovation Hub program had been established to deliver these ambitions.
In October 2024 she declared amethyst to be Uruguay's national stone. She noted that amethyst exports earned the country $60 million and that the industry created 2,000 jobs.
References
Living people
Uruguayan politicians
Uruguayan computer scientists
Software engineers
Government ministers of Uruguay
Year of birth missing (living people) | Elisa Facio | Engineering | 350 |