| id (int64: 39–79M) | url (string: 31–227 chars) | text (string: 6–334k chars) | source (string: 1–150 chars, nullable) | categories (list: 1–6 items) | token_count (int64: 3–71.8k) | subcategories (list: 0–30 items) |
|---|---|---|---|---|---|---|
1,561,945 | https://en.wikipedia.org/wiki/Heesch%27s%20problem | In geometry, the Heesch number of a shape is the maximum number of layers of copies of the same shape that can surround it with no overlaps and no gaps. Heesch's problem is the problem of determining the set of numbers that can be Heesch numbers. Both are named for geometer Heinrich Heesch, who found a tile with Heesch number 1 (the union of a square, equilateral triangle, and 30-60-90 right triangle) and proposed the more general problem.
For example, a square may be surrounded by infinitely many layers of congruent squares in the square tiling, while a circle cannot be surrounded by even a single layer of congruent circles without leaving some gaps. The Heesch number of the square is infinite and the Heesch number of the circle is zero. In more complicated examples, such as the one shown in the illustration, a polygonal tile can be surrounded by several layers, but not by infinitely many; the maximum number of layers is the tile's Heesch number.
Formal definitions
A tessellation of the plane is a partition of the plane into smaller regions called tiles. The zeroth corona of a tile is defined as the tile itself, and for k > 0 the kth corona is the set of tiles sharing a boundary point with the (k − 1)th corona. The Heesch number of a figure S is the maximum value k such that there exists a tiling of the plane, and a tile t within that tiling, for which all tiles in the zeroth through kth coronas of t are congruent to S. In some work on this problem this definition is modified to additionally require that the union of the zeroth through kth coronas of t be a simply connected region.
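Writing $C_k(t)$ for the $k$th corona and $T$ for the tiling, the definition above can be restated compactly (a notational restatement only, introducing no new content):

\[ C_0(t) = \{t\}, \qquad C_k(t) = \{\, t' \in T : t' \text{ shares a boundary point with some tile of } C_{k-1}(t) \,\}, \]

so that the Heesch number of $S$ is the maximum $k$ for which some tiling $T$ and tile $t$ make every tile of $C_0(t) \cup \cdots \cup C_k(t)$ congruent to $S$.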
If there is no upper bound on the number of layers by which a tile may be surrounded, its Heesch number is said to be infinite. In this case, an argument based on Kőnig's lemma can be used to show that there exists a tessellation of the whole plane by congruent copies of the tile.
Example
Consider the non-convex polygon P shown in the figure to the right, which is formed from a regular hexagon by adding projections on two of its sides and matching indentations on three sides. The figure shows a tessellation consisting of 61 copies of P, one large infinite region, and four small diamond-shaped polygons within the fourth layer. The first through fourth coronas of the central polygon consist entirely of congruent copies of P, so its Heesch number is at least four. One cannot rearrange the copies of the polygon in this figure to avoid creating the small diamond-shaped polygons, because the 61 copies of P have too many indentations relative to the number of projections that could fill them. By formalizing this argument, one can prove that the Heesch number of P is exactly four. According to the modified definition that requires that coronas be simply connected, the Heesch number is three. This example was discovered by Robert Ammann.
Known results
It is unknown whether all positive integers can be Heesch numbers. The first examples of polygons with Heesch number 2 were provided by Anne Fontaine, who showed that infinitely many polyominoes have this property. Casey Mann has constructed a family of tiles, each with Heesch number 5; Mann's tiles retain Heesch number 5 even under the restricted definition in which each corona must be simply connected. In 2020, Bojan Bašić found a figure with Heesch number 6, the highest finite Heesch number known to date.
For the corresponding problem in the hyperbolic plane, the Heesch number may be arbitrarily large.
References
Sources
Further reading
External links
Numberphile video about Heesch Numbers.
Tessellation
Unsolved problems in geometry | Heesch's problem | [
"Physics",
"Mathematics"
] | 791 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Tessellation",
"Euclidean plane geometry",
"Unsolved problems in geometry",
"Planes (geometry)",
"Mathematical problems",
"Symmetry"
] |
1,561,997 | https://en.wikipedia.org/wiki/Contorsion%20tensor | The contorsion tensor in differential geometry is the difference between a connection with and without torsion in it. It commonly appears in the study of spin connections. Thus, for example, a vielbein together with a spin connection, when subject to the condition of vanishing torsion, gives a description of Einstein gravity. For supersymmetry, the same constraint, of vanishing torsion, gives (the field equations of) eleven-dimensional supergravity. That is, the contorsion tensor, along with the connection, becomes one of the dynamical objects of the theory, demoting the metric to a secondary, derived role.
The elimination of torsion in a connection is referred to as the absorption of torsion, and is one of the steps of Cartan's equivalence method for establishing the equivalence of geometric structures.
Definition in metric geometry
In metric geometry, the contorsion tensor expresses the difference between a metric-compatible affine connection with Christoffel symbols $\Gamma^\lambda{}_{\mu\nu}$ and the unique torsion-free Levi-Civita connection for the same metric.
The contorsion tensor $K^\lambda{}_{\mu\nu}$ is defined in terms of the torsion tensor $T^\lambda{}_{\mu\nu}$ as (up to a sign, see below)

\[ K^\lambda{}_{\mu\nu} = \tfrac{1}{2}\left( T^\lambda{}_{\mu\nu} + T_\mu{}^\lambda{}_\nu + T_\nu{}^\lambda{}_\mu \right), \]

where the indices are raised and lowered with respect to the metric:

\[ T_\mu{}^\lambda{}_\nu \equiv g_{\mu\sigma}\, g^{\lambda\rho}\, T^\sigma{}_{\rho\nu}. \]
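As a quick consistency check, valid for either overall sign convention and using only the antisymmetry of the torsion tensor in its last two indices, the part of the contorsion tensor that is antisymmetric in its two lower indices reproduces the torsion:

\[ K^\lambda{}_{\mu\nu} - K^\lambda{}_{\nu\mu} = \tfrac{1}{2}\left( T^\lambda{}_{\mu\nu} - T^\lambda{}_{\nu\mu} \right) = T^\lambda{}_{\mu\nu}, \]

since the mixed terms $T_\mu{}^\lambda{}_\nu$ and $T_\nu{}^\lambda{}_\mu$ cancel pairwise under the exchange $\mu \leftrightarrow \nu$.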
The non-obvious sum in the definition of the contorsion tensor is due to the sum–sum difference that enforces metric compatibility. The contorsion tensor is antisymmetric in the first two indices, whilst the torsion tensor itself is antisymmetric in its last two indices; this is shown below.
The full metric-compatible affine connection can be written as

\[ \Gamma^\lambda{}_{\mu\nu} = \overline{\Gamma}^\lambda{}_{\mu\nu} + K^\lambda{}_{\mu\nu}, \]

where $\overline{\Gamma}^\lambda{}_{\mu\nu}$ is the torsion-free Levi-Civita connection,

\[ \overline{\Gamma}^\lambda{}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\rho}\left( \partial_\mu g_{\rho\nu} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} \right). \]
Definition in affine geometry
In affine geometry, one does not have a metric nor a metric connection, and so one is not free to raise and lower indices on demand. One can still achieve a similar effect by making use of the solder form, allowing the bundle to be related to what is happening on its base space. This is an explicitly geometric viewpoint, with tensors now being geometric objects in the vertical and horizontal bundles of a fiber bundle, instead of being indexed algebraic objects defined only on the base space. In this case, one may construct a contorsion tensor, living as a one-form on the tangent bundle.
Recall that the torsion of a connection $\omega$ can be expressed as

\[ \Theta_\omega = D\theta = d\theta + \omega \wedge \theta, \]

where $\theta$ is the solder form (tautological one-form). The subscript $\omega$ serves only as a reminder that this torsion tensor was obtained from the connection.
By analogy to the lowering of the index on the torsion tensor in the section above, one can perform a similar operation with the solder form, and construct a tensor
Here, $\langle\cdot,\cdot\rangle$ denotes the scalar product. This tensor can be expressed as
The quantity $K$ is the contorsion form and is exactly what is needed to add to an arbitrary connection to get the torsion-free Levi-Civita connection. That is, given an Ehresmann connection $\omega$, the connection $\omega + K$ is torsion-free.
The vanishing of the torsion is then equivalent to having

\[ \Theta_{\omega + K} = 0, \]

or, written out with the structure equation,

\[ d\theta + (\omega + K) \wedge \theta = 0. \]
This can be viewed as a field equation relating the dynamics of the connection to that of the contorsion tensor.
Derivation
One way to quickly derive a metric-compatible affine connection is to repeat the sum–sum difference idea used in the derivation of the Levi-Civita connection, but without taking the torsion to be zero. Below is a derivation.
Convention for the derivation: connection coefficients are defined by $\nabla_\mu V^\lambda = \partial_\mu V^\lambda + \Gamma^\lambda{}_{\nu\mu} V^\nu$, with the differentiation index written last. (We choose to define the connection coefficients this way; the motivation is that of connection one-forms in gauge theory, $\omega^\lambda{}_\nu = \Gamma^\lambda{}_{\nu\mu}\, dx^\mu$.)
We begin with the metric compatibility condition:

\[ \nabla_\rho\, g_{\mu\nu} = \partial_\rho g_{\mu\nu} - \Gamma^\lambda{}_{\mu\rho}\, g_{\lambda\nu} - \Gamma^\lambda{}_{\nu\rho}\, g_{\mu\lambda} = 0. \]
Now we use the sum–sum difference (cycle the indices on the condition and form the combination with signs $+,+,-$), lowering all connection indices with the metric:

\[ \partial_\mu g_{\nu\rho} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} = \Gamma_{\rho\nu\mu} + \Gamma_{\nu\rho\mu} + \Gamma_{\mu\rho\nu} + \Gamma_{\rho\mu\nu} - \Gamma_{\nu\mu\rho} - \Gamma_{\mu\nu\rho}. \]
We now use the torsion tensor definition below (for a holonomic frame) to rewrite the connection:

\[ T^\lambda{}_{\mu\nu} = \Gamma^\lambda{}_{\mu\nu} - \Gamma^\lambda{}_{\nu\mu}. \]
Note that this definition of torsion has the opposite sign to the usual definition when using the above convention for the lower index ordering of the connection coefficients, i.e. it has the opposite sign to the coordinate-free definition in the section on affine geometry above. Rectifying this inconsistency (which seems to be common in the literature) would result in a contorsion tensor with the opposite sign.
Substituting the torsion tensor definition into the expression above gives:

\[ \partial_\mu g_{\nu\rho} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} = 2\Gamma_{\rho\mu\nu} + T_{\rho\nu\mu} + T_{\nu\rho\mu} + T_{\mu\rho\nu}. \]

Cleaning this up and combining like terms yields

\[ \Gamma_{\rho\mu\nu} = \tfrac{1}{2}\left( \partial_\mu g_{\nu\rho} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} \right) - \tfrac{1}{2}\left( T_{\rho\nu\mu} + T_{\nu\rho\mu} + T_{\mu\rho\nu} \right). \]
The torsion terms combine to make an object that transforms tensorially. Since these terms combine together in a metric-compatible fashion, they are given a name, the contorsion tensor, which determines the skew-symmetric part of a metric-compatible affine connection.
We will define it here with the motivation that it match the indices of the left-hand side of the equation above. Cleaning up by using the antisymmetry of the torsion tensor yields what we will define to be the contorsion tensor:

\[ K_{\rho\mu\nu} = -\tfrac{1}{2}\left( T_{\rho\nu\mu} + T_{\nu\rho\mu} + T_{\mu\rho\nu} \right) = \tfrac{1}{2}\left( T_{\rho\mu\nu} - T_{\nu\rho\mu} - T_{\mu\rho\nu} \right), \]

which is indeed antisymmetric in its first two indices.
Substituting this back into our expression, we have:

\[ \Gamma_{\rho\mu\nu} = \tfrac{1}{2}\left( \partial_\mu g_{\nu\rho} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} \right) + K_{\rho\mu\nu}. \]

Now isolate the connection coefficients by raising the first index with the metric, keeping the torsion terms grouped together:

\[ \Gamma^\lambda{}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\rho}\left( \partial_\mu g_{\nu\rho} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} \right) + K^\lambda{}_{\mu\nu}. \]
Recall that the first term with the partial derivatives is the Levi-Civita connection expression used often by relativists.
Following suit, define the first term, containing only the metric and its derivatives, to be the torsion-free Levi-Civita connection:

\[ \overline{\Gamma}^\lambda{}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\rho}\left( \partial_\mu g_{\nu\rho} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} \right). \]

Then the full metric-compatible affine connection can be written as:

\[ \Gamma^\lambda{}_{\mu\nu} = \overline{\Gamma}^\lambda{}_{\mu\nu} + K^\lambda{}_{\mu\nu}. \]
Relationship to teleparallelism
In the theory of teleparallelism, one encounters a connection, the Weitzenböck connection, which is flat (vanishing Riemann curvature) but has a non-vanishing torsion. The flatness is exactly what allows parallel frame fields to be constructed. These notions can be extended to supermanifolds.
See also
Belinfante–Rosenfeld stress–energy tensor
References
Tensors
Riemannian geometry
Connection (mathematics) | Contorsion tensor | [
"Engineering"
] | 1,160 | [
"Tensors"
] |
1,562,061 | https://en.wikipedia.org/wiki/Video%20Privacy%20Protection%20Act | The Video Privacy Protection Act (VPPA) is a bill that was passed by the United States Congress in 1988 as and signed into law by President Ronald Reagan. It was created to prevent what it refers to as "wrongful disclosure of video tape rental or sale records" or similar audio visual materials, to cover items such as video games. Congress passed the VPPA after Robert Bork's video rental history was published during his Supreme Court nomination and it became known as the "Bork bill". It makes any "video tape service provider" that discloses rental information outside the ordinary course of business liable for up to $2,500 in actual damages unless the consumer has consented, the consumer had the opportunity to consent, or the data was subject to a court order or warrant.
In 2013, the law was amended to add provisions allowing consumers to electronically consent to sharing video rental histories and to extend the time that consent can last to up to two years. The law became a focus of attention in the legal industry once again in the twenty-first century with the rise of audiovisual content sharing through digital media. Its revival is part of a trend in the filing of consumer privacy class actions, both through new laws like the California Consumer Privacy Act and older laws like the VPPA and wiretapping statutes.
Computer-based VPPA litigation
Toward the end of the 2010s and the beginning of the 2020s, the 1988 law experienced a resurgence in consumer class action lawsuits. The numerous lawsuits filed as part of this trend alleged that companies violated the VPPA by collecting and disclosing consumers' video viewing history through their websites, mobile apps, and other smart devices.
While the language of the VPPA focuses on "video tape service providers," consumers have argued that the law also protects the privacy of their personal information that is collected while they watch audiovisual content online. Cookies and other website behavior tracking technologies commonly found on popular websites allow the website operators to connect visitors' browsers with third parties who collect information from their website visit. This information can be shared with the third parties for various purposes including website functionality, language preferences and other personalization, and third party advertising. The recent resurgence of VPPA lawsuits is premised on the idea that data collected through the various tracking technologies may include personal information protected by the VPPA. Consumer plaintiffs assert that if that information is shared with third parties for analytics, advertising, or any other purpose that falls outside the exceptions in the VPPA, it is unlawful.
Prior to 2007, VPPA had not been cited by privacy attorneys as a cause of action involving electronic computing devices. Early lawsuits raising the VPPA in the context of data shared through the internet included a 2008 lawsuit against Facebook and thirty-three companies, including Blockbuster, Zappos, and Overstock.com, as well as the Lane v. Facebook, Inc. class action lawsuit, involving alleged privacy violations caused by the Facebook Beacon program.
The online advertising industry, in association with analytics companies, increasingly used video-based ads and at the same time gathered data from webpages and smart TVs showing digital video. By tracking web traffic online, consumers and their attorneys gather evidence of the data being collected by third parties through cookies and other tracking technologies when a person visits a website. Consumers use that traffic analysis to determine whether their protected personal information has been shared with third parties when they visited a particular website. For example, attorneys use software applications to log HTTP/HTTPS traffic between a computer's web browser and the Internet to produce evidence of tracking activities. This approach led to a $9.5 million settlement in the Lane v. Facebook, Inc. case.
2013 Amendments
Following VPPA litigation against Netflix and other digital media industry giants, in January 2013, President Barack Obama signed into law H.R. 6671 amending the VPPA. The amendments allow video rental companies to share rental information on social networking sites after obtaining customer permission.
Netflix, which had expressed concerns about violating the VPPA with its increasingly social video viewing services, reportedly lobbied for the change. Netflix cited the VPPA in 2011 following the announcement of its global integration with Facebook. The company noted that the VPPA was the sole reason why the new feature was not immediately available in the United States, and encouraged its customers to contact their representatives in support of legislation that would clarify the language of the law. In 2012, Netflix changed its privacy rules so that it no longer retained records for people who have left the site, a change that was reported to have been inspired by VPPA litigation.
Further results of VPPA litigation after the passage of these amendments were initially mixed. In 2015, the United States Court of Appeals for the Eleventh Circuit found that the law's protections do not reach the users of a free Android app, even when the app assigns each user a unique identification number and shares user behavior with a third party data analytics company.
References
1988 in American law
United States federal privacy legislation
Computer law | Video Privacy Protection Act | [
"Technology"
] | 1,008 | [
"Computer law",
"Computing and society"
] |
1,562,127 | https://en.wikipedia.org/wiki/Shekel%20function | The Shekel function or also Shekel's foxholes is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for testing optimization techniques.
The mathematical form of a function in dimensions with maxima is:
or, similarly,
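A minimal sketch in Python of the minimization form follows; the parameter matrix and coefficients below are the commonly used m = 10, n = 4 benchmark values (an assumption for illustration; the function itself is defined for any choice of a and c):

```python
import numpy as np

# Shekel function, minimization form: f(x) = -sum_i 1 / (c_i + ||x - a_i||^2).
# A and C below are the widely used m = 10, n = 4 benchmark set (assumed).
A = np.array([[4.0, 4.0, 4.0, 4.0],
              [1.0, 1.0, 1.0, 1.0],
              [8.0, 8.0, 8.0, 8.0],
              [6.0, 6.0, 6.0, 6.0],
              [3.0, 7.0, 3.0, 7.0],
              [2.0, 9.0, 2.0, 9.0],
              [5.0, 5.0, 3.0, 3.0],
              [8.0, 1.0, 8.0, 1.0],
              [6.0, 2.0, 6.0, 2.0],
              [7.0, 3.6, 7.0, 3.6]])
C = np.array([0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5])

def shekel(x, a=A, c=C):
    """Evaluate the Shekel function (minimization form) at point x."""
    x = np.asarray(x, dtype=float)
    # Squared Euclidean distance from x to each location a_i, offset by c_i.
    d = ((x - a) ** 2).sum(axis=1) + c
    return -np.sum(1.0 / d)

# The global minimum lies near the first row of A:
print(shekel([4.0, 4.0, 4.0, 4.0]))  # approximately -10.5
```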
Global minima
Numerically certified global minima and the corresponding solutions were obtained using interval methods for up to .
See also
Test functions for optimization
Functions and mappings
References
Further reading
Shekel, J. 1971. "Test Functions for Multimodal Search Techniques." Fifth Annual Princeton Conference on Information Science and Systems. | Shekel function | [
"Mathematics"
] | 126 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Applied mathematics",
"Mathematical objects",
"Applied mathematics stubs",
"Mathematical relations"
] |
1,562,440 | https://en.wikipedia.org/wiki/Acetyl%20chloride | Acetyl chloride () is an acyl chloride derived from acetic acid (). It belongs to the class of organic compounds called acid halides. It is a colorless, corrosive, volatile liquid. Its formula is commonly abbreviated to AcCl.
Synthesis
On an industrial scale, the reaction of acetic anhydride with hydrogen chloride produces a mixture of acetyl chloride and acetic acid:

(CH₃CO)₂O + HCl → CH₃COCl + CH₃COOH
Laboratory routes
Acetyl chloride was first prepared in 1852 by French chemist Charles Gerhardt by treating potassium acetate with phosphoryl chloride.
Acetyl chloride is produced in the laboratory by the reaction of acetic acid with chlorodehydrating agents such as phosphorus trichloride (PCl₃), phosphorus pentachloride (PCl₅), sulfuryl chloride (SO₂Cl₂), phosgene, or thionyl chloride (SOCl₂). However, these methods usually give acetyl chloride contaminated by phosphorus or sulfur impurities, which may interfere with the organic reactions.
Other methods
When heated, a mixture of dichloroacetyl chloride and acetic acid gives acetyl chloride. It can also be synthesized from the catalytic carbonylation of methyl chloride.
Occurrence
Acetyl chloride is not expected to exist in nature, because contact with water would hydrolyze it into acetic acid and hydrogen chloride. In fact, if handled in open air it releases white "smoke" resulting from hydrolysis due to the moisture in the air. The smoke is actually small droplets of hydrochloric acid and acetic acid formed by hydrolysis.
Uses
Acetyl chloride is used for acetylation reactions, i.e., the introduction of an acetyl group. Acetyl is an acyl group having the formula CH₃CO−. For further information on the types of chemical reactions compounds such as acetyl chloride can undergo, see acyl halide. Two major classes of acetylations include esterification and the Friedel-Crafts reaction.
Acetic acid esters and amide
Acetyl chloride is a reagent for the preparation of esters and amides of acetic acid, used in the derivatization of alcohols and amines. One class of acetylation reactions is esterification, for example the reaction with ethanol to produce ethyl acetate and hydrogen chloride:

CH₃COCl + CH₃CH₂OH → CH₃CO₂CH₂CH₃ + HCl
Frequently such acylations are carried out in the presence of a base such as pyridine, triethylamine, or DMAP, which act as catalysts to help promote the reaction and, acting as bases, neutralize the resulting HCl. Such reactions will often proceed via ketene.
Friedel-Crafts acetylations
A second major class of acetylation reactions are the Friedel-Crafts reactions.
See also
Acetic acid
Acetyl bromide
Acetyl fluoride
Acetyl iodide
References
External links
Acyl chlorides
Acetylating agents
Organic compounds with 2 carbon atoms
Acetyl compounds | Acetyl chloride | [
"Chemistry"
] | 609 | [
"Organic compounds",
"Reagents for organic chemistry",
"Acetylating agents",
"Organic compounds with 2 carbon atoms"
] |
1,562,475 | https://en.wikipedia.org/wiki/Silicon%20Glen | Silicon Glen is the nickname given to the high tech sector of Scotland, the name inspired by Silicon Valley in California. It is applied to the Central Belt triangle between Dundee, Inverclyde and Edinburgh, which includes Fife, Glasgow and Stirling; although electronics facilities outside this area may also be included in the term. The term has been in use since the 1980s. It does not technically represent a glen as it covers a much wider area than just one valley.
History
Origins
Silicon Glen had its origins in the electronics business with Ferranti establishing a plant in Edinburgh in 1943, relocating facilities from Manchester during the Second World War. When Ferranti remained in Edinburgh, other defence electronics companies also established themselves in Scotland, including the Marconi Company and Barr & Stroud. Major US companies followed in the late 1940s, including Honeywell and NCR Corporation, the latter setting up cash register and adding machine manufacturing in Dundee. IBM decided to establish a presence in the region in 1951, opening a manufacturing facility in Greenock in 1953. Indeed, this was typical of much of the early days of Silicon Glen, which were dominated by electronics manufacturing for foreign companies much more than research and development or the establishment of home grown companies.
Electronic dominance
The emphasis on electronics came about due to the decline in traditional Scottish heavy industries such as shipbuilding and mining. The government development agencies saw electronics manufacturing as being a positive replacement for people made redundant through heavy industry closures and the associated training and reskilling was relatively easy to achieve.
Semiconductors
Just as the bedrock of Silicon Valley was in semiconductors, Silicon Glen also had a significant influence in semiconductor design and manufacturing, starting in 1960 with Hughes Aircraft (now Raytheon) establishing its first facility outside the US in Glenrothes to manufacture germanium and silicon diodes. In 1965 Elliott Automation established a production facility in Glenrothes followed by a MOS research laboratory in 1967. This was followed in 1969 by the establishment of wafer fabs by General Instrument in Glenrothes, Motorola (now Freescale) in East Kilbride and National Semiconductor in Greenock. Signetics also opened a facility in Linlithgow in 1969.
In 1970, Compugraphic relocated from Aldershot to Glenrothes to provide photomask manufacturing for these companies. Other companies who developed semiconductor wafer fabrication or other manufacturing plants included SGS in Falkirk, NEC, Burr-Brown Corporation, IPS (then Seagate Technology) and Kymata (now Kaiam) in Livingston, CST in Glasgow and Micronas in Glenrothes.
There were some other notable successes such as the large Sun Microsystems plant in Linlithgow and the Digital Equipment Corporation semiconductor manufacturing plant in South Queensferry where the pioneering 64-bit Alpha 21064 and its derivatives were made. Digital also opened an office in Livingston, developing their flagship OpenVMS operating system. Digital's South Queensferry facility, opened in 1990 at an estimated cost of , was eventually sold to Motorola in 1995. At the time, Motorola itself employed 4,000 people at its own semiconductor plant at East Kilbride, as well as operating a cellular telephone plant at Easter Inch.
European single market
The potential and implications of a single European market motivated foreign companies, particularly those from the United States, to establish operations in Silicon Glen. By having a presence in a European Economic Community member country, companies could formally participate in standards committees and thus exert a degree of influence. Emerging European tariff rules concerning the origin of products were also strong motivators for the establishment of local manufacturing operations, with the EEC having updated its rules in 1989 to consider the location of the wafer diffusion phase of semiconductor production as determining the origin of the manufactured product. Local infrastructure support for the semiconductor industry was well regarded in Scotland, with local universities offering "a strong design base".
Rodime of Glenrothes pioneered the 3.5 inch hard disk drive in 1983 and spent subsequent years defending its patents against (and collecting royalties from) Seagate, Quantum, IBM and others.
Computing
The manufacturing sector grew to such an extent that at its peak it produced approximately 30% of Europe's PCs, 80% of its workstations, 65% of its ATMs and a significant percentage of its integrated circuits.
Recent history
Electronic decline
The heavy dependency on electronics manufacturing hit Silicon Glen hard after the collapse of the hi-tech economy in 2000. Viasystems, National Semiconductor (now Texas Instruments), Motorola and Chunghwa Picture Tubes all laid off substantial numbers of employees or closed factories completely. The effects of the Viasystems closure are still felt in the Scottish Borders today. Digital sold their Alpha facility to Motorola who eventually closed it down. Motorola also closed their factory in Bathgate and the substantial NEC plant in Livingston was also closed. In 2009 Sun ceased manufacturing at its Linlithgow plant and, after successive years of downsizing, NCR ended all manufacturing in Dundee.
However, there are many promising signs as well as a recognition that diversification away from electronics and manufacturing produces a more balanced and stronger economy. There is also more of an interest in encouraging home grown talent.
Scotland had 1,000 companies in electronics employing 25,000 people in 2004; employment had been in decline since 2000, when 48,000 people worked in the industry in Scotland. However, by 2016 Silicon Glen had begun to boom once again, with new digital start-ups - such as Skyscanner - choosing Scotland for headquarters or offices.
Global services
To diversify away from electronics and manufacturing, the development agencies now see global services as being a potential area of growth, but there is also substantial interest in the software development industry, including Rockstar North, developers of the market-leading Grand Theft Auto series. There is also a dynamic and fast-growing electronics design and development industry, based around links between the very strong universities and indigenous companies and projects like the Alba Campus. The software sector has also notably attracted Amazon.com to set up a software development centre in Edinburgh, the first such centre outside the US. There remains a significant presence of global players like National Semiconductor, IBM, Shin Etsu Handotai Europe Ltd and Freescale. The move from a primarily manufacturing-dominated region to a wealth-creation one has been successful, as demonstrated in a report from UBS Wealth Management in 2006 showing Scotland with more venture-backed companies per capita than any other UK region.
In addition to the indigenous companies, Silicon Glen continues to have quite a significant semiconductor design community of inward investment companies including Atmel, Freescale, Texas Instruments, Micrel, Analog Devices, Allegro MicroSystems, Micro Linear, Micronas and ST Microelectronics.
Semefab, the former General Instrument semiconductor foundry, has been funded as the UK's Primary Centre for the development of microelectromechanical systems (MEMS) and nanotechnology.
The Open Source Awards (formerly the Scottish Open Source Awards) have been run from Scotland since 2007. It was initially a subset of the Scottish Software Awards.
Notable companies
Many high technology companies are established in Silicon Glen, including:
Unity (game engine)
Microsoft
Amazon.com
Codeplay
FanDuel
IBM
Oracle Corporation
Rockstar North
Adobe Systems
Canon Medical Systems Corporation
Skyscanner
Motorola
NCR
Proper Games
Raytheon
Texas Instruments
Freescale
3Com
Agilent
Analog Devices
Atmel
Atos
Axeon
ST Microelectronics
Broadcom Corporation
Cadence Design Systems
Cirrus Logic
Dialog Semiconductor
Dynamo Games
IndigoVision
Thales Optronics
Toshiba Medical Visualization Systems
Version 1
Linn
Maxim Integrated Products
Memex Technology Limited
Micrel
Braindead Ape Games
Micronas Intermetall
Leonardo MW Limited
Semefab
Allegro MicroSystems
AND Digital
Waracle
WFS Technologies
ATEEDA
Codestuff
Compugraphics
ClinTec International
Clyde Space
Infinity Works
Youmanage
Digital Goldfish
Kaiam Europe limited
MEP Technologies Ltd
Brand Rex
Elonics
Kumulos
Optos
Micro Linear
BI Technologies
Mage Control Systems Ltd
CRC Group
Shin Etsu Handotai Europe Ltd
See also
List of places with 'Silicon' names
References and notes
External links
Scotland IS, the trade body for the Scottish IT sector
Pico and General Instrument's 1970 development of a single chip calculator processor chip Possibly pre-dating Intel and TI.
The death and rebirth of Silicon Glen BBC News
Open Tech Calendar, free and open list of tech events in Scotland
Economy of Scotland
High-technology business districts in the United Kingdom
Silicon Glen
Science and technology in Scotland
Information technology places | Silicon Glen | [
"Technology"
] | 1,723 | [
"Information technology",
"Information technology places"
] |
1,562,509 | https://en.wikipedia.org/wiki/Voltage%20regulator%20module | A voltage regulator module (VRM), sometimes called processor power module (PPM), is a buck converter that provides the microprocessor and chipset the appropriate supply voltage, converting , or to lower voltages required by the devices, allowing devices with different supply voltages be mounted on the same motherboard. On personal computer (PC) systems, the VRM is typically made up of power MOSFET devices.
Overview
Most voltage regulator module implementations are soldered onto the motherboard. Some processors, such as Intel Haswell and Ice Lake CPUs, feature some voltage regulation components on the same CPU package, reducing the VRM design burden on the motherboard; such a design brings a certain level of simplification to complex voltage regulation involving numerous CPU supply voltages and dynamic powering up and down of various areas of a CPU. A voltage regulator integrated on-package or on-die is usually referred to as a fully integrated voltage regulator (FIVR) or simply an integrated voltage regulator (IVR).
Most modern CPUs require less than 1.5 V, as CPU designers tend to use lower CPU core voltages; lower voltages help in reducing CPU power dissipation, which is often specified through thermal design power (TDP) that serves as the nominal value for designing CPU cooling systems.
Some voltage regulators provide a fixed supply voltage to the processor, but most of them sense the required supply voltage from the processor, essentially acting as a continuously-variable adjustable regulator. In particular, VRMs that are soldered to the motherboard are supposed to do the sensing, according to the Intel specification.
Modern video cards also use a VRM due to higher power and current requirements. These VRMs may generate a significant amount of heat and require heat sinks separate from the GPU.
Voltage identification
The correct supply voltage and current is communicated by the microprocessor to the VRM at startup via a number of bits called VID (voltage identification definition). In particular, the VRM initially provides a standard supply voltage to the VID logic, which is the part of the processor whose only aim is to then send the VID to the VRM. When the VRM has received the VID identifying the required supply voltage, it starts acting as a voltage regulator, providing the required constant voltage and current supply to the processor.
Instead of having a power supply unit generate some fixed voltage, the CPU uses a small set of digital signals, the VID lines, to instruct an on-board power converter of the desired voltage level. The switch-mode buck converter then adjusts its output accordingly. The flexibility so obtained makes it possible to use the same power supply unit for CPUs with different nominal supply voltages and to reduce power consumption during idle periods by lowering the supply voltage.
For example, a unit with a 5-bit VID would output one of at most 32 (2⁵) distinct output voltages. These voltages are usually (but not always) evenly spaced within a given range. Some of the code words may be reserved for special functions such as shutting down the unit, hence a 5-bit VID unit may have fewer than 32 output voltage levels. How the numerical codes map to supply voltages is typically specified in tables provided by component manufacturers. Since 2008, VID comes in 5-, 6- and 8-bit varieties and is mostly applied to power modules outputting between and .
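As an illustration of how a VID code can map to an output voltage, the sketch below implements a hypothetical 5-bit mapping with evenly spaced steps and one reserved shutdown code; the range, step size, and reserved code are assumptions for the example, not values from any particular VRM specification:

```python
from typing import Optional

# Hypothetical 5-bit VID decoder: evenly spaced output voltages plus one
# reserved code. Range, step, and reserved code are illustrative assumptions.
V_MAX = 1.6              # output voltage for VID code 0 (assumed)
STEP = 0.025             # voltage decrement per VID step (assumed)
SHUTDOWN_CODE = 0b11111  # reserved code meaning "no output" (assumed)

def vid_to_voltage(vid: int) -> Optional[float]:
    """Map a 5-bit VID code to an output voltage, or None for shutdown."""
    if not 0 <= vid < 32:
        raise ValueError("VID must fit in 5 bits")
    if vid == SHUTDOWN_CODE:
        return None  # regulator output disabled
    return V_MAX - vid * STEP

# 31 usable levels here: codes 0..30 give 1.600 V down to 0.850 V.
print(vid_to_voltage(0b00000))  # 1.6
print(vid_to_voltage(0b11110))  # 0.85
```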
VRM and overclocking
VRMs are essential for overclocking. The quality of a VRM directly impacts a motherboard's overclocking potential: the same overclocked processor can exhibit noticeable performance differences when paired with different VRMs. The reason for this is that successful overclocking requires a steady power supply; when a chip is pushed past its factory settings, its power draw increases, so the VRM needs to match its output accordingly.
See also
Switched-mode power supply applications (SMPS) applications
Pulse-width modulation
References
External links
"Microprocessor Power Management"
Module
Digital electronics
MOSFETs | Voltage regulator module | [
"Physics",
"Engineering"
] | 828 | [
"Physical quantities",
"Digital electronics",
"Voltage regulation",
"Electronic engineering",
"Voltage"
] |
1,562,591 | https://en.wikipedia.org/wiki/Vertical%20market%20software | Vertical market software is aimed at addressing the needs of any given business within a discernible vertical market (specific industry or market). While horizontal market software can be useful to a wide array of industries (such as word processors or spreadsheet programs), vertical market software is developed for and customized to a specific industry's needs.
Vertical market software is readily identifiable by the application-specific graphical user interface which defines it. One example of vertical market software is point-of-sale software.
See also
Horizontal market software
Horizontal market
Product software implementation method
Enterprise resource planning
Customer relationship management
Content management system
Supply chain management
Resources
Microsoft ships first Windows OS for vertical market from InfoWorld
The Limits of Open Source - Vertical Markets Present Special Obstacles
Software by type | Vertical market software | [
"Technology"
] | 150 | [
"Software by type"
] |
1,562,789 | https://en.wikipedia.org/wiki/New%20World%20rats%20and%20mice | The New World rats and mice are a group of related rodents found in North and South America. They are extremely diverse in appearance and ecology, ranging from the tiny Baiomys to the large Kunsia. They represent one of the few examples of muroid rodents (along with the voles) in North America, and the only example of muroid rodents to have made it into South America.
The New World rats and mice are often considered part of a single subfamily, Sigmodontinae, but the recent trend among muroid taxonomists is to recognize three separate subfamilies. This strategy better represents the extreme diversity of species numbers and ecological types.
Some molecular phylogenetic studies have suggested that the New World rats and mice are not a monophyletic group, but this is yet to be confirmed. Their closest relatives are clearly the hamsters and voles.
The New World rats and mice are divided into 3 subfamilies, 12 tribes, and 84 genera.
Classification
Family Cricetidae - hamsters, voles, and New World rats and mice
Subfamily Tylomyinae
Otonyctomys
Nyctomys
Tylomys
Ototylomys
Subfamily Neotominae
Tribe Baiomyini
Baiomys
Scotinomys
Tribe Neotomini
Neotoma
Xenomys
Hodomys
Nelsonia
Tribe Ochrotomyini
Ochrotomys
Tribe Reithrodontomyini
Peromyscus
Reithrodontomys
Onychomys
Neotomodon
Podomys
Isthmomys
Megadontomys
Habromys
Osgoodomys
Subfamily Sigmodontinae
Rhagomys incertae sedis
Tribe Oryzomyini
Oryzomys
Nesoryzomys
Melanomys
Sigmodontomys
Nectomys
Amphinectomys
Oligoryzomys
Neacomys
Zygodontomys
Lundomys
Holochilus
Pseudoryzomys
Microakodontomys
Oecomys
Microryzomys
Scolomys
Tribe Thomasomyini
Chilomys
Abrawayaomys
Delomys
Thomasomys
Wilfredomys
Aepomys
Phaenomys
Rhipidomys
Tribe Wiedomyini
Wiedomys
Tribe Akodontini
Akodon
Bibimys
Bolomys
Podoxymys
Thalpomys
Abrothrix
Chroeomys
Chelemys
Notiomys
Pearsonomys
Geoxus
Blarinomys
Juscelinomys
Oxymycterus
Lenoxus
Brucepattersonius
Scapteromys
Kunsia
Tribe Phyllotini
Calomys
Eligmodontia
Andalgalomys
Graomys
Salinomys
Phyllotis
Loxodontomys
Auliscomys
Galenomys
Chinchillula
Punomys
Andinomys
Irenomys
Euneomys
Neotomys
Reithrodon
Tribe Sigmodontini
Sigmodon
Tribe Ichthyomyini
Neusticomys
Rheomys
Anotomys
Chibchanomys
Ichthyomys
References
Centers for Disease Control, 2002. "Hantavirus Pulmonary Syndrome — United States: Updated Recommendations for Risk Reduction." Mortality and Morbidity Weekly Report, 51:09. Retrieved on 2007-07-13.
D'Elia, G. 2003. Phylogenetics of Sigmodontinae (Rodentia, Muroidea, Cricetidae), with special reference to the akodont group, and with additional comments on historical biogeography. Cladistics 19:307-323.
Mares, M. A., and J. K. Braun. 2000. Graomys, the genus that ate South America: A reply to Steppan and Sullivan. Journal of Mammalogy 81:271-276.
McKenna, M. C. and S. K. Bell. 1997. Classification of Mammals above the Species Level. Columbia University Press, New York.
Steppan, S. J., R. A. Adkins, and J. Anderson. 2004. Phylogeny and divergence date estimates of rapid radiations in muroid rodents based on multiple nuclear genes. Systematic Biology, 53:533-553.
Cricetidae
Paraphyletic groups | New World rats and mice | [
"Biology"
] | 889 | [
"Phylogenetics",
"Paraphyletic groups"
] |
1,562,796 | https://en.wikipedia.org/wiki/Aresti%20Catalog | The Aresti Catalog is the Fédération Aéronautique Internationale (FAI) standards document enumerating the aerobatic manoeuvers permitted in aerobatic competition. Designed by Spanish aviator Colonel José Luis Aresti Aguirre (1919–2003), each figure in the catalog is represented by lines, arrows, geometric shapes and numbers representing the precise form of a manoeuver to be flown.
The catalog broadly classifies manoeuvers into numbered families. Families 1 through 8 depict basic figures, such as turns, loops and vertical lines; family 9 depicts rotational elements that can be added to basic figures to increase difficulty, change the direction of flight or invert the g-loading of the aircraft.
Notation
In Aresti notation, solid lines represent upright or positive-g manoeuvers and dashed lines represent inverted or negative-g manoeuvers; these are sometimes depicted in red. A thick dot represents the beginning of the manoeuver, while a short perpendicular line represents the end. Stalled-wing manoeuvers such as spins and snap (flick) rolls are represented by triangles. Arrows represent rolling manoeuvers, with numbers indicating the extent and number of segments of the roll.
The catalog assigns each manoeuver a unique identifier, called a catalog number, and difficulty factor, represented by the symbol K. When a basic figure is combined with one or more rolling elements, the resultant figure K is the sum of all component Ks. During an aerobatics competition, judges grade the execution of each manoeuver with a value between 10 (perfect) and 0 (highly flawed). Each figure's grades are multiplied by its K and summed to yield a total raw score for the flight.
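As a sketch of the scoring arithmetic just described (the figure names and K values below are made-up placeholders, not entries from the catalog):

```python
# Raw score of a flight: sum over figures of (judge's grade x figure K).
# Figure names and K factors are illustrative placeholders only.
sequence = [
    ("figure A", 12),  # (name, K difficulty factor)
    ("figure B", 17),
    ("figure C", 23),
]
grades = [8.5, 9.0, 6.5]  # judge's grade per figure: 10 = perfect, 0 = flawed

raw_score = sum(k * g for (_, k), g in zip(sequence, grades))
print(raw_score)  # 12*8.5 + 17*9.0 + 23*6.5 = 404.5
```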
Notational systems for aerobatic manoeuvers have been used since the 1920s.
The first system accepted worldwide was published by French aviator François d'Huc Dressler in 1955 and 1956. It was used for international competitions through 1962.
José Aresti's development of a notation for aerobatic figures began while serving as an instructor in the Jerez Pilot Training School in the 1940s. By the end of 1961 Aresti published a dictionary of some 3,000 aerobatic manoeuvers, the Sistema Aerocryptographica Aresti. Then employed throughout Spain, the Spanish Aero Club urged its adoption internationally. The FAI's Aerobatics Commission, CIVA, elected to use the catalog beginning at the World Aerobatic Championships held in Bilbao, Spain in 1964; it has been in use worldwide and has evolved continually since then. Though the catalog had grown at one time to some 15,000 manoeuvers, a CIVA working group substantially streamlined it in the mid-1980s.
Following Aresti's death, a court fight ensued between his heirs and FAI, which once provided a free catalog online. The catalog is now only available in printed form for a fee from Aresti System S.L.
Software is available to design and display aerobatic sequences using Aresti notation.
Notes
External links
An article explaining Aresti notation.
Aerobatic competitions
Notation | Aresti Catalog | [
"Mathematics"
] | 640 | [
"Symbols",
"Notation"
] |
1,563,166 | https://en.wikipedia.org/wiki/HVC%20127-41-330 | HVC 127-41-330 is a high-velocity cloud in the constellation of Pisces. The three numbers that compose its name indicate, respectively, the galactic longitude and latitude, and velocity towards Earth in km/s. It is 20,000 light years in diameter and is located 2.3 million light years (700 kiloparsecs) from Earth, between M31 and M33. This cloud of neutral hydrogen (detectable via 21 cm H-I emissions), unlike other HVCs shows a rotational component and dark matter. 80% of the mass of the cloud is dark matter. It is also the first HVC discovered not associated with the Milky Way galaxy or subgroup (subcluster).
Astronomer Josh Simon considers it a candidate for being a dark galaxy. With its rotation, it may be a very low density dwarf galaxy of unused hydrogen (no stars), a remnant of the formation of the Local Group.
See also
Dark galaxy
VIRGOHI21
LSB galaxy
References
High-velocity clouds
Dark galaxies
Local Group
Pisces (constellation) | HVC 127-41-330 | [
"Physics",
"Astronomy"
] | 221 | [
"Dark matter",
"Unsolved problems in physics",
"Astronomy stubs",
"Constellations",
"Dark galaxies",
"Nebula stubs",
"Pisces (constellation)"
] |
1,563,378 | https://en.wikipedia.org/wiki/Orienting%20response | The orienting response (OR), also called orienting reflex, is an organism's immediate response to a change in its environment, when that change is not sudden enough to elicit the startle reflex. The phenomenon was first described by Russian physiologist Ivan Sechenov in his 1863 book Reflexes of the Brain, and the term ('ориентировочный рефлекс' in Russian) was coined by Ivan Pavlov, who also referred to it as the Shto takoye? (Что такое? or What is it?) reflex. The orienting response is a reaction to novel or significant stimuli. In the 1950s the orienting response was studied systematically by the Russian scientist Evgeny Sokolov, who documented the phenomenon called "habituation", referring to a gradual "familiarity effect" and reduction of the orienting response with repeated stimulus presentations.
Researchers have found a number of physiological mechanisms associated with OR, including changes in phasic and tonic skin conductance response (SCR), electroencephalogram (EEG), and heart rate following a novel or significant stimulus. These observations all occur within seconds of stimulus introduction. In particular, EEG studies of OR have corresponded particularly with the P300 wave and P3a component of the OR-related event-related potential (ERP).
Neural correlates
Current understanding of the localization of OR in the brain is still unclear. In one study using fMRI and SCR, researchers found novel visual stimuli associated with SCR responses typical of an OR also corresponded to activation in the hippocampus, anterior cingulate gyrus, and ventromedial prefrontal cortex. These regions are also believed to be largely responsible for emotion, decision making, and memory. Increases in cerebellar and extrastriate cortex were also recorded, which are significantly implicated in visual perception and processing.
Function
When an individual encounters a novel environmental stimulus, such as a bright flash of light or a sudden loud noise, they will pay attention to it even before identifying it. This orienting reflex seems to be present early in development, as babies will turn their head toward an environmental change (Nelson Cowan, 1995). From an evolutionary perspective, this mechanism is useful in reacting quickly to events that call for immediate action.
Habituation
Sokolov's investigation of OR was primarily motivated by the aim of understanding habituation. The first introduction of a novel stimulus, defined in Sokolovian terms as any change from the "currently active neuronal model" (what the individual is currently focused on), results in OR. However, with repeated introduction of the same stimulus, the orienting response will decrease in intensity and eventually cease. When novel stimuli have an associated contextual significance, repeated stimulus will still result in a sequentially decreasing OR, though at a modified rate of decay.
Orienting in decision-making
The orienting response is believed to play an integral role in preference formation. When faced with deciding between two options, subjects in studies by Simion & Shimojo were shown to choose the items they preferentially orient their gaze toward. This gaze can occur while the stimulus is present or after it has been removed, the latter causing gaze to be fixated at the point at which the stimulus had been present. Gaze bias ceases following a decision, suggesting that gaze bias is the cause of preference and not its effect. Noting this postulated causal link with the irrelevance of a stimulus presence, it is argued that gaze orientation supports decision-making mechanisms in inducing a preferential bias.
Role between emotion and attention
Both novelty and significance of a stimulation are implicated in the generation of an orienting response. Specifically, the emotional significance of a stimulus, defined by its level of pleasantness, can affect the intensity of the orienting response toward focusing attention on a subject. Studies showed that during exposure to neutral and emotionally significant novel images, both pleasant and unpleasant images produced higher skin conductance readings than neutral images. With repeated stimulation, all skin conductance readings diminished relative to novel introduction, though with emotionally significant content diminishing more slowly. Conversely, studies observing cardiac deceleration during novel stimuli introduction showed significantly more deceleration for unpleasant stimuli compared to pleasant and neutral stimuli. These findings suggest that OR represents a combination of responses that act in tandem to a common stimulus. More importantly, the differences between emotionally charged and neutral stimuli demonstrates the influence of emotion in orienting attention, despite novelty.
In relation to therapy
The orienting response has been posited as being stimulated by bilateral stimulation, and being the active ingredient in Eye movement desensitization and reprocessing (EMDR) therapy.
In popular culture
In his 2007 book The Assault on Reason, Al Gore posited that watching television affects the orienting response, an effect similar to vicarious traumatization.
See also
Information metabolism
Browsing
Perception
Attention
Interest (emotion)
References
Sokolov, E. N., Spinks, J. A., Näätänen, R., & Lyytinen, H. (2002). The Orienting Response in Information Processing. Mahwah, NJ: Lawrence Erlbaum Associates.
Neurophysiology
Physiology
Behavioral concepts | Orienting response | [
"Biology"
] | 1,086 | [
"Behavior",
"Behavioral concepts",
"Behaviorism",
"Physiology"
] |
1,563,435 | https://en.wikipedia.org/wiki/Berlin%20Circle | The Berlin Circle () was a group that maintained logical empiricist views about philosophy.
History
The "Berlin Circle" had its roots in seminars by Hans Reichenbach between 1926-1928, resulting in the formation of a group that included Reichenbach, Kurt Grelling and Walter Dubislav among others. Independently, the Machist philosopher Joseph Petzoldt and others founded the local Berlin group (German: "Berliner Ortsgruppe") of the International society for empirical philosophy (German: "Internationale Gesellschaft für empirische Philosophie"), which was subsequently joined by the members of Reichenbach's group as well. The society was renamed in 1928 as Berlin society for empirical philosophy (German: "Berliner Gesellschaft für empirische Philosophie"), and after Petzoldt's death in 1929, the society was essentially taken over by Reichenbach's group, who in 1931 rebranded it as Berlin society for scientific philosophy (German: "Berliner Gesellschaft für wissenschaftliche Philosophie"). Additional members of the group include philosophers and scientists such as Carl Gustav Hempel, David Hilbert and Richard von Mises. Together with the Vienna Circle, they published the journal Erkenntnis ("Knowledge") edited by Rudolf Carnap and Reichenbach, and organized several congresses and colloquia concerning the philosophy of science, the first of which was held in Prague in 1929.
The Berlin Circle had much in common with the Vienna Circle, but the philosophies of the circles differed on a few subjects, such as probability and conventionalism. Reichenbach insisted on calling his philosophy logical empiricism, to distinguish it from the logical positivism of the Vienna Circle. Few people today make the distinction, and the words are often used interchangeably. Members of the Berlin Circle were particularly active in analyzing the philosophical and logical consequences of the advances in contemporary physics, especially the theory of relativity. Apart from that, they denied the soundness of metaphysics and traditional philosophy and asserted that many philosophical problems are indeed meaningless. After the rise of Nazism, several of the group's members emigrated to other countries, including Reichenbach, who moved to Turkey in 1933 and later to the United States in 1938; Dubislav emigrated to Prague in 1936; Hempel moved to Belgium in 1934 and later to the United States in 1939; and Grelling was killed in a concentration camp. A younger member of the Berlin Circle or Berlin School to leave Germany was Olaf Helmer who joined the RAND Corporation and played an important role in the development of the Delphi method used for predicting future trends, and other early forms of social technology.
After emigrating to various countries the group effectively came to an end, but not without influencing a wide range of philosophers of the 20th century, its method having been especially influential on analytic philosophy and futurology.
References
Logical positivism
1930s in Berlin
Philosophy of science
Weimar culture | Berlin Circle | [
"Mathematics"
] | 605 | [
"Mathematical logic",
"Logical positivism"
] |
1,563,600 | https://en.wikipedia.org/wiki/Content%20word | Content words, in linguistics, are words that possess semantic content and contribute to the meaning of the sentence in which they occur. In a traditional approach, nouns were said to name objects and other entities, lexical verbs to indicate actions, adjectives to refer to attributes of entities, and adverbs to attributes of actions. They contrast with function words, which have very little substantive meaning and primarily denote grammatical relationships between content words, such as prepositions (in, out, under etc.), pronouns (I, you, he, who etc.) and conjunctions (and, but, till, as etc.).
All words can be classified as either content or function words, but it is not always easy to make the distinction. With only around 150 function words, 99.9% of words in the English language are content words. Although small in number, function words are used at a disproportionately higher rate than content words and make up about 50% of any English text because of the conventional patterns of usage that bind function words to content words almost every time they are used, which creates an interdependence between the two word groups.
Content words are usually open class words, and new words are easily added to the language. In relation to English phonology, content words generally adhere to the minimal word constraint of being no shorter than two morae long (a minimum length of two light syllables or one heavy syllable), but function words often do not.
See also
Lexical verb
Grammaticalization, the process by which words may change from content to function words
References
Linguistic morphology
Types of words
Parts of speech | Content word | [
"Technology"
] | 334 | [
"Parts of speech",
"Components"
] |
1,563,701 | https://en.wikipedia.org/wiki/Stag%20hunt | In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma or common interest game, describes a conflict between safety and social cooperation. The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his Discourse on Inequality. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. However, both hunters know the only way to successfully hunt a stag is with the other's help. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. But both hunters would be better off if both choose the more ambitious and more rewarding goal of getting the stag, giving up some autonomy in exchange for the other hunter's cooperation and added might. This situation is often seen as a useful analogy for many kinds of social cooperation, such as international agreements on climate change.
The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria: one where both players cooperate, and one where both players defect. In the prisoner's dilemma, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect.
An example of the payoff matrix for the stag hunt is pictured in Figure 2.
Formal definition
Formally, a stag hunt is a game with two pure strategy Nash equilibria—one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt, where $a > b \ge d > c$.
In addition to the pure strategy Nash equilibria there is one mixed strategy Nash equilibrium. This equilibrium depends on the payoffs, but the risk dominance condition places a bound on the mixed strategy Nash equilibrium. No payoffs (that satisfy the above conditions including risk dominance) can generate a mixed strategy equilibrium where Stag is played with a probability lower than one half. The best response correspondences are pictured here.
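To make the bound explicit, write $a$ for the payoff to mutual Stag, $d$ for mutual Hare, $b$ for playing Hare against Stag, and $c$ for playing Stag against Hare (the standard reading of the generic matrix; the actual Figure 1 is not reproduced here). In the symmetric mixed equilibrium each player hunts stag with the probability $p$ that leaves the other indifferent:

\[ p\,a + (1-p)\,c = p\,b + (1-p)\,d \quad\Longrightarrow\quad p = \frac{d-c}{(a-b)+(d-c)}. \]

Risk dominance of the Hare equilibrium means $d - c \ge a - b$, which forces $p \ge \tfrac{1}{2}$.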
The stag hunt and social cooperation
Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see ).
There is a substantial relationship between the stag hunt and the prisoner's dilemma. In biology many circumstances that have been described as prisoner's dilemma might also be interpreted as a stag hunt, depending on how fitness is calculated.
It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, suppose we have a prisoner's dilemma as pictured in Figure 3. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection. For instance, if the expected punishment is −2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction.
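To see the arithmetic with assumed illustrative payoffs (not the exact values of Figure 3): take a prisoner's dilemma with mutual-cooperation payoff 2, temptation payoff 3 for defecting against a cooperator, sucker payoff 0, and mutual-defection payoff 1. Imposing an expected punishment of −2 on defection against a cooperator lowers the temptation payoff from 3 to 1:

\[
\begin{array}{c|cc}
 & C & D \\ \hline
C & 2,\,2 & 0,\,3 \\
D & 3,\,0 & 1,\,1
\end{array}
\qquad\longrightarrow\qquad
\begin{array}{c|cc}
 & C & D \\ \hline
C & 2,\,2 & 0,\,1 \\
D & 1,\,0 & 1,\,1
\end{array}
\]

In the right-hand game mutual cooperation and mutual defection are both equilibria, and the payoffs satisfy the stag hunt ordering $a = 2 > b = 1 \ge d = 1 > c = 0$.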
Examples of the stag hunt
The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag, and found it to follow a certain path. If all the hunters work together, they can kill the stag and all eat. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry.
The hunters hide and wait along a path. An hour goes by, with no sign of the stag. Two, three, four hours pass, with no trace. A day passes. The stag may not pass every day, but the hunters are reasonably certain that it will come. However, a hare is seen by all hunters moving along the path.
If a hunter leaps out and kills the hare, he will eat, but the trap laid for the stag will be wasted and the other hunters will starve. There is no certainty that the stag will arrive; the hare is present. The dilemma is that if one hunter waits, he risks one of his fellows killing the hare for himself, sacrificing everyone else. This makes the risk twofold; the risk that the stag does not appear, and the risk that another hunter takes the kill.
In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts. One example addresses two individuals who must row a boat. If both choose to row they can successfully move the boat. However, if one doesn't, the other wastes his effort. Hume's second example involves two neighbors wishing to drain a meadow. If they both work to drain it they will be successful, but if either fails to do his part the meadow will not be drained.
Several animal behaviors have been described as stag hunts. One is the coordination of slime molds. In times of stress, individual unicellular protists will aggregate to form one large body. Here if they all act together they can successfully reproduce, but success depends on the cooperation of many individual protozoa. Another example is the hunting practices of orcas (known as carousel feeding). Orcas cooperatively corral large schools of fish to the surface and stun them by hitting them with their tails. Since this requires that the fish have no way to escape, it requires the cooperation of many orcas.
Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea. Carol M. Rose argues that the stag hunt theory is useful in 'law and humanities' theory. In international law, countries are the participants in a stag hunt. They can, for example, work together to improve good corporate governance.
A stag hunt with pre-play communication
Robert Aumann proposed: "Let us now change the scenario by permitting pre-play communication. On the face of it, it seems that the players can then 'agree' to play (c,c); though the agreement is not enforceable, it removes each player's doubt about the other one playing c". Aumann concluded that in this game "agreement has no effect, one way or the other." His argument runs as follows: "The information that such an agreement conveys is not that the players will keep it (since it is not binding), but that each wants the other to keep it." In this game "each player always prefers the other to play c, no matter what he himself plays. Therefore, an agreement to play (c,c) conveys no information about what the players will do, and cannot be considered self-enforcing." Weiss and Agassi wrote about this argument: "This we deem somewhat incorrect since it is an oversight of the agreement that may change the mutual expectations of players that the result of the game depends on... Aumann's assertion that there is no a priori reason to expect agreement to lead to cooperation requires completion; at times, but only at times, there is a posteriori reason for that... How a given player will behave in a given game, thus, depends on the culture within which the game takes place".
See also
Common knowledge (logic)
Discourse on Inequality
Mutual knowledge
Pluralistic ignorance
Prisoner's dilemma
Social contract
Christmas truce
Explanatory footnotes
References
Notes
Bibliography
External links
The stag hunt at GameTheory.net
Non-cooperative games
Evolutionary game theory
Social science experiments | Stag hunt | [
"Mathematics"
] | 1,552 | [
"Game theory",
"Non-cooperative games",
"Evolutionary game theory"
] |
1,563,736 | https://en.wikipedia.org/wiki/World%20clock | A world clock is a clock which displays the time for various cities around the world.
The display can take various forms:
The clock face can incorporate multiple round analogue clocks with moving hands or multiple digital clocks with numeric readouts, with each clock being labelled with the name of a major city or time zone in the world. The World Clock in Alexanderplatz displays 146 cities in all 24 time zones on its head.
It could also be a picture map of the world with embedded analog or digital time-displays.
A moving circular map of the world, rotating inside a stationary 24-hour dial ring. Alternatively, the disc can be stationary and the ring moving.
Light projection onto a map representing daytime, used in the Geochron, a brand of a particular form of world clock.
There are also worldtime watches, both wrist watches and pocket watches. Sometimes manufacturers of timekeepers erroneously apply the worldtime label to instruments that merely indicate time for two or a few time zones, but the term should be used only for timepieces that indicate time for all major time zones of the globe.
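As a minimal sketch of the multi-city digital form described above, assuming Python's standard zoneinfo time-zone database and an invented selection of cities:

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

CITIES = {  # illustrative selection; a real world clock may label dozens
    "New York": "America/New_York",
    "London": "Europe/London",
    "Moscow": "Europe/Moscow",
    "Tokyo": "Asia/Tokyo",
}

def world_clock():
    """Print one labelled readout per city, like a bank of digital clocks."""
    now_utc = datetime.now(ZoneInfo("UTC"))
    for city, zone in CITIES.items():
        local = now_utc.astimezone(ZoneInfo(zone))
        print(f"{city:10s} {local:%H:%M (%Z)}")

world_clock()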
See also
Manege Square, Moscow
Time zone
Jens Olsen's World Clock (actually an astronomical clock)
References
Clocks
Horology | World clock | [
"Physics",
"Technology",
"Engineering"
] | 248 | [
"Machines",
"Physical quantities",
"Time",
"Horology",
"Clocks",
"Measuring instruments",
"Physical systems",
"Spacetime"
] |
1,563,824 | https://en.wikipedia.org/wiki/Martin%20Nowak | Martin Andreas Nowak (born April 7, 1965) is an Austrian-born professor of mathematics and biology at Harvard University. He is one of the leading researchers in evolutionary dynamics. Nowak has made contributions to the fields of evolutionary theory, cooperation, viral dynamics, and cancer dynamics.
Nowak held professorships at Oxford University and at the Institute for Advanced Study, Princeton, before being recruited by Harvard in 2003, when Jeffrey Epstein donated a large sum of money to support Nowak's work and set up a center for studying cooperation in evolution. Nowak was the director of Harvard's resulting Program for Evolutionary Dynamics (PED) from 2003 until 2020, when he was disciplined for having provided Epstein with an office, keycard, and passcode, and for allowing him free and unlimited access to PED for over ten years after his conviction for sex crimes; as punishment, Nowak was suspended from supervising undergraduate research for two years and the institute was permanently closed.
Martin Nowak is famous for his extensive contributions to various scientific disciplines, including evolutionary game theory, virology, cancer dynamics, and the evolution of cooperation. Throughout his career, Nowak has collaborated with notable figures such as Robert May, Karl Sigmund, and John Maynard Smith. His work spans a wide range of topics, from the somatic evolution of cancer to the origins of language and prelife. Nowak has authored over 300 scientific publications, including many contributions to Nature and Science.
Aside from his scientific career, Nowak has also authored five books. His 2006 work Evolutionary Dynamics received praise for its unique perspective on theoretical biology and won the R.R. Hawkins Award. In 2011, he co-authored SuperCooperators, which argues for cooperation as a fundamental principle of evolution, garnering positive reviews. Additionally, Nowak has edited books, including Evolution, Games, and God, which examines the relationship between theology and evolutionary theory. He has received numerous awards for his contributions to science, including the Weldon Memorial Prize, the Albert Wander Prize, the Akira Okubo Prize, the David Starr Jordan Prize and the Henry Dale Prize.
Nowak identifies as a Roman Catholic, advocating for the compatibility of science and religion in the pursuit of truth. His 2024 book, Beyond, is a poetic exploration of the connection between religion and science. In 2015, he received the honorary degree Doctor of Humane Letters from the Dominican School of Philosophy & Theology at Berkeley.
Early life and education
Nowak was born in Vienna, Austria. He studied at Albertus Magnus Gymnasium and the University of Vienna, earning a doctorate in biochemistry and mathematics in 1989. He worked with Peter Schuster on quasi-species theory and with Karl Sigmund on evolution of cooperation. Nowak received the highest Austrian honors (Sub auspiciis Praesidentis) when awarded his degree. In 1993, he received his "Habilitation" at the Institute of Mathematics at the University of Vienna. In 2001, he was elected into the Austrian Academy of Sciences.
Career
From 1989 to 1998, Nowak worked at the University of Oxford with Robert May. First, he was an Erwin Schrödinger postdoctoral Scholar, then a Junior Research Fellow at Wolfson College, then a Junior Research Fellow at Keble College. From 1992, he was a Wellcome Trust Senior Research Fellow. From 1997 to 1998, Nowak was a professor of mathematical biology.
In 1998, Martin Nowak was recruited by the Institute for Advanced Study in Princeton.
He was Head of the Institute's first Initiative in Theoretical Biology from 1998 until 2003.
In 2003, Nowak was recruited to Harvard University as Professor of Mathematics and Biology. Nowak was also co-director with Sarah Coakley of the Evolution and Theology of Cooperation project at Harvard University, sponsored by the Templeton Foundation, where he was also a member of their Board of Advisers. He was appointed Director of the Program for Evolutionary Dynamics (PED). The PED was funded with a large sum of money from the Jeffrey Epstein VI Foundation. In 2003, Epstein had introduced himself as a science philanthropist, cementing the initial interaction with a large donation to Harvard. Scientific American reported that Nowak's team received US$6.5 million initially, with nothing released to him after 2007; a couple of hundred thousand dollars remained unspent.
After Epstein's 2008 conviction, Harvard president Drew Faust decided that the university would no longer accept his donations. A report commissioned by the university found that Nowak allowed Epstein to visit the PED offices more than 40 times after his conviction, to maintain an office with a phone line and webpage, and to interact with students at PED. In 2020, the university placed Nowak on paid academic leave for violation of campus policies including professional conduct and campus access. In 2021, Harvard decided that a proportionate response to the severity of Nowak's failure to follow Harvard policies was to close the institute founded with Epstein's money, to donate the remaining money to a foundation helping victims of sexual assaults, and to impose a two-year ban on Nowak supervising undergraduate research, serving as the principal investigator of new grants, and supervising new graduate students or postdoctoral fellows. Nowak said he would "take the lessons from this time with me as I move forward". The sanctions against Nowak were lifted in 2023.
Academic research
Nowak has authored books and scientific papers on topics in evolutionary game theory, cancer, viruses, infectious disease, the evolution of language, and the evolution of cooperation. At Oxford, he helped to establish the fields of virus dynamics and spatial games (which later became evolutionary graph theory). He continued his collaboration with Karl Sigmund in game theory, proposing generous tit-for-tat and win-stay, lose-shift, inventing adaptive dynamics, alternating games and indirect reciprocity. He collaborated with John Maynard Smith on genetic redundancy, with Baruch Blumberg on hepatitis B virus, with George Shaw and Andrew McMichael on HIV. He worked with Robert May on evolution of virulence.
In 1990, Nowak and Robert May proposed a mathematical model which explained the puzzling delay between HIV infection and AIDS in terms of the evolution of different strains of the virus during individual infections, to the point where the genetic diversity of the virus reaches a threshold whereby the immune system can no longer control it. This detailed quantitative approach depended on assumptions about the biology of HIV which were subsequently confirmed by experiment.
At Harvard, Nowak continued his work on virus dynamics, cancer dynamics, and evolutionary game theory. In 2004, he established evolutionary game dynamics in finite populations. In 2005 and 2006 he wrote key papers establishing evolutionary graph theory. In 2006, he suggested that cooperation was a third fundamental principle of evolution beside mutation and selection. In 2007, he proposed prelife - a theory for the origin of life. In 2008 and 2009 he suggested that positive interaction, but not punishment, promotes evolution of cooperation.
In a paper in Science in 2006, Nowak enunciated and unified the mathematical rules for the five understood bases of the evolution of cooperation (kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection). Nowak suggests that evolution is constructive because of cooperation, and that we might add “natural cooperation” as a third fundamental principle of evolution beside mutation and natural selection.
In a paper featured on the front cover of Nature in 2007, Nowak and colleagues demonstrated that the regularization of irregular verbs in English over time obeys a simple inverse-square-root law: a verb's rate of regularization is inversely proportional to the square root of its usage frequency. This provided one of the first quantitative laws in the evolution of language.
In 2010 a paper by Nowak, E. O. Wilson, and Corina Tarnita, in Nature, argued that standard natural selection theory represents a simpler and superior approach to kin selection theory in the evolution of eusociality. This work has led to many comments including strong criticism from proponents of inclusive fitness theory. Nowak maintains that the findings of the paper are conclusive and that the field of social evolution should move beyond inclusive fitness theory.
He has over 300 scientific publications, of which 40 are in Nature and 15 in Science.
Nowak's research interests include:
Somatic evolution of cancer, genetic instability, tumor suppressor genes
Stem cells, tissue architecture
Viruses, infectious diseases, immunology
Dynamics of prion infections
Quasispecies
Genetic redundancy
Evolution of language
Evolutionary game theory
Evolutionary graph theory
Evolution of cooperation
Prelife and origins of life
Published books
Nowak's first book Virus Dynamics: Mathematical Principles of Immunology and Virology, written with Robert May, was published by Oxford University Press in 2001. Nowak's 2006 book Evolutionary Dynamics: Exploring the Equations of Life discusses the evolution of various biological processes. Reviewing Evolutionary Dynamics in Nature, Sean Nee called it a "unique book" that "should be on the shelf of anyone who has, or thinks they might have, an interest in theoretical biology." The book received the Association of American Publishers' R.R. Hawkins Award for the Outstanding Professional, Reference or Scholarly Work of 2006.
Nowak's book SuperCooperators: The Mathematics of Evolution, Altruism and Human Behaviour (Or, Why We Need Each Other to Succeed), co-authored with Roger Highfield, was published in 2011. SuperCooperators is both an autobiography of Nowak and a popular presentation of his work in mathematical biology on the evolution of cooperation, the origin of life, and the evolution of language. In the book, Nowak argues that cooperation is the third fundamental principle of evolution, next to mutation and natural selection. SuperCooperators received positive reviews in The New York Times, Nature, and the Financial Times.
With Sarah Coakley, Nowak edited the 2013 book Evolution, Games, and God: The Principle of Cooperation, published by Harvard University Press. The volume features articles from experts in multiple fields who explore the interplay between theology and evolutionary theory as pertaining to cooperation and altruism.
Awards
Nowak is a corresponding member of the Austrian Academy of Sciences. He won the Weldon Memorial Prize, the Albert Wander Prize, the Akira Okubo Prize, the David Starr Jordan Prize and the Henry Dale Prize.
Personal life
Nowak is a Roman Catholic. In a 2007 lecture at Harvard, he argued that science and religion occupied different but complementary roles in humans' search for meaning, stating: "Science and religion are two essential components in the search for truth. Denying either is a barren approach."
References
External links
Martin Nowak: Extended film interview with transcript for the 'Why Are We Here?' documentary series.
1965 births
Christian scholars
Living people
Fellows of Wolfson College, Oxford
Evolutionary biologists
University of Vienna alumni
Harvard University faculty
Austrian Roman Catholics
Theistic evolutionists
Austrian mathematicians
Austrian biochemists | Martin Nowak | [
"Biology"
] | 2,192 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
1,563,927 | https://en.wikipedia.org/wiki/California%20Nebula | The California Nebula (Also known NGC 1499 or Sh2-220) is an emission nebula located in the constellation Perseus. Its name comes from its resemblance to the outline of the US State of California in long exposure photographs. It is almost 2.5° long on the sky and, because of its very low surface brightness, it is extremely difficult to observe visually. It can be observed with a Hα filter (isolates the Hα line at 656 nm) or Hβ filter (isolates the Hβ line at 486 nm) in a rich-field telescope under dark skies. It lies at a distance of about 1,000 light years from Earth. Its fluorescence is due to excitation of the Hβ line in the nebula by the nearby prodigiously energetic O7 star, Xi Persei (also known as Menkib).
The California Nebula was discovered by E. E. Barnard in 1884.
By coincidence, the California Nebula transits in the zenith in central California as the latitude matches the declination of the object.
References
External links
NGC 1499 at SEDS
Menkhib and the California Nebula by the Wide-field Infrared Survey Explorer (WISE)
NGC objects
Perseus (constellation)
Emission nebulae
Sharpless objects | California Nebula | [
"Astronomy"
] | 263 | [
"Perseus (constellation)",
"Constellations"
] |
1,564,180 | https://en.wikipedia.org/wiki/HD%2046375 | HD 46375 is double star with an exoplanetary companion in the equatorial constellation of Monoceros. It presents as an 8th-magnitude star with an apparent visual magnitude of 7.91, which is too dim to be readily visible to the naked eye. The system is located at a distance of 96.5 light-years from the Sun based on parallax measurements, but is slowly drifting closer with a radial velocity of −1 km/s. The common proper motion stellar companion, designated HD 46375 B, has a linear projected separation of .
The primary component is a solar-type star with a stellar classification of G9V, matching a G-type main-sequence star. Age estimates for this star range from 2.6 up to 11.9 billion years. It is a chromospherically inactive star and is spinning slowly with a projected rotational velocity of 0.86 km/s. The absolute magnitude of this star places it one magnitude brighter than the equivalent for a zero age main sequence. It has 91% of the mass and 101% of the radius of the Sun. The star is radiating 77% of the luminosity of the Sun from its photosphere at an effective temperature of 3,663 K.
This star has sometimes been classified as a member of the NGC 2244 star cluster in the Rosette Nebula, but in reality it just happens to lie in the foreground. The distance to the cluster is much greater, about 4500 light-years.
Planetary system
On March 29, 2000, the planet HD 46375 b with a minimum mass three quarters that of Saturn was discovered by Marcy, Butler, and Vogt in California, together with 79 Ceti b. This planet was discovered using the "wobble method" or radial velocity method, which calculates the rate and shape of the stellar wobble caused by the revolving planet's gravity.
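As a hedged sketch of the size of that wobble, the standard radial-velocity semi-amplitude formula can be evaluated with the stellar and planetary masses quoted above; the roughly three-day orbital period used here is an assumed illustrative value, not stated in this article.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_SATURN = 5.683e26  # kg

def rv_semi_amplitude(m_planet, m_star, period_s, ecc=0.0, sin_i=1.0):
    """K = (2*pi*G/P)^(1/3) * m_p*sin(i) / ((M_* + m_p)^(2/3) * sqrt(1 - e^2))."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - ecc ** 2)))

m_p = 0.75 * M_SATURN  # minimum mass from the discovery, as quoted above
m_s = 0.91 * M_SUN     # 91% of the Sun's mass, as quoted above
K = rv_semi_amplitude(m_p, m_s, period_s=3.02 * 86400)  # assumed period
print(f"K ~ {K:.1f} m/s")  # tens of m/s: measurable as a periodic Doppler shift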
See also
List of extrasolar planets
References
External links
G-type main-sequence stars
M-type main-sequence stars
Double stars
Planetary systems with one confirmed planet
Monoceros
Durchmusterung objects
046375
031246 | HD 46375 | [
"Astronomy"
] | 439 | [
"Monoceros",
"Constellations"
] |
1,564,194 | https://en.wikipedia.org/wiki/Universal%20instantiation | In predicate logic, universal instantiation (UI; also called universal specification or universal elimination, and sometimes confused with dictum de omni) is a valid rule of inference from a truth about each member of a class of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for the universal quantifier but it can also be encoded in an axiom schema. It is one of the basic principles used in quantification theory.
Example: "All dogs are mammals. Fido is a dog. Therefore Fido is a mammal."
Formally, the rule as an axiom schema is given as
∀x A ⇒ A{x ↦ t}
for every formula A and every term t, where A{x ↦ t} is the result of substituting t for each free occurrence of x in A; A{x ↦ t} is an instance of ∀x A.
And as a rule of inference it is
from ⊢ ∀x A infer ⊢ A{x ↦ t}.
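As a minimal illustration (not part of the original article), universal instantiation is plain function application in a proof assistant such as Lean 4: a proof of ∀ x, P x applied to a term t yields a proof of P t. The domain `Animal` and predicates below are hypothetical names chosen for the example.

-- Universal instantiation: from h : ∀ x, P x and a term t, obtain P t.
example (α : Type) (P : α → Prop) (h : ∀ x, P x) (t : α) : P t := h t

-- The article's example, over a hypothetical domain `Animal`:
example (Animal : Type) (Dog Mammal : Animal → Prop)
    (h : ∀ x, Dog x → Mammal x) (fido : Animal) (hd : Dog fido) :
    Mammal fido :=
  h fido hd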
Irving Copi noted that universal instantiation "...follows from variants of rules for 'natural deduction', which were devised independently by Gerhard Gentzen and Stanisław Jaśkowski in 1934."
Quine
According to Willard Van Orman Quine, universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that "∀x x = x" implies "Socrates = Socrates", we could as well say that the denial "Socrates ≠ Socrates" implies "∃x x ≠ x". The principle embodied in these two operations is the link between quantifications and the singular statements that are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names and, furthermore, occurs referentially.
See also
Existential instantiation
Existential quantification
References
Rules of inference
Predicate logic | Universal instantiation | [
"Mathematics"
] | 360 | [
"Predicate logic",
"Proof theory",
"Mathematical logic",
"Rules of inference",
"Basic concepts in set theory"
] |
1,564,205 | https://en.wikipedia.org/wiki/Real-time%20computer%20graphics | Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.
Computers have been capable of generating 2D images such as simple lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics.
Different techniques for rendering now exist, such as ray-tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience.
Principles of real-time 3D computer graphics
The goal of computer graphics is to generate computer-generated images, or frames, using certain desired metrics. One such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera to the world for detailed rendering—this expensive operation can take hours or days to render a single frame.
Real-time graphics systems must render each image in less than 1/30th of a second. Ray tracing is far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle gets positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for displaying on a display screen. The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and then shadow mapping can alter that triangle's colors based on line-of-sight to light sources.
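A minimal sketch of that rasterization loop in Python follows; it is illustrative only (real hardware parallelises this massively), and the resolution and vertices are invented for the example.

import numpy as np

W, H = 64, 64
color = np.zeros((H, W, 3), dtype=np.float32)
zbuf = np.full((H, W), np.inf)  # depth buffer; smaller z means closer

def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of edge (a, b) the point p lies on."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def draw_triangle(v0, v1, v2, rgb):
    """v = (x, y, z) in screen space; fill covered pixels passing the depth test."""
    area = edge(*v0[:2], *v1[:2], *v2[:2])
    if area == 0:
        return  # degenerate triangle
    for y in range(H):
        for x in range(W):
            w0 = edge(*v1[:2], *v2[:2], x, y)
            w1 = edge(*v2[:2], *v0[:2], x, y)
            w2 = edge(*v0[:2], *v1[:2], x, y)
            if (w0 >= 0) == (w1 >= 0) == (w2 >= 0):  # pixel inside triangle
                b0, b1, b2 = w0 / area, w1 / area, w2 / area
                z = b0 * v0[2] + b1 * v1[2] + b2 * v2[2]  # interpolated depth
                if z < zbuf[y, x]:  # z-buffer visibility test
                    zbuf[y, x] = z
                    color[y, x] = rgb

draw_triangle((5, 5, 1.0), (60, 10, 1.0), (30, 55, 1.0), (1, 0, 0))
draw_triangle((10, 30, 0.5), (55, 30, 0.5), (30, 5, 0.5), (0, 0, 1))  # nearer; wins overlap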
Video game graphics
Real-time graphics optimizes image quality subject to time and hardware constraints. GPUs and other advances increased the image quality that real-time graphics can produce. GPUs are capable of handling millions of triangles per frame, and modern DirectX/OpenGL class hardware is capable of generating complex effects, such as shadow volumes, motion blurring, and triangle generation, in real time. The advancement of real-time graphics is evidenced in the progressive improvements between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games. Cutscenes are now typically rendered in real time and may be interactive. Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate.
Advantages
Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. When real-time rendering is not required, as in film production, the director has complete control of what has to be drawn on each frame, which can sometimes involve lengthy decision-making. Teams of people are typically involved in the making of these decisions.
In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display's response time is far slower than the input device; this is justified by the immense difference between the (fast) response time of a human being's motion and the (slow) perceptive speed of the human visual system. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the current Wii remote) typically take much longer to achieve than comparable advancements in display devices.
Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen—especially where to draw objects in the scene. These techniques help realistically imitate real world behavior (the temporal dimension, not the spatial dimensions), adding to the computer graphics' degree of realism.
Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed. Some parameter adjustments in fractal generating software may be made while viewing changes to the image in real time.
Rendering pipeline
The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics. Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects (an object that has width, length, and depth), light sources, lighting models, textures and more.
Architecture
The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization.
Application stage
The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. This stage may perform processing such as collision detection, speed-up techniques, animation and force feedback, in addition to handling user input.
Collision detection is an example of an operation that would be performed in the application stage. Collision detection uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller.
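As a minimal sketch of one such application-stage check, the following tests axis-aligned bounding boxes (AABBs) for overlap, a common broad-phase collision query; the box shapes are invented for the example.

def aabb_overlap(a_min, a_max, b_min, b_max):
    """Two axis-aligned boxes overlap iff their extents overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

player = ((0.0, 0.0, 0.0), (1.0, 2.0, 1.0))  # (min corner, max corner)
crate = ((0.5, 0.0, 0.5), (1.5, 1.0, 1.5))
if aabb_overlap(player[0], player[1], crate[0], crate[1]):
    print("collision: compute response, e.g. trigger force feedback")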
The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline.
Geometry stage
The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs. Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages.
Model and view transformation
Before the final model is shown on the output device, the model is transformed onto multiple spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the four specific ways that manipulate the shape or position of a point, line or shape.
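A minimal sketch follows, assuming the four ways meant here are translation, rotation, scaling, and shearing (a common reading, not stated explicitly above), each expressed as a 4x4 homogeneous matrix acting on vertices.

import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4); m[:3, 3] = (tx, ty, tz); return m

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotate_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4); m[0, 0], m[0, 1], m[1, 0], m[1, 1] = c, -s, s, c
    return m

def shear_xy(k):
    m = np.eye(4); m[0, 1] = k  # x' = x + k*y
    return m

v = np.array([1.0, 1.0, 0.0, 1.0])  # a vertex in homogeneous coordinates
model = translate(2, 0, 0) @ rotate_z(np.pi / 2) @ scale(2, 2, 2)
print(model @ v)  # scale, then rotate, then translate the vertex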
Lighting
In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis with the y-axis pointing upwards and the x-axis pointing to the right.
Projection
Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic projection (also called parallel) and perspective projection. The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection utilizes the concept that if the distance between the observer and model increases, the model appears smaller than before. Essentially, perspective projection mimics human sight.
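A minimal sketch of a perspective projection matrix in the OpenGL convention (right-handed view space, camera looking down the negative z-axis); the field of view and plane distances are illustrative.

import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Map view-space points to clip space; w carries -z for the perspective divide."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

P = perspective(60.0, 16 / 9, 0.1, 100.0)
p_clip = P @ np.array([0.0, 0.0, -10.0, 1.0])  # point 10 units ahead of camera
print(p_clip[:3] / p_clip[3])  # normalized device coordinates in [-1, 1]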
Clipping
Clipping is the process of removing primitives that are outside of the view box in order to facilitate the rasterizer stage. Primitives wholly outside the view box are discarded, while partially visible primitives are clipped into new triangles that proceed to the next stage.
Screen mapping
The purpose of screen mapping is to determine the screen-space coordinates of the primitives that survive the clipping stage.
Rasterizer stage
The rasterizer stage applies color and turns the graphic elements into pixels or picture elements.
See also
Bounding interval hierarchy
Demoscene
Geometry instancing
Optical feedback
Quartz Composer
Real time (media)
Real-time raytracing
Tessellation (computer graphics)
Video art
Video display controller
References
Bibliography
External links
RTR Portal – a trimmed-down "best of" set of links to resources
Computer graphics
Computer graphics | Real-time computer graphics | [
"Technology"
] | 1,777 | [
"Real-time computing"
] |
1,564,226 | https://en.wikipedia.org/wiki/Aufbau%20principle | In atomic physics and quantum chemistry, the Aufbau principle (, from ), also called the Aufbau rule, states that in the ground state of an atom or ion, electrons first fill subshells of the lowest available energy, then fill subshells of higher energy. For example, the 1s subshell is filled before the 2s subshell is occupied. In this way, the electrons of an atom or ion form the most stable electron configuration possible. An example is the configuration for the phosphorus atom, meaning that the 1s subshell has 2 electrons, the 2s subshell has 2 electrons, the 2p subshell has 6 electrons, and so on.
The configuration is often abbreviated by writing only the valence electrons explicitly, while the core electrons are replaced by the symbol for the last previous noble gas in the periodic table, placed in square brackets. For phosphorus, the last previous noble gas is neon, so the configuration is abbreviated to [Ne] 3s2 3p3, where [Ne] signifies the core electrons whose configuration in phosphorus is identical to that of neon.
Electron behavior is elaborated by other principles of atomic physics, such as Hund's rule and the Pauli exclusion principle. Hund's rule asserts that if multiple orbitals of the same energy are available, electrons will occupy different orbitals singly and with the same spin before any are occupied doubly. If double occupation does occur, the Pauli exclusion principle requires that electrons that occupy the same orbital must have different spins (+ and −).
Passing from one element to another of the next higher atomic number, one proton and one electron are added each time to the neutral atom.
The maximum number of electrons in any shell is 2n2, where n is the principal quantum number.
The maximum number of electrons in a subshell is equal to 2(2ℓ + 1), where the azimuthal quantum number ℓ is equal to 0, 1, 2, and 3 for s, p, d, and f subshells, so that the maximum numbers of electrons are 2, 6, 10, and 14 respectively. In the ground state, the electronic configuration can be built up by placing electrons in the lowest available subshell until the total number of electrons added is equal to the atomic number. Thus subshells are filled in the order of increasing energy, using two general rules to help predict electronic configurations (a short sketch after this list implements both):
Electrons are assigned to subshells in order of increasing value of n + ℓ.
For subshells with the same value of n + ℓ, electrons are assigned first to the subshell with lower n.
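A minimal sketch of both rules, assuming only the 2(2ℓ + 1) subshell capacities stated above; the d- and f-block exceptions discussed below are deliberately ignored.

SUBSHELL_LETTERS = "spdfghik"  # spectroscopic labels for l = 0, 1, 2, ...

def madelung_order(max_n=8):
    """All subshells (n, l) sorted by n + l, then by n (the two rules above)."""
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(shells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def aufbau_configuration(z):
    """Idealised ground-state configuration for atomic number z."""
    parts = []
    for n, l in madelung_order():
        if z <= 0:
            break
        electrons = min(z, 2 * (2 * l + 1))  # subshell capacity 2(2l + 1)
        parts.append(f"{n}{SUBSHELL_LETTERS[l]}{electrons}")
        z -= electrons
    return " ".join(parts)

print(aufbau_configuration(15))  # phosphorus: 1s2 2s2 2p6 3s2 3p3
print(aufbau_configuration(81))  # thallium, in Madelung filling order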
A version of the aufbau principle known as the nuclear shell model is used to predict the configuration of protons and neutrons in an atomic nucleus.
Madelung energy ordering rule
In neutral atoms, the approximate order in which subshells are filled is given by the n + ℓ rule, also known as the:
Madelung rule (after Erwin Madelung)
Janet rule (after Charles Janet)
Klechkowsky rule (after Vsevolod Klechkovsky)
Wiswesser's rule (after William Wiswesser)
Moeller's rubric
aufbau (building-up) rule or
diagonal rule
Here n represents the principal quantum number and ℓ the azimuthal quantum number; the values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f subshells, respectively. Subshells with a lower n + ℓ value are filled before those with higher n + ℓ values. In the many cases of equal n + ℓ values, the subshell with a lower n value is filled first. The subshell ordering by this rule is 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, 8s, 5g, ... For example, thallium (Z = 81) has the ground-state configuration 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p1 or, in condensed form, [Xe] 6s2 4f14 5d10 6p1.
Other authors write the subshells outside of the noble gas core in order of increasing n, or if equal, increasing n + ℓ, such as Tl (Z = 81) [Xe] 4f14 5d10 6s2 6p1. They do so to emphasize that if this atom is ionized, electrons leave approximately in the order 6p, 6s, 5d, 4f, etc. On a related note, writing configurations in this way emphasizes the outermost electrons and their involvement in chemical bonding.
In general, subshells with the same n + ℓ value have similar energies, but the s-orbitals (with ℓ = 0) are exceptional: their energy levels are appreciably far from those of their n + ℓ group and are closer to those of the next n + ℓ group. This is why the periodic table is usually drawn to begin with the s-block elements.
The Madelung energy ordering rule applies only to neutral atoms in their ground state. There are twenty elements (eleven in the d-block and nine in the f-block) for which the Madelung rule predicts an electron configuration that differs from that determined experimentally, although the Madelung-predicted electron configurations are at least close to the ground state even in those cases.
One inorganic chemistry textbook describes the Madelung rule as essentially an approximate empirical rule although with some theoretical justification, based on the Thomas–Fermi model of the atom as a many-electron quantum-mechanical system.
Exceptions in the d-block
The valence d-subshell "borrows" one electron (in the case of palladium two electrons) from the valence s-subshell.
For example, in copper 29Cu, according to the Madelung rule, the 4s subshell (n + ℓ = 4 + 0 = 4) is occupied before the 3d subshell (n + ℓ = 3 + 2 = 5). The rule then predicts the electron configuration 1s2 2s2 2p6 3s2 3p6 4s2 3d9, abbreviated [Ar] 4s2 3d9, where [Ar] denotes the configuration of argon, the preceding noble gas. However, the measured electron configuration of the copper atom is [Ar] 3d10 4s1. By filling the 3d subshell, copper can be in a lower energy state.
A special exception is lawrencium 103Lr, where the 6d electron predicted by the Madelung rule is replaced by a 7p electron: the rule predicts [Rn] 5f14 7s2 6d1, but the measured configuration is [Rn] 5f14 7s2 7p1.
Exceptions in the f-block
The valence d-subshell often "borrows" one electron (in the case of thorium two electrons) from the valence f-subshell. For example, in uranium 92U, according to the Madelung rule, the 5f subshell (n + ℓ = 5 + 3 = 8) is occupied before the 6d subshell (n + ℓ = 6 + 2 = 8). The rule then predicts the electron configuration [Rn] 7s2 5f4, where [Rn] denotes the configuration of radon, the preceding noble gas. However, the measured electron configuration of the uranium atom is [Rn] 7s2 5f3 6d1.
All these exceptions are not very relevant for chemistry, as the energy differences are quite small and the presence of a nearby atom can change the preferred configuration. The periodic table ignores them and follows idealised configurations. They occur as the result of interelectronic repulsion effects; when atoms are positively ionised, most of the anomalies vanish.
The above exceptions are predicted to be the only ones until element 120, where the 8s shell is completed. Element 121, starting the g-block, should be an exception in which the expected 5g electron is transferred to 8p (similarly to lawrencium). After this, sources do not agree on the predicted configurations, but due to very strong relativistic effects there are not expected to be many more elements that show the expected configuration from Madelung's rule beyond 120. The general idea that after the two 8s elements, there come regions of chemical activity of 5g, followed by 6f, followed by 7d, and then 8p, does however mostly seem to hold true, except that relativity "splits" the 8p shell into a stabilized part (8p1/2, which acts like an extra covering shell together with 8s and is slowly drowned into the core across the 5g and 6f series) and a destabilized part (8p3/2, which has nearly the same energy as 9p1/2), and that the 8s shell gets replaced by the 9s shell as the covering s-shell for the 7d elements.
History
The aufbau principle in the new quantum theory
The principle takes its name from the German Aufbauprinzip, "building-up principle", rather than being named for a scientist. It was formulated by Niels Bohr in the early 1920s. This was an early application of quantum mechanics to the properties of electrons and explained chemical properties in physical terms. Each added electron is subject to the electric field created by the positive charge of the atomic nucleus and the negative charge of other electrons that are bound to the nucleus. Although in hydrogen there is no energy difference between subshells with the same principal quantum number n, this is not true for the outer electrons of other atoms.
In the old quantum theory prior to quantum mechanics, electrons were supposed to occupy classical elliptical orbits. The orbits with the highest angular momentum are "circular orbits" outside the inner electrons, but orbits with low angular momentum (s- and p-subshells) have high eccentricity, so that they get closer to the nucleus and feel on average a less strongly screened nuclear charge.
Wolfgang Pauli's model of the atom, including the effects of electron spin, provided a more complete explanation of the empirical aufbau rules.
The n + ℓ energy ordering rule
A periodic table in which each row corresponds to one value of n + ℓ (where the values of n and ℓ correspond to the principal and azimuthal quantum numbers respectively) was suggested by Charles Janet in 1928, and in 1930 he made explicit the quantum basis of this pattern, based on knowledge of atomic ground states determined by the analysis of atomic spectra. This table came to be referred to as the left-step table. Janet "adjusted" some of the actual n + ℓ values of the elements, since they did not accord with his energy ordering rule, and he considered that the discrepancies involved must have arisen from measurement errors. As it happens, the actual values were correct and the n + ℓ energy ordering rule turned out to be an approximation rather than a perfect fit, although for all elements that are exceptions the regularised configuration is a low-energy excited state, well within reach of chemical bond energies.
In 1936, the German physicist Erwin Madelung proposed this as an empirical rule for the order of filling atomic subshells, and most English-language sources therefore refer to the Madelung rule. Madelung may have been aware of this pattern as early as 1926. The Russian-American engineer Vladimir Karapetoff was the first to publish the rule in 1930, though Janet also published an illustration of it the same year.
In 1945, American chemist William Wiswesser proposed that the subshells are filled in order of increasing values of the function
W(n, ℓ) = n + ℓ − ℓ/(ℓ + 1).
This formula correctly predicts both the first and second parts of the Madelung rule (the second part being that for two subshells with the same value of n + ℓ, the one with the smaller value of n fills first). Wiswesser argued for this formula based on the pattern of both angular and radial nodes, the concept now known as orbital penetration, and the influence of the core electrons on the valence orbitals.
In 1961 the Russian agricultural chemist V.M. Klechkowski proposed a theoretical explanation for the importance of the sum n + ℓ, based on the Thomas–Fermi model of the atom. Many French- and Russian-language sources therefore refer to the Klechkowski rule.
The full Madelung rule was derived from a similar potential in 1971 by Yury N. Demkov and Valentin N. Ostrovsky. They considered a model potential U(r), depending on two constant parameters, that approaches a Coulomb potential for small r. When the strength parameter of the potential passes through a sequence of critical values, one for each value of N = n + ℓ, the zero-energy solutions to the Schrödinger equation for this potential can be described analytically with Gegenbauer polynomials. At each of these values, a manifold containing all states with that value of N arises at zero energy and then becomes bound, recovering the Madelung order. The application of perturbation theory shows that states with smaller n have lower energy, and that the s-orbitals (with ℓ = 0) have their energies approaching the next N = n + ℓ group.
In recent years it has been noted that the order of filling subshells in neutral atoms does not always correspond to the order of adding or removing electrons for a given atom. For example, in the fourth row of the periodic table, the Madelung rule indicates that the 4s subshell is occupied before the 3d. Therefore, the neutral atom ground state configuration for K is [Ar] 4s1, Ca is [Ar] 4s2, Sc is [Ar] 4s2 3d1 and so on. However, if a scandium atom is ionized by removing electrons (only), the configurations differ: Sc is [Ar] 4s2 3d1, Sc+ is [Ar] 3d1 4s1, and Sc2+ is [Ar] 3d1. The subshell energies and their order depend on the nuclear charge; 4s is lower than 3d as per the Madelung rule in K with 19 protons, but 3d is lower in Sc2+ with 21 protons. In addition to there being ample experimental evidence to support this view, it makes the explanation of the order of ionization of electrons in this and other transition metals more intelligible, given that 4s electrons are invariably preferentially ionized. Generally the Madelung rule should only be used for neutral atoms; however, even for neutral atoms there are exceptions in the d-block and f-block (as shown above).
See also
Ionization energy
References
Further reading
Image: Understanding order of shell filling
Boeyens, J. C. A.: Chemistry from First Principles. Berlin: Springer Science 2008,
External links
Electron Configurations, the Aufbau Principle, Degenerate Orbitals, and Hund's Rule from Purdue University
Electron states
Foundational quantum physics
Chemical bonding | Aufbau principle | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,977 | [
"Electron",
"Foundational quantum physics",
"Quantum mechanics",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Electron states"
] |
1,564,246 | https://en.wikipedia.org/wiki/Circassian%20beauty | The concept of Circassian beauty is an ethnic stereotype of the Circassian people. A fairly extensive literary history suggests that Circassian women were thought to be unusually attractive, spirited, smart, and elegant. Therefore, they were seen as mentally and physically desirable for men.
There are folk songs in various languages all around the Middle East and the Balkans describing the unusual beauty of Circassian women. This trend popularised greatly after the Circassian genocide, although the reputation of Circassian women dates back to the Late Middle Ages, when the Circassian coast was frequented by Italian traders from Genoa. This reputation was further reinforced by the Italian banker and politician Cosimo de' Medici (the founder of the Medici dynasty in the Republic of Florence), who conceived an illegitimate son with his Venice-based Circassian slave Maddalena. Additionally, the Circassian women who lived as slaves in the Ottoman harem, the Safavid harem, and the Qajar harem also developed a reputation as extremely beautiful, which then became a common trope of Orientalism throughout the Western world.
As a result of this reputation, Circassians in Europe and Northern America were often characterised as ideals of feminine beauty in poetry and art. Consequently, from the 18th century onward, cosmetic products were often advertised by using the word "Circassian" in the title or by claiming that the product was based on substances used by women in Circassia.
Many consorts and mothers of the Ottoman Sultans were ethnic Circassians, including, but not limited to: Mahidevran Hatun, Şevkefza Sultan, Rahime Perestu Sultan, Tirimujgan Kadin, Nükhetsezâ Hanim, Hümaşah Sultan, Bedrifelek Kadin, Bidar Kadin, Kamures Kadin, Servetseza Kadin, Bezmiara Kadin, Düzdidil Hanim, Hayranidil Kadin, Meyliservet Kadin, Mihrengiz Kadin, Neşerek Kadin, Nurefsun Kadin, Reftarıdil Kadin, Şayan Kadin, Gevherriz Hanim, Ceylanyar Hanim, Dilfirib Kadin, Nalanıdil Hanim, Nergizev Hanim, and Şehsuvar Kadın. It is likely that many other concubines, whose origin is not recorded, were also of Circassian ethnicity. The "golden age" of Circassian beauty may be considered to be between the 1770s, when the Russian Empire seized the Crimean Khanate and cut off the Black Sea slave trade, which increased the demand for Circassian women in Muslim harems; and the 1860s, when the Russian Empire perpetrated the Circassian genocide and destroyed the Circassians' ancestral homeland during the Russo-Circassian War, creating the modern-day Circassian diaspora. After 1854, almost all concubines in the Ottoman harem were of Circassian origin; the Circassians had been expelled from Russian-controlled lands in the 1860s, and the impoverished refugee parents sold their daughters in a trade that was tolerated despite being formally banned.
In the 1860s, the American showman P. T. Barnum exhibited women who he claimed were Circassian beauties. They had a distinctively curly style of big hair, which had no precedent in earlier portrayals of Circassians, but which was soon copied by other female performers, who became known as "moss-haired girls" in the United States. This hairstyle was a sort of exhibit's trademark and was achieved by washing the hair of women in beer, drying it, and then teasing it. It is not clear why Barnum chose this hairstyle; it may have been a reference to the standard Circassian fur hat, rather than the hair.
There were also several classical Turkish music pieces and poems praising the beauty of the Circassian ethnic group, such as "Lepiska Saçlı Çerkes" (); the word "Lepiska" refers to long and blonde hair that is straight, as if it was flat-ironed.
Circassian slave trade
From the Middle Ages until the 20th century, Circassian women were a major target for sexual slavery in the harems of the Islamic Middle East.
In the Middle Ages, the Black Sea slave traders bought slaves from a number of different ethnic groups in the Caucasus, such as Abkhazians, Mingrelians and Circassians.
During the early modern Crimean slave trade, the trade of Circassians from the Caucasus expanded and developed into what was termed a luxury slave trade route, providing elite slaves to the Ottoman Empire and the Middle East.
The Crimean slave trade was one of the biggest suppliers of concubines (female sex slaves) to the Ottoman Imperial Harem, and virgin slave girls (normally arriving as children) were given to the Sultan from local statesmen, family members, grand dignitaries and provincial governors, and particularly from the Crimean Khan; the Ottoman Sultan Ahmed III received one hundred Circassian virgin girl slaves as presents upon his accession to the throne.
When the Crimean slave trade was ended with the annexation of the Crimean Khanate by the Russian Empire in the 18th century, the trade of Circassians was redirected from Crimea and went directly from the Caucasus to the Ottoman Empire, developing into a separate slave trade which continued until the 20th century.
The Circassian slave trade was heavily (though not entirely) focused on slave-girls. In the Islamic empires of the Middle East, enslaved African black women – trafficked via the Trans-Saharan slave trade, the Red Sea slave trade and the Indian Ocean slave trade – were primarily used as domestic house slaves and not exclusively for sexual slavery. Conversely, white women, trafficked via the Black Sea slave trade and the Barbary slave trade, were highly sought after by Middle Eastern Muslim slave traders to be used as concubines (sex slaves) or wives.
It was commonly known that Circassian girls were mainly bought to become wives or concubines to rich men, which led the Circassian slave trade to be viewed as a form of marriage market, and it was commonly claimed in these regions that the Circassian girls were in fact eager to be enslaved by the Muslims and asked their parents to sell them to the traders because it was the only way for them to enhance their class status.
There was a tendency of apologetism by the Ottomans to claim that slavery was beneficial to the Circassians, since it delivered them from "primitivism to civilisation, from poverty and need to prosperity and happiness", and that they became slaves willingly: "Circassians came to Istanbul willingly 'to become wives of the Sultan and the Pachas, and the young men to become Beys and Pachas'".
The Middle East's preference for European white girls over African black girls as sex slaves was noted by the international press when the slave market was flooded by white girls in the 1850s due to the Circassian genocide, which caused the price of white slave girls to fall; Muslim men who had not been able to afford white girls before then exchanged their black slave women for white ones. The New York Daily Times reported on August 6, 1856:
"There has been lately an unusually large number of Circassians going about the streets of Constantinople. [...] They are here as slave dealers, charged with the disposal of the numerous parcels of Circassian girls that have been for some time pouring into this market. [...] ...never, perhaps, at any former period, was white human flesh so cheap as it is at this moment.In former times a “good middling” Circassian girl was thought very cheap at 100 pounds, but at the present moment the same description of goods may be had for 5 pounds! [...] Formerly a Circassian slave girl was pretty sure of being bought into a good family, where not only good treatment, but often rank and fortune awaited her; but at present low rates she may be taken by any huxter who never thought of keeping a slave before. Another evil is that the temptation to possess a Circassian girl at such low prices is so great in the minds of the Turks that many who cannot afford to keep several slaves have been sending their blacks to market, in order to make room for a newly-purchased white girl."
There was a greater reluctance from Ottoman authorities to prohibit the Circassian slave trade than the African slave trade, because the Circassian slave trade was regarded as in effect a marriage market, and it continued until the end of the Ottoman Empire after World War I.
Girls from the Caucasus and the Circassian colonies in Anatolia were still trafficked to other parts of the Middle East, especially the Arab world, in the 1920s; in 1928, at least 60 white slave girls were discovered being held for sexual purposes in Kuwait.
In the 1940s, it was reported that Baluchi girls were shipped via Oman to the rest of the Arabian Peninsula, where they were popular as concubines since Caucasian girls were no longer available, and were sold for $350–450 in Mecca.
The legal sex slave trade to the Middle East was ended with the abolition of slavery in Saudi Arabia, slavery in Dubai and slavery in Oman in the 1960s.
Literary allusions
The legend of Circassian women in the western world was enhanced in 1734, when, in his Letters on the English, Voltaire alludes to the beauty of Circassian women:
Their beauty is mentioned in Henry Fielding's Tom Jones (1749), in which Fielding remarked, "How contemptible would the brightest Circassian beauty, drest in all the jewels of the Indies, appear to my eyes!"
Similar claims about Circassian women appear in Lord Byron's Don Juan (1818–1824), in which the tale of a slave auction is told:
The legend of Circassian women was also repeated by legal theorist Gustav Hugo, who wrote that "Even beauty is more likely to be found in a Circassian slave girl than in a beggar girl", referring to the fact that even a slave has some security and safety, but a "free" beggar has none. Hugo's comment was later condemned by Karl Marx in The Philosophical Manifesto of the Historical School of Law (1842) on the grounds that it excused slavery. Mark Twain reported in The Innocents Abroad (1869) that "Circassian and Georgian girls are still sold in Constantinople, but not publicly."
American travel author and diplomat Bayard Taylor in 1862 claimed that, "So far as female beauty is concerned, the Circassian women have no superiors. They have preserved in their mountain home the purity of the Grecian models, and still display the perfect physical loveliness, whose type has descended to us in the Venus de' Medici."
Circassian features
Circassian women
An anthropological literature suggests that Circassians were best characterized by what was called "rosy pale" or "translucent white skin". While most Circassian tribes were famous for an abundance of fair or dark blond and red hair combined with greyish-blue or green eyes, many also had the pairing of very dark hair with very light complexions, a typical feature of peoples of the Caucasus. Many of the Circassian women in the Ottoman harem were described as having "green eyes and long, dark blond hair, pale skin of translucent white colour, thin waist, slender body structure, and very good-looking hands and feet". The fact that Circassian women were traditionally encouraged to wear corsets in order to keep their posture straight might have shaped their wasp waist as a result. In the late 18th century, it was claimed by Western European couturiers that "the Circassian Corset is the only one which displays, without indelicacy, the shape of the bosom to the greatest possible advantage; gives a width to the chest which is equally conducive to health and elegance of appearance".
It has also been suggested that a lithe and erect physique were favored for Circassians, and many villages had large numbers of healthy elderly people, many over a hundred years of age.
Maturin Murray Ballou described Circassians as being of the "fair and rosy-cheeked race", and "with a form of ravishing loveliness, large and lustrous eyes, and every belonging that might go to make up a Venus".
In Henry Lindlahr's words in the early 20th century, "Blue-eyed Caucasian regiments today form the cream of the Sultan's army. Circassian beauties are admired for their abundant and luxuriant yellow hair and blue eyes."
In his book A Year Among the Circassians, John Augustus Longworth describes a Circassian girl of typical Circassian features as follows:
It is also understood from the memoirs of Princess Emily Ruete, a half-Circassian and half-Omani herself, that Circassian women, who were bought in Constantinople and brought via the Circassian slave trade to slavery in Zanzibar for the harem of Zanzibari Said bin Sultan, Sultan of Muscat and Oman, were envied by their rivals who considered Circassians to be of the "hateful race of blue-eyed cats".
Regarding one of her half-sisters who was also from a Circassian mother, Princess Ruete of Zanzibar mentions that "The daughter of a Circassian was a dazzling beauty with the complexion of a German blonde. Besides, she possessed a sharp intellect, which made her into a faithful advisor of my father's."
The characteristics of Circassian and Georgian women were further articulated in 1839 by the author Emma Reeve who, as stated by Joan DelPlato, differentiated "between 'the blond Circassians' who are 'indolent and graceful, their voices low and sweet' and what she calls the slightly darker-skinned Georgians who are 'more animated' and have more 'intelligence and vivacity' than their delicate rivals".
Similar descriptions of the Circassian women appear in Florence Nightingale's travel journal where Nightingale called Circassians "the most graceful and the most sensual-looking creatures I ever saw".
According to the feminist Harriet Martineau, Circassians trafficked into slavery in Egypt were the only saving virtue of the Egyptian harem: these Circassian mothers produced the finest children, and were they to be excluded from the harem, the upper class in Egypt would be doomed. The sex slave trade of "white women" (normally Circassians) to the Egyptian harems was explicitly banned, after pressure by the British, in the Anglo-Egyptian Slave Trade Convention of 1884.
In parts of Europe and North America where blond hair was more common, the pairing of extremely white skin with very dark hair, also present among some Circassians, was exalted, even in Russia, which was at war with the Circassians; Semyon Bronevskii praised Circassian women for having light skin, dark brown hair, dark eyes and "the lineaments of the face of the Ancient Greek". In the United States, the girls disguised as "Circassians" exhibited by Phineas T. Barnum were in fact Catholic Irish girls from Lower Manhattan.
Circassian men
Circassian men were also exalted for their beauty, manliness, and bravery in Western Europe, in a way Caucasus historian Charles King calls "homoerotic". In 1862, Circassian chiefs arrived in Scotland to advocate their cause against Russia and to persuade Britain to stop the actions of the Russian army at that time; upon the arrival of two Circassian leaders, Hadji Hayder Hassan and Kustan Ogli Ismael, the Dundee Advertiser reported admiringly on their appearance.
Pseudoscientific explanations for fair skin
During the 19th century, various Western intellectuals offered pseudoscientific explanations for the light complexion present among Circassians. The doctor of medicine Hugh Williamson, a signatory to the United States Constitution, argued that the extreme whiteness of the Circassian and coastal Celto-Germanic peoples could be explained by the geographical location of their ancestral homelands, which lie in high latitudes ranging from 45° to 55° N near a sea or ocean where westerlies prevail.
According to Voltaire, the practice of inoculation (see also variolation, an early form of vaccination) resulted in the Circassians having skin free of smallpox scars.
Pseudoscientific racialist theories
By the early 19th century, Circassians were associated with theories of racial hierarchy, which elevated the Caucasus region as the source of the purest examples of the "white race", which was named the Caucasian race after the area by German physiologist and anthropologist Johann Friedrich Blumenbach. Blumenbach theorised that the Circassians were the closest to God's original model of humanity, and thus "the purest and most beautiful whites were the Circassians". This fuelled the idea of female Circassian beauty.
In 1873, a decade after the expulsion of Circassians from the Caucasus (where only a minority of them live today), it was argued that "the Caucasian Race receives its name from the Caucasus, the abode of the Circassians who are said to be the handsomest and best-formed nation, not only of this race, but of the whole human family." Another anthropologist, William Guthrie, distinguished the Caucasian race, and the "Circassians who are admired for their beauty" in particular, by the oval form of the head, straight nose, thin lips, vertically placed teeth, a facial angle of 80 to 90 degrees that he called the most developed, and their regular features overall, which "causes them to be considered as the most handsome and agreeable".
American travel writer Bayard Taylor observed Circassian women during his trip to the Ottoman Empire and argued that "the Circassian face is a pure oval; the forehead is low and fair, an excellent thing in woman, and the skin of an ivory whiteness, except the faint pink of the cheeks and the ripe, roseate stain of the lips."
Circassians are depicted in images of harems at this time through these ideologies of racial hierarchy. English painter John Frederick Lewis's The Harem portrays Circassians as the dominant mistresses of the harem, who look down on other women, as implied in the review of the painting in The Art Journal, which described it as follows:
It represents the interior of a harem and slaves at Cairo, wherein is seated in luxurious ease a young man, attired in the excess of Moslem fashion. Near him, and reclining upon cushions, are two European Circassian women, whom also dressed in the extremity of Egyptian Oriental taste of Cairo ... On the right is seen a tall Nubian eunuch, who removes from the shoulders of an African Black slave the shawl by which she had been covered, in order to show her to the master of the harem; this figure with her high shoulders and the characteristics of her features, is a most successful national impersonation. The Circassian women look languidly to the African with an expression of supreme contempt, which is responded to by a sneer on the face of the Nubian eunuch.
Orientalizing paintings of nudes were also sometimes exhibited as "Circassians".
The Circassians became major news during the Caucasian War, in which Russia conquered the North Caucasus, displacing large numbers of Circassians southwards. In 1856 The New York Times published a report entitled "Horrible Traffic in Circassian Women – Infanticide in Turkey", asserting that a consequence of the Russian conquest of the Caucasus was an excess of beautiful Circassian women on the Constantinople slave market, and that this was causing prices of slaves in general to plummet. The story drew on ideas of racial hierarchy, stating that:
The temptation to possess a Circassian girl at such low prices is so great in the minds of the Turks that many who cannot afford to keep several slaves have been sending their blacks to market, in order to make room for a newly purchased white girl.
The article also claimed that children born to the "inferior" black concubines were being killed. This story drew widespread attention to the area, as did later conflicts.
At the same time writers and illustrators were also creating images depicting the authentic costumes and people of the Caucasus. Francis Davis Millet depicted Circassian women during his 1877 coverage of the Russo-Turkish war, specifying local costume and hairstyle.
Advertising of beauty products
An advertisement from 1782 titled "Bloom of Circassia" makes clear that it was by then well established "that the Circassians are the most beautiful Women in the World", but goes on to reveal that they "derive not all their Charms from Nature". They used a concoction supposedly extracted from a vegetable native to Circassia. Knowledge of this "Liquid Bloom" had been brought back by a "well-regarded gentleman" who had traveled and lived in the region. It "instantly gives a Rosy Hue to the Cheeks", a "lively and animated Bloom of Rural Beauty" that would not disappear in perspiration or handkerchiefs.
In 1802 "the Balm of Mecca" was also marketed as being used by Circassians:
"This delicate as well as fragrant composition has been long celebrated as the summit of cosmetics by all the Circassian and Georgian women in the seraglio of the Grand Sultan". It claims that the product was endorsed by Lady Mary Wortley Montague who stated that it was very helpful "for removing those sebacious impurities so noxious to beauty". The article continues:
"Circassian Lotion" was sold in 1806 for the skin, at fifty cents the bottle.
"Circassian Eye-Water" was marketed as "a sovereign remedy for all diseases of the eyes", and in the 1840s "Circassian hair dye" was marketed to create a rich dark lustrous effect.
Nineteenth-century sideshow attraction
The combination of the popular issues of slavery, the Orient, racial ideology, and sexual titillation gave the reports of Circassian women sufficient notoriety at the time that the circus leader P. T. Barnum decided to capitalize on this interest. He displayed a "Circassian Beauty" at his American Museum in 1865. Barnum's Circassian beauties were young women with tall, teased hairstyles, rather like the Afro style of the 1970s. Actual Circassian hairstyles bore no resemblance to Barnum's fantasy. Barnum's first "Circassian" was marketed under the name "Zalumma Agra" and was exhibited at his American Museum in New York from 1864. Barnum had written to John Greenwood, his agent in Europe, asking him to purchase a beautiful Circassian girl to exhibit, or at least to hire a girl who could "pass for" one. However, it seems that "Zalumma Agra" was probably a local girl hired by the show, as were later "Circassians".  Barnum also produced a booklet about another of his Circassians, Zoe Meleke, who was portrayed as an ideally beautiful and refined woman who had escaped a life of sexual slavery.
The portrayal of a white woman as a rescued slave at the time of the American Civil War played on the racial connotations of slavery at the time. It has been argued that the distinctive hairstyle affiliates the side-show Circassian with African identity, and thus,
resonates oddly yet resoundingly with the rest of her identifying significations: her racial purity, her sexual enslavement, her position as colonial subject; her beauty. The Circassian blended elements of white Victorian True Womanhood with traits of the enslaved African American woman in one curiosity.
The trend spread, with supposedly Circassian women featured in dime museums and travelling medicine shows, sometimes known as "Moss-haired girls". They were typically identified by the distinctive hairstyle, which was held in place by the use of beer. They also often performed in pseudo-oriental costume. Many postcards of Circassians also circulated. Though Barnum's original women were portrayed as proud and genteel, later images of Circassians often emphasised erotic poses and revealing costumes. As the original fad faded, the "Circassians" started to add to their appeal by performing traditional circus tricks such as sword swallowing.
In popular culture
The Safety Fire, a British progressive metal band, released the song "Circassian Beauties" in 2012.
The American alternative rock band Monks of Doom released the song "Circassian Beauty" on their 1991 album Meridian.
See also
White slave trade
Barbary slave trade
Ottoman Imperial Harem
La Circassienne au Bain, an 1814 painting by Merry-Joseph Blondel, famously lost with the Titanic
References
Further reading
Natalia Królikowska-Jedlinska. 2020. "The Role of Circassian Slaves in the Foreign and Domestic Policy of the Crimean Khanate in the Early Modern Period." in Slaves and Slave Agency in the Ottoman Empire, edited by Stephan Conermann, Gül Şen. V&R unipress and Bonn University Press.
External links
The Circassian Beauty Archive
Circassian Women, from showhistory.com
Circassians
Corsetry
Female beauty
Medieval ethnic groups of Europe
Orientalism
Ottoman imperial harem
Physical attractiveness
Qajar harem
Race and society
Royal consorts
Safavid imperial harem
Scientific racism
Sexual slavery
Stereotypes of white people
Women in art
Aftermath of the Circassian genocide | Circassian beauty | [
"Biology"
] | 5,350 | [
"Biology theories",
"Obsolete biology theories",
"Scientific racism"
] |
1,564,380 | https://en.wikipedia.org/wiki/Disjoint%20union%20%28topology%29 | In general topology and related areas of mathematics, the disjoint union (also called the direct sum, free union, free sum, topological sum, or coproduct) of a family of topological spaces is a space formed by equipping the disjoint union of the underlying sets with a natural topology called the disjoint union topology. Roughly speaking, in the disjoint union the given spaces are considered as part of a single new space where each looks as it would alone and they are isolated from each other.
The name coproduct originates from the fact that the disjoint union is the categorical dual of the product space construction.
Definition
Let {X_i : i ∈ I} be a family of topological spaces indexed by I. Let
X = ∐_{i ∈ I} X_i
be the disjoint union of the underlying sets (realized, for instance, as the union of the sets X_i × {i}). For each i in I, let
φ_i : X_i → X
be the canonical injection (defined by φ_i(x) = (x, i)). The disjoint union topology on X is defined as the finest topology on X for which all the canonical injections φ_i are continuous (i.e.: it is the final topology on X induced by the canonical injections).
Explicitly, the disjoint union topology can be described as follows. A subset U of X is open in X if and only if its preimage φ_i⁻¹(U) is open in X_i for each i ∈ I. Yet another formulation is that a subset V of X is open relative to X iff its intersection with X_i is open relative to X_i for each i.
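To make the open-set criterion concrete, the following minimal sketch (not part of the article; the two finite spaces and the helper names are invented for illustration) enumerates the disjoint union topology of two small finite spaces in Python, using the tagged-point convention φ_i(x) = (x, i):

```python
from itertools import combinations

def powerset(points):
    """All subsets of `points`, as frozensets."""
    pts = list(points)
    return [frozenset(c) for r in range(len(pts) + 1)
            for c in combinations(pts, r)]

# Two small topological spaces (a Sierpinski-like space and a point).
X1, T1 = {"a", "b"}, {frozenset(), frozenset({"a"}), frozenset({"a", "b"})}
X2, T2 = {"c"}, {frozenset(), frozenset({"c"})}

# Disjoint union of the underlying sets: tag each point with its index,
# so the canonical injection is phi_i(x) = (x, i).
X = {(x, 1) for x in X1} | {(x, 2) for x in X2}

def preimage(U, i, Xi):
    """phi_i^{-1}(U) = {x in Xi : (x, i) in U}."""
    return frozenset(x for x in Xi if (x, i) in U)

# U is open in X iff phi_i^{-1}(U) is open in X_i for every i.
T = {U for U in powerset(X)
     if preimage(U, 1, X1) in T1 and preimage(U, 2, X2) in T2}

print(len(T))               # 6 = 3 * 2: one choice of open set per summand
print(frozenset(X) in T)    # True: the whole space is open
```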
Properties
The disjoint union space X, together with the canonical injections, can be characterized by the following universal property: if Y is a topological space and f_i : X_i → Y is a continuous map for each i ∈ I, then there exists precisely one continuous map f : X → Y such that f ∘ φ_i = f_i for every i, i.e. the corresponding diagrams commute.
This shows that the disjoint union is the coproduct in the category of topological spaces. It follows from the above universal property that a map f : X → Y is continuous iff f_i = f ∘ φ_i is continuous for all i in I.
In addition to being continuous, the canonical injections φi : Xi → X are open and closed maps. It follows that the injections are topological embeddings so that each Xi may be canonically thought of as a subspace of X.
Examples
If each Xi is homeomorphic to a fixed space A, then the disjoint union X is homeomorphic to the product space A × I where I has the discrete topology.
Preservation of topological properties
Every disjoint union of discrete spaces is discrete
Separation
Every disjoint union of T0 spaces is T0
Every disjoint union of T1 spaces is T1
Every disjoint union of Hausdorff spaces is Hausdorff
Connectedness
The disjoint union of two or more nonempty topological spaces is disconnected
See also
product topology, the dual construction
subspace topology and its dual quotient topology
topological union, a generalization to the case where the pieces are not disjoint
References
General topology | Disjoint union (topology) | [
"Mathematics"
] | 638 | [
"General topology",
"Topology"
] |
1,564,394 | https://en.wikipedia.org/wiki/Electromagnetic%20shielding | In electrical engineering, electromagnetic shielding is the practice of reducing or redirecting the electromagnetic field (EMF) in a space with barriers made of conductive or magnetic materials. It is typically applied to enclosures, for isolating electrical devices from their surroundings, and to cables to isolate wires from the environment through which the cable runs (). Electromagnetic shielding that blocks radio frequency (RF) electromagnetic radiation is also known as RF shielding.
EMF shielding serves to minimize electromagnetic interference. The shielding can reduce the coupling of radio waves, electromagnetic fields, and electrostatic fields. A conductive enclosure used to block electrostatic fields is also known as a Faraday cage. The amount of reduction depends very much upon the material used, its thickness, the size of the shielded volume, the frequency of the fields of interest, and the size, shape, and orientation of any apertures in the shield relative to the incident electromagnetic field.
Materials used
Typical materials used for electromagnetic shielding include thin layers of metal, sheet metal, metal screen, and metal foam. Common sheet metals for shielding include copper, brass, nickel, silver, steel, and tin. Shielding effectiveness, that is, how well a shield reflects or absorbs/suppresses electromagnetic radiation, is affected by the physical properties of the metal. These may include conductivity, solderability, permeability, thickness, and weight. A metal's properties are an important consideration in material selection. For example, electrically dominant waves are reflected by highly conductive metals like copper, silver, and brass, while magnetically dominant waves are absorbed/suppressed by a less conductive metal such as steel or stainless steel. Further, any holes in the shield or mesh must be significantly smaller than the wavelength of the radiation that is being kept out, or the enclosure will not effectively approximate an unbroken conducting surface.
Another commonly used shielding method, especially with electronic goods housed in plastic enclosures, is to coat the inside of the enclosure with a metallic ink or similar material. The ink consists of a carrier material loaded with a suitable metal, typically copper or nickel, in the form of very small particulates. It is sprayed on to the enclosure and, once dry, produces a continuous conductive layer of metal, which can be electrically connected to the chassis ground of the equipment, thus providing effective shielding.
Copper is used for radio frequency (RF) shielding because it absorbs radio and other electromagnetic waves. Properly designed and constructed RF shielding enclosures satisfy most RF shielding needs, from computer and electrical switching rooms to hospital CAT-scan and MRI facilities.
EMI (electromagnetic interference) shielding is of great research interest and several new types of nanocomposites made of ferrites, polymers, and 2D materials are being developed to obtain more efficient RF/microwave-absorbing materials (MAMs). EMI shielding is often achieved by electroless plating of copper as most popular plastics are non-conductive or by special conductive paint.
Example of applications
One example is a shielded cable, which has electromagnetic shielding in the form of a wire mesh surrounding an inner core conductor. The shielding impedes the escape of any signal from the core conductor, and also prevents signals from being added to the core conductor.
Some cables have two separate coaxial screens, one connected at both ends, the other at one end only, to maximize shielding of both electromagnetic and electrostatic fields.
The door of a microwave oven has a screen built into the window. From the perspective of the microwaves (with a wavelength of about 12 cm), this screen completes the Faraday cage formed by the oven's metal housing. Visible light, with wavelengths ranging between 400 nm and 700 nm, passes easily through the screen holes.
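A quick back-of-the-envelope check of those numbers (a sketch only; the 2.45 GHz operating frequency and the 1 mm hole size are assumed typical values, not figures from this article):

```python
c = 299_792_458      # speed of light, m/s
f_oven = 2.45e9      # usual consumer microwave oven frequency, Hz (assumed)
hole = 1e-3          # representative screen perforation size, m (assumed)

wavelength = c / f_oven
print(f"microwave wavelength: {wavelength * 100:.1f} cm")       # ~12.2 cm
print(f"hole / microwave wavelength: {hole / wavelength:.4f}")  # << 1: blocked
print(f"hole / red-light wavelength: {hole / 700e-9:,.0f}")     # >> 1: passes
```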
RF shielding is also used to prevent access to data stored on RFID chips embedded in various devices, such as biometric passports.
NATO specifies electromagnetic shielding for computers and keyboards to prevent passive monitoring of keyboard emissions that would allow passwords to be captured; consumer keyboards do not offer this protection primarily because of the prohibitive cost.
RF shielding is also used to protect medical and laboratory equipment to provide protection against interfering signals, including AM, FM, TV, emergency services, dispatch, pagers, ESMR, cellular, and PCS. It can also be used to protect the equipment at the AM, FM or TV broadcast facilities.
Another example of the practical use of electromagnetic shielding would be defense applications. As technology improves, so does the susceptibility to various types of nefarious electromagnetic interference. The idea of encasing a cable inside a grounded conductive barrier can provide mitigation to these risks.
How it works
Electromagnetic radiation consists of coupled electric and magnetic fields. The electric field produces forces on the charge carriers (i.e., electrons) within the conductor. As soon as an electric field is applied to the surface of an ideal conductor, it induces a current that causes displacement of charge inside the conductor that cancels the applied field inside, at which point the current stops. See Faraday cage for more explanation.
Similarly, varying magnetic fields generate eddy currents that act to cancel the applied magnetic field. (The conductor does not respond to static magnetic fields unless the conductor is moving relative to the magnetic field.) The result is that electromagnetic radiation is reflected from the surface of the conductor: internal fields stay inside, and external fields stay outside.
Several factors serve to limit the shielding capability of real RF shields. One is that, due to the electrical resistance of the conductor, the excited field does not completely cancel the incident field. Also, most conductors exhibit a ferromagnetic response to low-frequency magnetic fields, so that such fields are not fully attenuated by the conductor. Any holes in the shield force current to flow around them, so that fields passing through the holes do not excite opposing electromagnetic fields. These effects reduce the field-reflecting capability of the shield.
In the case of high-frequency electromagnetic radiation, the above-mentioned adjustments take a non-negligible amount of time, yet any such radiation energy, as far as it is not reflected, is absorbed within a thin surface layer of the conductor (unless the conductor is extremely thin), so in this case there is no electromagnetic field inside either. This is one aspect of a greater phenomenon called the skin effect. A measure of the depth to which radiation can penetrate the shield is the so-called skin depth.
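For a good conductor the skin depth can be estimated with the standard formula δ = √(2ρ/(ωμ)). The following sketch evaluates it at a few frequencies, using a handbook resistivity for copper (an assumed value, not one from this article):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
RHO_CU = 1.68e-8           # copper resistivity, ohm*m (assumed handbook value)

def skin_depth(freq_hz, resistivity=RHO_CU, mu_r=1.0):
    """delta = sqrt(2*rho / (omega * mu0 * mu_r)) for a good conductor."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity / (omega * MU0 * mu_r))

for f in (50, 1e6, 2.45e9):            # mains, MHz range, microwave oven
    print(f"{f:>10.3g} Hz: {skin_depth(f) * 1e3:.4f} mm")
# 50 Hz -> ~9.2 mm; 1 MHz -> ~0.065 mm; 2.45 GHz -> ~0.0013 mm
```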
Magnetic shielding
Equipment sometimes requires isolation from external magnetic fields. For static or slowly varying magnetic fields (below about 100 kHz) the Faraday shielding described above is ineffective. In these cases shields made of high magnetic permeability metal alloys can be used, such as sheets of permalloy and mu-metal or with nanocrystalline grain structure ferromagnetic metal coatings. These materials do not block the magnetic field, as with electric shielding, but rather draw the field into themselves, providing a path for the magnetic field lines around the shielded volume. The best shape for magnetic shields is thus a closed container surrounding the shielded volume. The effectiveness of this type of shielding depends on the material's permeability, which generally drops off at both very low magnetic field strengths and high field strengths where the material becomes saturated. Therefore, to achieve low residual fields, magnetic shields often consist of several enclosures, one inside the other, each of which successively reduces the field inside it. Entry holes within shielding surfaces may degrade their performance significantly.
Because of the above limitations of passive shielding, an alternative used with static or low-frequency fields is active shielding, in which a field created by electromagnets cancels the ambient field within a volume. Solenoids and Helmholtz coils are types of coils that can be used for this purpose, as well as more complex wire patterns designed using methods adapted from those used in coil design for magnetic resonance imaging. Active shields may also be designed accounting for the electromagnetic coupling with passive shields, referred to as hybrid shielding, so that there is broadband shielding from the passive shield and additional cancellation of specific components using the active system.
Additionally, superconducting materials can expel magnetic fields via the Meissner effect.
Mathematical model
Suppose that we have a spherical shell of a (linear and isotropic) diamagnetic material with relative permeability \mu_r, with inner radius a and outer radius b. We then put this object in a constant magnetic field:
\vec{H}_0 = H_0 \hat{z}
Since there are no currents in this problem except for possible bound currents on the boundaries of the diamagnetic material, we can define a magnetic scalar potential \Phi_M that satisfies Laplace's equation:
\nabla^2 \Phi_M = 0
where
\vec{H} = -\nabla \Phi_M
In this particular problem there is azimuthal symmetry, so we can write down that the solution to Laplace's equation in spherical coordinates is:
\Phi_M(r, \theta) = \sum_{\ell = 0}^{\infty} \left( A_\ell r^\ell + \frac{B_\ell}{r^{\ell + 1}} \right) P_\ell(\cos\theta)
After matching the boundary conditions
(\vec{H}_2 - \vec{H}_1) \times \hat{n} = 0, \qquad (\vec{B}_2 - \vec{B}_1) \cdot \hat{n} = 0
at the boundaries (where \hat{n} is a unit vector that is normal to the surface pointing from side 1 to side 2), we find that the magnetic field inside the cavity in the spherical shell is:
\vec{H}_{in} = \eta \vec{H}_0
where \eta is an attenuation coefficient that depends on the thickness of the diamagnetic material and the magnetic permeability of the material:
\eta = \frac{9 \mu_r}{(2 \mu_r + 1)(\mu_r + 2) - 2 \left( \frac{a}{b} \right)^3 (\mu_r - 1)^2}
This coefficient describes the effectiveness of this material in shielding the external magnetic field from the cavity that it surrounds. Notice that this coefficient appropriately goes to 1 (no shielding) in the limit that \mu_r \to 1. In the limit that \mu_r \to \infty this coefficient goes to 0 (perfect shielding). When \mu_r \gg 1, the attenuation coefficient takes on the simpler form:
\eta \approx \frac{9}{2 \mu_r \left( 1 - \frac{a^3}{b^3} \right)}
which shows that the magnetic field decreases like 1/\mu_r.
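As a numeric illustration of the attenuation coefficient (a sketch; the shell dimensions and the permeability are assumed round values for a mu-metal-like alloy, not data from this article):

```python
def attenuation(mu_r, a, b):
    """Static shielding factor eta for a spherical shell:
    H_inside = eta * H_applied, with inner radius a and outer radius b."""
    ratio = (a / b) ** 3
    return 9 * mu_r / ((2 * mu_r + 1) * (mu_r + 2)
                       - 2 * ratio * (mu_r - 1) ** 2)

print(attenuation(1.0, 0.09, 0.10))     # 1.0: mu_r = 1 gives no shielding
print(attenuation(50_000, 0.09, 0.10))  # ~3.3e-4: strong shielding

# Large-permeability approximation eta ~ 9 / (2 * mu_r * (1 - (a/b)**3)):
print(9 / (2 * 50_000 * (1 - (0.09 / 0.10) ** 3)))  # ~3.3e-4, matches
```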
See also
Electromagnetic interference
Electromagnetic radiation and health
Radiation
Ionising radiation protection
Mu-metal
MRI RF shielding
Permalloy
Electric field screening
Faraday cage
Anechoic chamber
Plasma window
References
External links
All about Mu Metal Permalloy material
Mu Metal Shieldings Frequently asked questions (FAQ by MARCHANDISE, Germany)
Clemson Vehicular Electronics Laboratory: Shielding Effectiveness Calculator
Shielding Issues for Medical Products (PDF) — ETS-Lindgren Paper
Practical Electromagnetic Shielding Tutorial
Simulation of Electromagnetic Shielding in the COMSOL Multiphysics Environment
Magnetoencephalography
Radio electronics
Electromagnetic radiation
Electromagnetic compatibility | Electromagnetic shielding | [
"Physics",
"Engineering"
] | 2,061 | [
"Electromagnetic compatibility",
"Physical phenomena",
"Radio electronics",
"Electromagnetic radiation",
"Radiation",
"Electrical engineering"
] |
1,564,401 | https://en.wikipedia.org/wiki/Multiple%20drug%20resistance | Multiple drug resistance (MDR), multidrug resistance or multiresistance is antimicrobial resistance shown by a species of microorganism to at least one antimicrobial drug in three or more antimicrobial categories. Antimicrobial categories are classifications of antimicrobial agents based on their mode of action and specific to target organisms. The MDR types most threatening to public health are MDR bacteria that resist multiple antibiotics; other types include MDR viruses, parasites (resistant to multiple antifungal, antiviral, and antiparasitic drugs of a wide chemical variety).
Recognizing different degrees of MDR in bacteria, the terms extensively drug-resistant (XDR) and pandrug-resistant (PDR) have been introduced. Extensive drug resistance (XDR) is the non-susceptibility of a bacterial species to all antimicrobial agents except in two or fewer antimicrobial categories. Within XDR, pandrug resistance (PDR) is the non-susceptibility of bacteria to all antimicrobial agents in all antimicrobial categories. The definitions were published in 2011 in the journal Clinical Microbiology and Infection and are openly accessible.
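These definitions reduce to a simple decision rule over a tested resistance profile. The sketch below conveys the rule only: the category and drug names are illustrative, and real classification uses the standardized, species-specific category lists from the 2011 proposal.

```python
def classify(profile):
    """Classify an isolate from {category: {agent: non_susceptible}}."""
    n = len(profile)
    # Categories with at least one non-susceptible agent:
    hit = [c for c, agents in profile.items() if any(agents.values())]
    # Categories in which every tested agent is non-susceptible:
    full = [c for c, agents in profile.items() if all(agents.values())]

    if len(full) == n:
        return "PDR"   # non-susceptible to all agents in all categories
    if len(hit) >= n - 2:
        return "XDR"   # susceptible in at most two categories
    if len(hit) >= 3:
        return "MDR"   # >= 1 agent in >= 3 categories
    return "not multidrug-resistant"

example = {   # toy profile over six categories (names illustrative only)
    "aminoglycosides":   {"gentamicin": True, "amikacin": False},
    "fluoroquinolones":  {"ciprofloxacin": True},
    "carbapenems":       {"meropenem": True},
    "polymyxins":        {"colistin": False},
    "tetracyclines":     {"doxycycline": False},
    "folate inhibitors": {"co-trimoxazole": False},
}
print(classify(example))   # MDR: non-susceptible in 3 of 6 categories
```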
Common multidrug-resistant organisms (MDROs)
Common multidrug-resistant organisms, typically bacteria, include:
Vancomycin-Resistant Enterococci (VRE)
Methicillin-resistant Staphylococcus aureus (MRSA)
Extended-spectrum β-lactamase (ESBLs) producing Gram-negative bacteria
Klebsiella pneumoniae carbapenemase (KPC) producing Gram-negatives
Multidrug-resistant Gram-negative rods (MDR GNR, also called MDRGN bacteria) such as Enterobacter species, E. coli, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa
Multi-drug-resistant tuberculosis
Overlapping with MDRGN, a group of Gram-positive and Gram-negative bacteria of particular recent importance has been dubbed the ESKAPE group (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter species).
Bacterial resistance to antibiotics
Various microorganisms have survived for thousands of years by their ability to adapt to antimicrobial agents. They do so via spontaneous mutation or by DNA transfer. This process enables some bacteria to oppose the action of certain antibiotics, rendering the antibiotics ineffective. These microorganisms employ several mechanisms in attaining multi-drug resistance:
No longer relying on a peptidoglycan cell wall
Enzymatic deactivation of antibiotics
Decreased cell wall permeability to antibiotics
Altered target sites of antibiotic
Efflux mechanisms to remove antibiotics
Increased mutation rate as a stress response
Many different bacteria now exhibit multi-drug resistance, including staphylococci, enterococci, gonococci, streptococci, salmonella, as well as numerous other Gram-negative bacteria and Mycobacterium tuberculosis. Antibiotic-resistant bacteria are able to transfer copies of DNA that code for a mechanism of resistance to other bacteria, even ones only distantly related to them, which then are also able to pass on the resistance genes, resulting in generations of antibiotic-resistant bacteria. This initial transfer of DNA is called horizontal gene transfer.
Bacterial resistance to bacteriophages
Phage-resistant bacterial variants have been observed in human studies. As with antibiotics, phage resistance can be transferred horizontally, via plasmid acquisition.
Antifungal resistance
Yeasts such as Candida species can become resistant under long-term treatment with azole preparations, requiring treatment with a different drug class.
Lomentospora prolificans infections are often fatal because of their resistance to multiple antifungal agents.
Antiviral resistance
HIV is the prime example of MDR against antivirals, as it mutates rapidly under monotherapy.
Influenza virus has become increasingly MDR; first to the amantadines, then to neuraminidase inhibitors such as oseltamivir (2008–2009: 98.5% of influenza A samples tested were resistant), with resistance arising more commonly in people with weak immune systems. Cytomegalovirus can become resistant to ganciclovir and foscarnet under treatment, especially in immunosuppressed patients. Herpes simplex virus rarely becomes resistant to acyclovir preparations, mostly in the form of cross-resistance to famciclovir and valacyclovir, usually in immunosuppressed patients.
Antiparasitic resistance
The prime example of MDR against antiparasitic drugs is malaria. Plasmodium vivax became resistant to chloroquine and sulfadoxine-pyrimethamine a few decades ago, and as of 2012 artemisinin-resistant Plasmodium falciparum had emerged in western Cambodia and western Thailand.
Toxoplasma gondii can also become resistant to artemisinin, as well as atovaquone and sulfadiazine, but is not usually MDR.
Anthelmintic resistance is mainly reported in the veterinary literature, for example in connection with the practice of livestock drenching, and has been a recent focus of FDA regulation.
Preventing the emergence of antimicrobial resistance
To limit the development of antimicrobial resistance, it has been suggested to:
Use the appropriate antimicrobial for an infection; e.g. no antibiotics for viral infections
Identify the causative organism whenever possible
Select an antimicrobial which targets the specific organism, rather than relying on a broad-spectrum antimicrobial
Complete an appropriate duration of antimicrobial treatment (not too short and not too long)
Use the correct dose for eradication; subtherapeutic dosing is associated with resistance, as demonstrated in food animals.
More thorough education of, and by, prescribers on the global implications of their actions
Vaccination to prevent drug resistance, for instance with the pneumococcal vaccine or the flu vaccine
The medical community relies on education of its prescribers, and on self-regulation in the form of appeals to voluntary antimicrobial stewardship, which at hospitals may take the form of an antimicrobial stewardship program. It has been argued that, depending on the cultural context, government can aid in educating the public on the importance of restrictive use of antibiotics for human clinical use, but unlike narcotics, there is no regulation of antibiotic use anywhere in the world at this time. Antibiotic use has been restricted or regulated for treating animals raised for human consumption with success, in Denmark for example.
Infection prevention is the most efficient strategy against infection with an MDR organism within a hospital, because there are few alternatives to antibiotics in the case of an extensively resistant or panresistant infection. If an infection is localized, removal or excision can be attempted (with MDR-TB, the lung for example), but in the case of a systemic infection, only generic measures like boosting the immune system with immunoglobulins may be possible. The use of bacteriophages (viruses which kill bacteria) is a developing area of possible therapeutic treatments.
It is necessary to develop new antibiotics over time, since the selection of resistant bacteria cannot be prevented completely. With every application of a specific antibiotic, the survival of the few bacteria which already carry a resistance gene against the substance is promoted, and the resistant bacterial population grows. The resistance gene is thereby distributed more widely in the organism and the environment, and a higher percentage of bacteria no longer respond to therapy with this specific antibiotic. In addition to developing new antibiotics, entirely new strategies must be implemented in order to keep the public safe from the event of total resistance. New strategies are being tested, such as UV light treatments and bacteriophage utilization; however, more resources must be dedicated to this cause.
See also
Drug resistance
MDRGN bacteria
Xenobiotic metabolism
NDM1 enzymatic resistance
Herbicide resistance
P-glycoprotein
References
Further reading
External links
BURDEN of Resistance and Disease in European Nations - An EU project to estimate the financial burden of antibiotic resistance in European Hospitals
European Centre of Disease Prevention and Control and (ECDC): Multidrug-resistant, extensively drug-resistant and pandrug-resistant bacteria: An international expert proposal for interim standard definitions for acquired resistance Disease Programmes Unit
State of Connecticut Department of Public Health MDRO information MultidrugResistant Organisms MDROs What Are They
Antimicrobial resistance
Drug resistance
Bacteria | Multiple drug resistance | [
"Chemistry",
"Biology"
] | 1,799 | [
"Pharmacology",
"Prokaryotes",
"Drug resistance",
"Bacteria",
"Microorganisms"
] |
1,564,463 | https://en.wikipedia.org/wiki/HD%2088133 | HD 88133 is a yellow star with an orbiting exoplanet in the equatorial constellation of Leo. It has an apparent visual magnitude of 8.01, which is too faint to be visible to the naked eye. With a small telescope it should be easily visible. The distance to this system, as measured through parallax, is 240 light years, but it is slowly drifting closer with a radial velocity of −3.6 km/s.
This is classified as an ordinary G-type main-sequence star with a stellar classification of G8V. However, D. A. Fischer and associates in 2005 listed a class of G5 IV, suggesting it is instead a subgiant star that is evolving away from the main sequence, having exhausted the hydrogen at its core. It is about 5 billion years old and is spinning with a projected rotational velocity of 4.9 km/s. The star has 23% more mass than the Sun and twice the Sun's radius. It is radiating over three times the luminosity of the Sun from its photosphere at an effective temperature of 5,414 K.
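The quoted luminosity follows from the Stefan–Boltzmann relation L/L⊙ = (R/R⊙)²(T/T⊙)⁴, as this quick check shows (the solar effective temperature of 5,772 K is an assumed reference value, not a figure from this article):

```python
T_SUN = 5772.0   # solar effective temperature, K (assumed reference value)

def luminosity_ratio(radius_in_rsun, t_eff):
    """Stefan-Boltzmann: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4."""
    return radius_in_rsun ** 2 * (t_eff / T_SUN) ** 4

# HD 88133: about twice the Sun's radius, T_eff = 5,414 K (from the text).
print(f"{luminosity_ratio(2.0, 5414.0):.2f}")   # ~3.10, "over three times"
```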
Planetary system
In 2004 a close-orbiting exoplanet was found using Doppler spectroscopy. In 2016 the direct detection of the planet's thermal emission spectrum was claimed, but the detection was brought into question in 2021.
See also
List of extrasolar planets
List of stars in Leo
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Leo (constellation)
BD+18 2326
088133
049813
J10100767+1811132 | HD 88133 | [
"Astronomy"
] | 331 | [
"Leo (constellation)",
"Constellations"
] |
1,564,533 | https://en.wikipedia.org/wiki/HD%20101930 | HD 101930, also known as Gliese 3683, is an orange hued star with an orbiting exoplanet located in the southern constellation Centaurus. It has an apparent magnitude of 8.21, making it faintly visible in binoculars but not to the naked eye. The system is located relatively close at a distance of 98 light years but is receding with a heliocentric radial velocity of . It has a relatively large proper motion, traversing the celestial sphere with an angular velocity of ·yr−1.
HD 101930 has a stellar classification of K2 V+, indicating that it is an ordinary K-type main-sequence star. It has a current mass of and is said to be 5.4 billion years old, which is slightly older than the Sun. The object has 87% the radius of the Sun and an effective temperature of . When combined, these parameters yield a luminosity 43% that of the Sun from its photosphere. As expected with planetary hosts, HD 101930 is metal enriched, having a metallicity 26% above solar levels. The star's projected rotational velocity is similar to the Sun's, having a value of .
A 2007 multiplicity survey found a co-moving companion located away, making it a binary star. It has a class of M0-1 and a mass of .
Planetary system
In 2005, the discovery of an exoplanet orbiting the star was announced. This is another discovery using the radial velocity method with the HARPS spectrograph. As the inclination of the orbital plane is unknown, only a lower bound on the mass can be determined. It has at least 30% of the mass of Jupiter.
See also
HD 93083
HD 102117
List of extrasolar planets
HARPS spectrograph
References
External links
K-type main-sequence stars
Planetary systems with one confirmed planet
Centaurus
BD-57 4096
3683
101930
057172 | HD 101930 | [
"Astronomy"
] | 404 | [
"Centaurus",
"Constellations"
] |
1,564,592 | https://en.wikipedia.org/wiki/List%20of%20Firefox%20features | Mozilla Firefox has features which distinguish it from other web browsers, such as Google Chrome, Safari, and Microsoft Edge.
Major differences
To avoid interface bloat, to ship a relatively small core customizable to individual users' needs, and to allow corporate or institutional extensions that meet varying policies, Firefox relies on a robust extension system: users modify the browser according to their requirements instead of having all features provided in the standard distribution.
While Opera and Google Chrome do the same, extensions for these are fewer in number as of late 2013. Internet Explorer also has an extension system, but it is less widely supported than those of other browsers. Developers supporting multiple browsers almost always support Firefox, and in many instances support it exclusively. As Opera has a policy of deliberately including more features in the core as they prove useful, the market for its extensions is relatively unstable, but there is also less need for them. The sheer number of extensions is not a good guide to the capabilities of a browser.
Protocol support and the difficulty of adding new link-type protocols also vary widely, not only across these browsers but across versions of them. Opera has historically been the most robust and consistent about supporting cutting-edge protocols such as eDonkey file-sharing links or bitcoin transactions. These can be difficult to support in Firefox without relying on unknown small developers, which defeats the privacy purpose of these protocols. Instructions for supporting new link protocols vary widely across operating systems and Firefox versions, and are generally not implementable by end users who lack system-administration experience and the ability to follow exact, detailed instructions for typing in configuration strings.
Web technologies support
Firefox supports most basic Web standards including HTML, XML, XHTML, CSS (with extensions), JavaScript, DOM, MathML, SVG, XSLT and XPath. Firefox's standards support and growing popularity have been credited as one reason Internet Explorer 7 was to be released with improved standards support.
Since Web standards are often in contradiction with Internet Explorer's behaviors, Firefox, like other browsers, has a quirks mode. This mode attempts to mimic Internet Explorer's quirks modes, which equates to using obsolete rendering standards dating back to Internet Explorer 5, or alternately newer peculiarities introduced in IE 6 or 7. However, it is not completely compatible. Because of the differing rendering, PC World notes that a minority of pages do not work in Firefox; however, many of these do not work in Internet Explorer 7's quirks mode either.
CNET notes that Firefox does not support ActiveX controls by default, which can also cause webpages to be missing features or to not work at all in Firefox. Mozilla made the decision to not support ActiveX due to potential security vulnerabilities, its proprietary nature and its lack of cross-platform compatibility. There are methods of using ActiveX in Firefox such as via third-party plugins but they do not work in all versions of Firefox or on all platforms.
Beginning on December 8, 2006, Firefox Nightly builds passed the Acid2 CSS standards compliance test, so all future releases of Firefox 3 would pass the test.
Firefox also implements a proprietary protocol from Google called "safebrowsing", which is not an open standard.
Cross-platform support
Mozilla Firefox runs on the operating system versions in common use at the time of each release. In 2004, version 1 supported older operating systems such as Windows 95 and Mac OS X 10.1; by 2008, version 3 required at least OS X 10.4, and even Windows 98 support had ended.
Various releases available on the primary distribution site can support the following operating systems, although not always the latest Firefox version.
Various versions of Microsoft Windows, including 98, 98SE, ME, NT 4.0, 2000, XP, Server 2003, Vista, 7, 8 and 10.
OS X
Linux-based operating systems using X.Org Server or XFree86
Builds for Solaris (x86 and SPARC), contributed by the Sun Beijing Desktop Team, are available on the Mozilla web site.
Mozilla Firefox 1.x installation on Windows 95 requires a few additional steps.
Since Firefox is open source and Mozilla actively develops a platform-independent abstraction for its graphical front end, it can also be compiled and run on a variety of other architectures and operating systems. Thus, Firefox is also available for many other systems. This includes OS/2, AIX, and FreeBSD. Builds for Windows XP Professional x64 Edition are also available. Mozilla Firefox is also the browser of choice for a good number of smaller operating systems, such as SkyOS and ZETA.
Firefox uses the same profile format on the different platforms, so a profile may be used on multiple platforms, if all of the platforms can access the same profile; this includes, for example, profiles stored on an NTFS (via FUSE) or FAT32 partition accessible from both Windows and Linux, or on a USB flash drive. This is useful for users who dual-boot their machines. However, it may cause a few problems, especially with extensions.
Security
Firefox is free-libre software, and thus in particular its source code is visible to everyone. This allows anyone to review the code for security vulnerabilities. It also allowed the U.S. Department of Homeland Security to give funding for the automated tool Coverity to be run against Firefox code.
Additionally, Mozilla has a security bug bounty system - anyone who reports a valid critical security bug receives a $3000 (US) cash reward for each report and a Mozilla T-shirt. With effect from December 15, 2010, Mozilla added Web Applications to its Security Bug Bounty Program.
Tabbed browsing
Firefox supports tabbed browsing, which allows users to open several pages in one window. This feature was carried over from the Mozilla Application Suite, which in turn had borrowed the feature from the popular MultiZilla extension for Mozilla.
Firefox also permits the "homepage" to be a list of URLs delimited with vertical bars (|), which are automatically opened in separate tabs, rather than a single page.
Firefox 2 supports more tabbed browsing features, including a "tab overflow" solution that keeps the user's tabs easily accessible when they would otherwise become illegible, a "session store" which lets the user keep the opened tabs across restarts, and an "undo close tab" feature.
Tabbed browsing keeps multiple pages accessible within a single window, and tabs that are no longer in use can be closed to keep the window manageable.
Pop-up blocking
Firefox also includes integrated customizable pop-up blocking. Firefox was given this feature early in beta development, and it was a major comparative selling point of the browser until Internet Explorer gained the capability in the Windows XP SP2 release of August 25, 2004. Firefox's pop-up blocking can be turned off entirely to allow pop-ups from all sites. Firefox's pop-up blocking can be inconvenient at times — it prevents JavaScript-based links from opening a new window while a page is loading unless the site is added to a "safe list" found in the options menu.
In many cases, it is possible to view the pop-up's URL by clicking the dialog that appears when one is blocked. This makes it easier to decide if the pop-up should be displayed.
Private browsing
Private browsing was introduced in Firefox 3.5, which released on June 30, 2009. This feature lets users browse the Internet without leaving any traces in the browsing history.
Download manager
An integrated customizable download manager is also included. Downloads can be opened automatically depending on the file type, or saved directly to a disk. By default, Firefox downloads all files to a user's desktop on Mac and Windows or to the user's home directory on Linux, but it can be configured to prompt for a specific download location. Version 3.0 added support for cross-session resuming (stopping a download and resuming it after closing the browser). From within the download manager, a user can view the source URL from which a download originated as well as the location to which a file was downloaded.
Live bookmarks
From 2004, live bookmarks allowed users to dynamically monitor changes to their favorite news sources, using RSS or Atom feeds. Instead of treating RSS-feeds as HTML pages as most news aggregators do, Firefox treated them as bookmarks and automatically updated them in real-time with a link to the appropriate source. In December 2018, version 64.0 of Firefox removed live bookmarks and web feeds, with Mozilla suggesting its replacement by add-ons or other software with news aggregator functionality like Mozilla Thunderbird.
Other features
Find as you type
Firefox also has an incremental find feature known as "Find as you type", invoked by pressing Ctrl+F. With this feature enabled, a user can simply begin typing a word while viewing a web page, and Firefox automatically searches for it and highlights the first instance found. As the user types more of the word, Firefox refines its search. Also, if the user's exact query does not appear anywhere on the page, the "Find" box turns red. Ctrl+G can be pressed to go to the next found match.
Alternatively, the slash (/) key can be used to invoke the "quick search". The "quick search", in contrast to the normal search, lacks search controls and is wholly controlled by the keyboard. In this mode highlighted links can be followed by pressing the enter key. The "quick search" has an alternate mode which is invoked by pressing the apostrophe (') key; in this mode only links are matched.
Mycroft Web Search
A built-in Mycroft Web search function features extensible search-engine listing; by default, Firefox includes plugins for Google and Yahoo!, and also includes plugins for looking up a word on dictionary.com and browsing through Amazon.com listings. Other popular Mycroft search engines include Wikipedia, eBay, and IMDb.
Smart Bookmarks
Smart Bookmarks (also known as smart keywords) can be used to quickly search for information on specific Web sites. A smart keyword is defined by the user and can be associated with any bookmark, and can then be used in the address bar as a shortcut to quickly get to the site or, if the smart keyword is linked to a searchbox, to search the site. For example, "imdb" is a pre-defined smart keyword; to search for information about the movie 'Firefox' on IMDb, jump to the location bar with the Ctrl+L shortcut, type "imdb Firefox" and press the Enter key, or simply type in "imdb" to get to the front page instead.
Chrome
The Chrome packages within Firefox control and implement the Firefox user interface.
Version 2.0 and above
Enhanced search capabilities
Search term suggestions will now appear as users type in the integrated search box when using the Google, Yahoo! or Answers.com search engines. A new search engine manager makes it easier to add, remove and re-order search engines, and users will be alerted when Firefox encounters a website that offers new search engines that the user may wish to install.
Microsummaries
Support for Microsummaries was added in version 2.0. Microsummaries are short summaries of web pages that are used to convey more information than page titles. Microsummaries are regularly updated to reflect content changes in web pages so that viewers of the web page will want to revisit the web page after updates. Microsummaries can either be provided by the page, or be generated by the processing of an XSLT stylesheet against the page. In the latter case, the XSLT stylesheet and the page that the microsummary applies to are provided by a microsummary generator. Support for Microsummaries was removed as of Firefox 6.
Live Titles
When a website offers a microsummary (a regularly updated summary of the most important information on a Web page), users can create a bookmark with a "Live Title". Compact enough to fit in the space available to a bookmark label, they provide more useful information about pages than static page titles, and are regularly updated with the latest information. There are several websites that can be bookmarked with Live Titles, and even more add-ons to generate Live Titles for other popular websites. Support for Live Titles was removed as of Firefox 6.
Session Restore
The Session Restore feature restores windows, tabs, text typed in forms, and in-progress downloads from the last user session. It will be activated automatically when installing an application update or extension, and users will be asked if they want to resume their previous session after a system crash.
Inline spell checker
A built-in spell checker enables users to quickly check the spelling of text entered into Web forms without having to use a separate application.
Usability in version 2
Firefox 2 was designed for the average user, hiding advanced configuration and making features function without requiring user interaction. Jim Rapoza of eWEEK remarked on its usability. Firefox also won the UK Usability Professionals' Association's 2005 award for "Best software application".
Version 3.0 and above
Star button
Quickly add bookmarks from the location bar with a single click; a second click lets the user file and tag them.
Version 5.0 and above
Style Inspector
Firefox 10 added the CSS Style Inspector to the Page Inspector, which allows users to check out a site's structure and edit the CSS without leaving the browser.
Firefox 10 added support for CSS 3D Transforms and for anti-aliasing in the WebGL standard for hardware-accelerated 3D graphics. These updates mean that complex site and Web app animations will render more smoothly in Firefox, and that developers can animate 2D objects into 3D without plug-ins.
3D Page Inspector
Firefox 11, released January 2012, introduced a tiltable three-dimensional visualization of the Document Object Model (DOM), where more nested elements protrude further from the page surface. This feature was removed with version 47.
Firefox 57 and above
Electrolysis and WebExtensions
On August 21, 2015, Firefox developers announced planned changes to Firefox's internal operations, including the implementation of a new multi-process architecture codenamed "Multiprocess Firefox" or "Electrolysis" (stylized "e10s"), which was introduced to some users in version 48. Alongside it, Firefox adopted a new extension architecture known as WebExtensions. WebExtensions uses HTML and JavaScript APIs and is designed to be similar to the extension API used by Google Chrome, and to run within a multi-process environment, but it does not enable the same level of access to the browser. XPCOM and XUL add-ons are no longer supported as of Firefox 57.
HTTPS-only mode
Firefox 83 introduced HTTPS-only mode, a security-enhancing mode that, once enabled, forces all connections to websites to use HTTPS.
Picture in Picture
Released on December 3, 2019, Firefox 71 was the first Firefox release to include picture-in-picture. At first a Windows-only feature, with Mac and Linux support introduced in Firefox 72, picture-in-picture allows users to place a video from a webpage into a small separate window that remains viewable regardless of which tab the user is in, including from outside the browser.
Credit Card Auto-Fill
Firefox 81 allowed users in the US to save, manage, and auto-fill credit card information. Support for more countries have been added since the release. As of 2023, the list of supported countries is: Austria, Belgium, Canada, France, Germany, Italy, Poland, Spain, the U.K. and the U.S.
Automatic Local Translation of Webpages
Automatic translation of web content, performed entirely locally on device, was introduced to users in Firefox 118. This feature is a joint effort between Mozilla, University of Edinburgh, Charles University, University of Sheffield, and the University of Tartu under the name Project Bergamot. Project Bergamot was funded by the European Union’s Horizon 2020 research and innovation programme.
Tags
Smart Location Bar
Firefox 3 includes a "Smart Location Bar". While most other browsers, such as Internet Explorer, will search through history for matching web sites as the user types a URL into the location bar, the Smart Location Bar will also search through bookmarks for a page with a matching URL. Additionally, Firefox's Smart Location Bar will also search through page titles, allowing the user to type in a relevant keyword, instead of a URL, to find the desired page. Firefox uses frecency and other heuristics to predict which history and bookmark matches the user is most likely to select.
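"Frecency" blends how frequently and how recently a page was visited into a single score. The sketch below conveys the general idea only; the buckets and weights are invented for illustration and are not Firefox's actual Places algorithm:

```python
import time

# Illustrative recency buckets: (maximum age in days, weight per visit).
BUCKETS = [(4, 100), (14, 70), (31, 50), (90, 30), (float("inf"), 10)]

def frecency(visit_times, now=None):
    """Score a page from its visit timestamps (seconds since the epoch)."""
    now = now or time.time()
    score = 0
    for t in visit_times:
        age_days = (now - t) / 86_400
        score += next(w for max_age, w in BUCKETS if age_days <= max_age)
    return score

now = time.time()
recent = [now - 3_600 * h for h in (1, 5, 24)]           # three recent visits
stale = [now - 86_400 * d for d in (60, 120, 300)]       # three old visits
print(frecency(recent, now), ">", frecency(stale, now))  # 300 > 50
```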
Library
View, organize and search through bookmarks, tags and browsing history using the new Library window. Create or restore full backups of this data whenever needed with a few clicks.
Smart Bookmark Folders
Users can quickly access their most visited bookmarks from the toolbar, or recently bookmarked and tagged pages from the bookmark menu. Smart Bookmark Folders can be created by saving a search query in the Library.
Full page zoom
From the View menu and via keyboard shortcuts, the new zooming feature lets users zoom in and out of entire pages, scaling the layout, text and images, or optionally only the text size. Zoom settings will be remembered for each site.
Text selection improvements
In addition to being able to double-click and drag to select text by words; or triple-click and drag to select text by paragraph, Ctrl (Cmd on Mac) can be held down to retain the previous selection and extend it instead of replacing it when doing another selection.
Web-based protocol handlers
Web applications, such as a user's favorite webmail provider, can now be used instead of desktop applications for handling mailto links from other sites. Similar support is available for other protocols (Web applications will have to first enable this by registering as handlers with Firefox).
Add-ons and extensions
There are six types of add-ons in Firefox: extensions, themes, language packs, plugins, social features and apps. Firefox add-ons may be obtained from the Mozilla Add-ons web site or from other sources.
Extensions
Firefox users can add features and change functionality in Firefox by installing extensions. Extension functionality is varied; such as those enabling mouse gestures, those that block advertisements, and those that enhance tabbed browsing.
Features that the Firefox developers believe will be used by only a small number of its users are not included in Firefox, but are instead left to be implemented as extensions. Many Mozilla Suite features, such as IRC chat (ChatZilla) and the calendar, have been recreated as Firefox extensions. Extensions are also sometimes a testing ground for features that are eventually integrated into the main codebase. For example, MultiZilla was an extension that provided tabbed browsing when Mozilla lacked that feature.
While extensions provide a high level of customizability, PC World notes the difficulty a casual user would have in finding and installing extensions, compared to having such features available by default.
Most extensions are not created or supported by Mozilla, and malicious extensions have been created. Mozilla provides a repository of extensions that have been reviewed by volunteers and are believed not to contain malware. Since extensions are mostly created by third parties, they do not necessarily go through the same level of testing as official Mozilla products, and they may have bugs or vulnerabilities. Like applications on Android and iOS, Firefox extensions use a permission model: before installing an extension, the user must agree to the permissions it requests, such as access to all webpages or the ability to manage downloads; an extension with no special permissions can instead be manually activated to interact with the current page. Since 2019, Firefox and Chromium-based browsers (Google Chrome, Edge, Opera, Vivaldi) have shared the same extension format, the WebExtensions API, which means that a web extension developed for Google Chrome can in most cases be used in Firefox, and vice versa.
Themes
Firefox also supports a variety of themes for changing its appearance. Prior to the release of Firefox 57, themes were simply packages of CSS and image files. From Firefox 57 onwards, themes consist solely of color modifications through the use of CSS. Many themes can be downloaded from the Mozilla Update web site.
Language packs
Language packs are dictionaries for spell checking of input fields.
Plugins
Firefox supports plugins based on the Netscape Plugin Application Programming Interface (NPAPI), i.e. Netscape-style plugins. As a side note, Opera and Internet Explorer 3.0 to 5.0 also supported NPAPI.
On June 30, 2004, the Mozilla Foundation, in partnership with Adobe, Apple, Macromedia, Opera, and Sun Microsystems, announced a series of changes to web browser plugins. The then-new API allowed web developers to offer richer web browsing experiences, helping to maintain innovation and standards. The then-new plugin technologies were implemented in subsequent versions of the Mozilla applications.
Mozilla Firefox 1.5 and later versions include the Java Embedding Plugin, which allows Mac OS X users to run Java applets with the then-latest 1.4 and 5.0 versions of Java (the default Java software shipped by Apple was not compatible with any browser except its own Safari).
Apps
After the release of Firefox OS, which is based on a stack of web technologies, Mozilla added a feature allowing mobile apps to be installed on PCs using Firefox as a base.
Customizability
Beyond the use of add-ons, Firefox offers additional customization features:
The position of toolbars and interface elements is customizable.
User stylesheets can change the style of webpages and Firefox's user interface.
Customizable font colours.
A number of internal configuration options are not accessible in a conventional manner through Firefox's preference dialogs, although they are exposed through its about:config interface.
References
External links
Firefox Features at Mozilla.com
Microsummaries - MozillaWiki
Mozilla Firefox
Firefox | List of Firefox features | [
"Technology"
] | 4,820 | [
"Software features"
] |
1,564,630 | https://en.wikipedia.org/wiki/HD%2093083 | HD 93083 is an orange-hued star in the southern constellation of Antlia. It has the proper name Macondo, after the mythical village of the novel One Hundred Years of Solitude (Cien años de soledad). The name was selected by Colombia during the IAU's NameExoWorlds campaign. The star has an apparent visual magnitude of 8.30, which is too faint to be visible to the naked eye. It is located at a distance of 93 light years from the Sun based on parallax. HD 93083 is drifting further away with a radial velocity of +43.65 km/s, having made its closest approach to the Sun some 484,000 years ago.
This is a K-type main-sequence star that has been assigned a stellar classification of K2IV-V or K3V, depending on the study. It is smaller and less massive than the Sun, with a higher metallicity, or abundance of elements heavier than helium. The star is roughly six billion years old with a low projected rotational velocity of 2.2 km/s, and has an expected main sequence lifetime of 20.4 billion years. It is a source of X-ray emission. The star is radiating around 41% of the luminosity of the Sun from its photosphere at an effective temperature of 5,030 K.
Planetary system
In 2005, the discovery of an exoplanet orbiting the star was announced. The discovery was made using the radial velocity method with the HARPS spectrograph. The planet was given the name Melquíades by the IAU after a character in the book One Hundred Years of Solitude. The orbit of this body lies entirely within the habitable zone of the host star, and it is theoretically possible that a large moon orbiting the body, or a hypothetical terrestrial exoplanet at a trojan point, could be habitable.
See also
List of extrasolar planets
HARPS spectrograph
References
K-type main-sequence stars
K-type subgiants
Planetary systems with one confirmed planet
Antlia
Durchmusterung objects
1137
093083
052521 | HD 93083 | [
"Astronomy"
] | 444 | [
"Antlia",
"Constellations"
] |
1,564,675 | https://en.wikipedia.org/wiki/HD%2063454 | HD 63454, formally named Ceibo, is a star located in the southern circumpolar constellation Chamaeleon near the border with Mensa. To see the star, one needs a small telescope because it has an apparent magnitude of 9.36, which is below the limit for naked eye visibility. The object is located relatively close, at a distance of 123 light years based on Gaia DR3 parallax measurements, but is receding from the Sun. At its current distance, HD 63454's brightness is diminished by two tenths of a magnitude due to interstellar dust. It has an absolute magnitude of +6.68.
Properties
HD 63454 has a stellar classification of K3 V(k), indicating that it is a K-type main-sequence star with some infilling of the calcium K and H lines. It has 79% the mass of the Sun and 80% the Sun's radius. It radiates 28.7% of the Sun's luminosity from its photosphere; its relatively cool effective temperature gives it an orange hue. HD 63454 has a solar metallicity and is estimated to be 1.52 billion years old, about a third the age of the Sun. It spins modestly.
Planetary system
On Valentine's Day 2005, the hot Jupiter HD 63454 b was found by Claire Moutou, Michel Mayor, and François Bouchy using the radial velocity method.
After the 2019 IAU100 NameExoWorlds campaign, the International Astronomical Union approved the names proposed from Uruguay: Ceibo for the star and Ibirapitá for the planet, respectively after the native Uruguayan tree species Erythrina crista-galli and Peltophorum dubium.
These names were announced on 17 December 2019, at a press conference of the IAU in Paris, together with 111 other sets of exoplanets and host stars. Ceibo and Ibirapitá were proposed by Adrián Basedas, from the Astronomical Observatory of Liceo Nº9, Montevideo, Uruguay, who won the national contest "Nombra Tu Exoplaneta", organized in Uruguay to name HD 63454 and HD 63454 b.
See also
List of extrasolar planets
References
External links
K-type main-sequence stars
063454
037284
Chamaeleon
Planetary systems with one confirmed planet
CD-77 00298 | HD 63454 | [
"Astronomy"
] | 514 | [
"Chamaeleon",
"Constellations"
] |
1,564,687 | https://en.wikipedia.org/wiki/Sulfur%20monoxide | Sulfur monoxide is an inorganic compound with formula SO. It is found only as a dilute gas. When concentrated or condensed, it converts to S2O2 (disulfur dioxide). It has been detected in space but is rarely encountered intact otherwise.
Structure and bonding
The SO molecule has a triplet ground state similar to O2 and S2, that is, each molecule has two unpaired electrons. The S−O bond length of 148.1 pm is similar to that found in lower sulfur oxides (e.g. S8O, S−O = 148 pm) but is longer than the S−O bond in gaseous S2O (146 pm), SO2 (143.1 pm) and SO3 (142 pm).
The molecule is excited with near infrared radiation to the singlet state (with no unpaired electrons). The singlet state is believed to be more reactive than the ground triplet state, in the same way that singlet oxygen is more reactive than triplet oxygen.
Production and reactions
The SO molecule is thermodynamically unstable, converting initially to S2O2. Consequently, controlled syntheses typically do not detect SO proper, but instead detect the product of a chemical trap or the terminal decomposition products of S2O2 (sulfur and sulfur dioxide).
Production of SO as a reagent in organic syntheses has centred on using compounds that "extrude" SO. Examples include the decomposition of the relatively simple molecule ethylene episulfoxide:
C2H4SO → C2H4 + SO
Yields directly from an episulfoxide are poor, and improve only moderately when the carbons are sterically shielded. A much better approach decomposes a diaryl cyclic trisulfide oxide, C10H6S3O, produced from thionyl chloride and the dithiol.
SO inserts into alkenes, alkynes and dienes producing thiiranes, molecules with three-membered rings containing sulfur.
Sulfur monoxide may form transiently during the metallic reduction of thionyl bromide.
Generation under extreme conditions
In the laboratory, sulfur monoxide can be produced by treating sulfur dioxide with sulfur vapor in a glow discharge. It has been detected in single-bubble sonoluminescence of concentrated sulfuric acid containing some dissolved noble gas.
Benner and Stedman developed a chemiluminescence detector for sulfur via the reaction between sulfur monoxide and ozone:
SO + O3 → SO2* + O2
SO2* → SO2 + hν
Occurrence
Ligand for transition metals
As a ligand, SO can bond in a number of different ways:
a terminal ligand, with a bent M−O−S arrangement, for example with titanium oxyfluoride
a terminal ligand, with a bent M−S−O arrangement, analogous to bent nitrosyl
bridging across two or three metal centres (via sulfur), as in Fe3(μ3-S)(μ3-SO)(CO)9
η2 sideways-on (d–π interaction) with vanadium, niobium, and tantalum.
Astrochemistry
Sulfur monoxide has been detected around Io, one of Jupiter's moons, both in the atmosphere and in the plasma torus. It has also been found in the atmosphere of Venus, in Comet Hale–Bopp, in 67P/Churyumov–Gerasimenko, and in the interstellar medium.
On Io, SO is thought to be produced both by volcanic and photochemical routes. The principal photochemical reactions are proposed as follows:
O + S2 → S + SO
SO2 → SO + O
Sulfur monoxide has been found in NML Cygni.
Biological chemistry
Sulfur monoxide may have some biological activity. The formation of transient SO in the coronary artery of pigs has been inferred from the reaction products, carbonyl sulfide and sulfur dioxide.
Safety measures
Because of sulfur monoxide's rare occurrence in our atmosphere and its poor stability, it is difficult to fully determine its hazards. When condensed, however, it forms disulfur dioxide, which is relatively toxic and corrosive. This compound is also highly flammable (with flammability similar to methane) and produces sulfur dioxide, a poisonous gas, when burned.
Sulfur monoxide dication
Sulfur dioxide (SO2), in the presence of hexamethylbenzene C6(CH3)6, can be protonated under superacidic conditions (HF·AsF5) to give the non-rigid π-complex [C6(CH3)6SO]2+. The SO2+ moiety can move essentially without barrier over the benzene ring. The S−O bond length is 142.4(2) pm.
C6(CH3)6 + SO2 + 3 HF·AsF5 → [C6(CH3)6SO][AsF6]2 + [H3O][AsF6]
Disulfur dioxide
SO converts to disulfur dioxide (S2O2). Disulfur dioxide is a planar molecule with C2v symmetry. The S−O bond length is 145.8 pm, shorter than in the monomer, and the S−S bond length is 202.45 pm. The O−S−S angle is 112.7°. S2O2 has a dipole moment of 3.17 D.
References
Gases
Interchalcogens
Sulfur oxides
Sulfur(II) compounds
Diatomic molecules | Sulfur monoxide | [
"Physics",
"Chemistry"
] | 1,146 | [
"Matter",
"Molecules",
"Statistical mechanics",
"Phases of matter",
"Diatomic molecules",
"Gases"
] |
1,564,717 | https://en.wikipedia.org/wiki/HD%20208487 | HD 208487 is a star with an orbiting exoplanet in the constellation of Grus. Based on parallax measurements, it is located at a distance of 146.5 light years from the Sun. The absolute magnitude of HD 208487 is 4.26, but at that distance the apparent visual magnitude is 7.47, which is too faint to be viewed with the naked eye. The system is drifting further away with a radial velocity of 5.6 km/s. It is a member of the thin disk population.
The spectrum of HD 208487 presents as an ordinary G-type main-sequence star with a stellar classification of G1/3V. It is a relatively young star, with age estimates of 1–2 billion years, and is spinning with a projected rotational velocity of 3.7 km/s. The star has 16% greater mass and a 17% larger radius than the Sun. The abundance of iron, a measure of the star's metallicity, is similar to the Sun. It is radiating 176% of the luminosity of the Sun from its photosphere at an effective temperature of 6,143 K. The level of magnetic activity in the chromosphere is low.
The star HD 208487 is named Itonda and the exoplanet Mintome. The names were selected in the NameExoWorlds campaign by Gabon, during the 100th anniversary of the IAU. Itonda, in the Myene tongue, corresponds to all that is beautiful. Mintome, in the Fang tongue, is a mythical land where a brotherhood of brave men live.
Planetary system
There is one known planet orbiting the star HD 208487, which is designated HD 208487 b. It has a mass at least half that of Jupiter and is located in an eccentric 130-day orbit.
The discovery of a second planet in the system was announced on 13 September 2005, by P.C. Gregory. The discovery was made using Bayesian analysis of the radial velocity dataset to determine the planetary parameters. However, further analysis revealed that an alternative two-planet solution for the HD 208487 system was possible, with a planet in a 28-day orbit instead of the 908-day orbit postulated, and it was concluded that activity on the star is more likely to be responsible for the residuals to the one-planet solution than the presence of a second planet.
See also
Lists of exoplanets
References
External links
G-type main-sequence stars
Planetary systems with one confirmed planet
Grus (constellation)
Durchmusterung objects
208487
108375 | HD 208487 | [
"Astronomy"
] | 537 | [
"Grus (constellation)",
"Constellations"
] |
1,564,830 | https://en.wikipedia.org/wiki/Paul%20Walden | Paul Walden (; ; ; 26 July 1863 – 22 January 1957) was a Russian, Latvian and German chemist known for his work in stereochemistry and history of chemistry. In particular, he discovered the Walden rule, he invented the stereochemical reaction known as Walden inversion and synthesized the first room-temperature ionic liquid, ethylammonium nitrate.
Early life and education
Walden was born in Rozulas in the Russian Empire (now Stalbe parish, Pārgauja municipality, Latvia) in a large Latvian peasant family. At the age of four, he lost his father and later his mother. Thanks to financial support from his two older brothers who lived in Riga (one was a merchant and another served as a lieutenant in the military) Walden managed to complete his education – first graduated with honors from the district school in the town of Cēsis (1876), and then from the Riga Technical High School (1882).
In December 1882, he enrolled at the Riga Technical University and became seriously interested in chemistry. In 1886, he published his first scientific study, on the color evaluation of the reactions of nitric and nitrous acids with various reagents, establishing the limits of sensitivity of the color method for the detection of nitric acid.
In April 1887, Walden became an active member of the Russian Physico-chemical Society. During this time, Walden started his collaboration with Wilhelm Ostwald (Nobel Prize in Chemistry 1909) which greatly influenced his development as a scientist. Their first work together was published in 1887 and was devoted to the dependence of the electrical conductivity of aqueous solutions of salts on their molecular weight.
Work in chemistry
In 1888, Walden graduated from the university with a degree in chemical engineering and continued working at the Chemistry Department as an assistant to professor C. Bischof.
Under his guidance, Walden began compiling the "Handbook of Stereochemistry", which was published in 1894. In preparation of this handbook, Walden had to perform numerous chemical syntheses and characterizations, which resulted in 57 journal articles on stereochemistry alone, published between 1889 and 1900 in Russian and foreign journals. He also continued his research in the field of physical chemistry, establishing in 1889 that the ionizing power of a non-aqueous solvent is directly proportional to its dielectric constant.
During the summer vacations of 1890 and 1891, Walden visited Ostwald at the University of Leipzig, and in September 1891 he defended there a master's thesis on the affinity values of certain organic acids. Ostwald suggested that he stay in Leipzig as a private lecturer, but Walden declined, hoping for a better career in Riga.
In the summer of 1892 he was appointed assistant professor of physical chemistry. A year later he defended his doctorate on osmotic phenomena in sedimentary layers, and in September 1894 became professor of analytical and physical chemistry at the Riga Technical University. He worked there until 1911 and during 1902–1905 was rector of the university. In 1895, Walden made his most remarkable discovery, later named the Walden inversion: that various stereoisomers can be obtained from the same compound via certain exchange reactions involving hydrogen. This topic became the basis for his habilitation thesis, defended in March 1899 at St. Petersburg University.
After that, Walden became interested in electrochemistry of nonaqueous solutions. In 1902, he proposed a theory of autodissociation of inorganic and organic solvents. In 1905, he found a relationship between the maximum molecular conductivity and viscosity of the medium and in 1906, coined the term "solvation". Together with his work on stereochemistry, these results brought him to prominence; in particular, he was considered a candidate for the Nobel Prize in Chemistry in 1913 and 1914.
Walden was also credited as a talented chemistry lecturer. In his memoirs, he wrote: "My audience usually was crowded and the feedback of sympathetic listeners gave me strength ... my lectures I was giving spontaneously, to bring freshness to the subject ... I never considered teaching as a burden".
1896 brought reforms to the Riga Technical University. Whereas previously all teaching was conducted in German and Walden was the only professor giving some courses in Russian, from then on Russian became the official language. This change allowed receiving subsidies from the Russian government and helped the alumni in obtaining positions in Russia. These reforms resulted in another and rather unusual collaboration of Walden with Ostwald: Walden was rebuilding the Chemistry Department, and Ostwald sent him the blueprints of the chemical laboratories in Leipzig as an example. In May 1910, Walden was elected a member of the St. Petersburg Academy of Sciences and in 1911 was invited to Saint Petersburg to lead the Chemical Laboratories of the academy, founded in 1748 by Mikhail Lomonosov. He remained in that position till 1919. As an exception, he was allowed to stay in Riga, where he had better research possibilities, but he traveled almost every week by train to St. Petersburg for the academy meetings and guidance of research. In the period 1911–1915, Walden published 14 articles in the "Proceedings of the Academy of Sciences" on the electrochemistry of nonaqueous solutions. In particular, in 1914 he synthesized the first room-temperature ionic liquid, ethylammonium nitrate (C2H5NH3·NO3), with a melting point of 12 °C.
After 1915, due to the difficulties caused by World War I, political unrest in Russia and then the October Revolution, Walden reduced his research activity and focused on teaching and administrative work, taking numerous leading positions in science. Due to the political unrest in Latvia, Walden emigrated to Germany. He was appointed professor of inorganic chemistry at the University of Rostock, where he worked until his retirement in 1934. In 1924 he was invited back to Riga, where he gave a series of lectures. He was offered leading positions in chemistry in Riga and in St. Petersburg, but declined. Despite his emigration, Walden retained his popularity in Russia, and in 1927 he was appointed as a foreign member of the Russian Academy of Sciences. Later, he also became a member of the Swedish (1928) and Finnish (1932) Academies.
Personal life
Walden's daughter, Antonina Anna Walden (1899–1983), was a music teacher who married Finnish translator and essayist Juho August Hollo. Their son was the Finnish poet and translator Anselm Hollo.
Late years
In his last years, Walden focused on history of chemistry and collected a unique library of over 10,000 volumes. The library and his house were destroyed when the British bombed Rostock in 1942. Walden moved to Berlin and then to Frankfurt am Main, where he became a visiting professor of the history of chemistry at the local university. He met the end of World War II in the French Occupation Zone, cut off from Rostock University, located in the Soviet Zone, and thus left without any source of income.
Walden survived on a modest pension arranged by German chemists, giving occasional lectures in Tübingen and writing memoirs. In 1949, he published his best-known book, History of Chemistry. He died in Gammertingen in 1957, at the age of 93. His memoirs were published only in 1974.
References
Further reading
1863 births
1957 deaths
People from Cēsis Municipality
People from Valmiera county
Baltic-German people from the Russian Empire
Latvian scientists
Chemists from the Russian Empire
Latvian chemists
20th-century German chemists
Inventors from the Russian Empire
20th-century German inventors
Stereochemists
19th-century Latvian people
Saint Petersburg State University alumni
Leipzig University alumni
Riga Technical University alumni
Academic staff of Riga Technical University
Academic staff of the University of Latvia
Full members of the Saint Petersburg Academy of Sciences
Full Members of the Russian Academy of Sciences (1917–1925)
Full Members of the USSR Academy of Sciences
Honorary members of the USSR Academy of Sciences
Latvian emigrants to Germany | Paul Walden | [
"Chemistry"
] | 1,585 | [
"Stereochemistry",
"Stereochemists"
] |
1,564,924 | https://en.wikipedia.org/wiki/International%20Material%20Data%20System | The International Material Data System (IMDS) is a global data repository that contains information on materials used by the automotive industry. Several leading auto manufacturers use the IMDS to maintain data for various reporting requirements.
In the IMDS, all materials present in finished automobile manufacturing are collected, maintained, analysed and archived. IMDS facilitates meeting the obligations placed on automobile manufacturers, and thus on their suppliers, by national and international standards, laws and regulations.
Introduction
The IMDS was originally a collaboration of Audi, BMW, Daimler, EDS (now part of DXC Technology, the system administrator), Ford, Opel, Porsche, VW and Volvo. Since inception the list of participating vehicle manufacturers and suppliers has grown greatly.
Usage
Because it is a computer-based system, IMDS highlights hazardous and controlled substances by comparing entered data with regulatory-originated lists of prohibited substances (GADSL, REACH, ELV, etc.). Hence, OEMs can trace hazardous substances back to the individual part and work with suppliers to reduce, control, or eliminate the hazard.
All substances must be declared in the material data sheet (MDS) of the IMDS to a resolution of 1 gram or better – not just declarable and prohibited substances (e.g. Cr VI / Hg / Pb / Cd). The substances and materials in products must be known in detail so that this information can be delivered by the OEMs to dismantling companies in order to achieve the goals of the ELV Directive.
The basic workflow model of the system is for each supplier to submit data about the parts they sell to their direct customer. When each link in the supply chain submits data per this method, it mimics actual supply chain part flow, preserving customer-to-supplier relations. Data entry in IMDS is frequently a contractual requirement of PPAP which is one part of standard automotive quality systems.
Access and costs
The IMDS is easily accessed through the internet. The basic web browser version of the system is supported by the OEM sponsor's group and provided free of charge to suppliers in the automotive supply chain.
There are several vendors that provide systems allowing compatible IMDS interaction with product lifecycle management, download and upload, data format translation, and other reporting systems.
Changes in 2013 and 2014
2013 – IMDS NT for Design Changes
2014 – IMDS 2020 for New Functions
References
External links
Training Homepage for new IMDS NT Design
GADSL Homepage
End of Life Vehicles Directive 2000/53/EC
REACh
IMDS at AIAG
IMDS Suppliers Group at CLEPA, European Automotive Suppliers' Association
US American Automotive Suppliers' Associations
About IMDS (Brazil)
Automotive standards
Material handling
Automotive industry
European Union directives
Vehicle recycling | International Material Data System | [
"Physics"
] | 559 | [
"Materials",
"Material handling",
"Matter"
] |
1,564,929 | https://en.wikipedia.org/wiki/Marin%20Computer%20Center | Opened in 1977 in Marin County, California, the Marin Computer Center was the world's first public access microcomputer center. The non-profit company was co-created by David Fox (later to become one of Lucasfilm Games' founding members) and author Annie Fox.
MCC (as it was known) initially featured the Atari 2600, an Equinox 100, 9 Processor Technology Sol 20 computers (S-100 bus systems), the Radio Shack Model I and the Commodore PET. In addition to providing computer access to the public, it held classes on the programming language BASIC. Later, it added Apple II and Atari 8-bit computers, for a total of about 40 systems.
The Foxes left MCC in 1981, turning it over to new management, and later to the teens and young adults who helped run it.
See also
Public computer
External links
Marin Computer Center in People's Computers, Nov-Dec 1978
You Want to Open a What? - Article from Creative Computing, November 1984
Electric Eggplant
Scan of 1981 advertisement for Marin Computer Center
Buildings and structures in Marin County, California
History of computing | Marin Computer Center | [
"Technology"
] | 226 | [
"Computing stubs",
"Computers",
"History of computing"
] |
1,565,068 | https://en.wikipedia.org/wiki/HD%2083443 | HD 83443 is an orange dwarf star approximately 133 light-years away in the constellation of Vela. As of 2000, at least one extrasolar planet has been confirmed to be orbiting the star. The star HD 83443 is named Kalausi. The name was selected in the NameExoWorlds campaign by Kenya, during the 100th anniversary of the IAU. The word Kalausi means a very strong whirling column of wind in the Dholuo language.
Planetary system
The planet HD 83443 b was discovered in 2000 by the Geneva Extrasolar Planet Search Team led by Michel Mayor. It has a minimum mass comparable to Saturn's, and its orbit at the time of discovery was one of the shortest known, taking only three days to complete one revolution around the star. This hot Jupiter is likely to be slightly larger than Jupiter in radius.
In 2000, the same year that planet b was found, another planet around HD 83443 was announced by the Geneva Team. The new planet was designated as "HD 83443 c". It had a mass smaller than planet b and a short, very eccentric orbit. Its orbital period, 28.9 days, was especially interesting, because it indicated a 10:1 orbital resonance between the planets. However, a team led by astronomer Paul Butler did not detect any signal indicating the existence of the second planet. New observations by the Geneva team could not detect the signal either and the discovery claim had to be retracted. The origin of the signal, which was "highly significant" in the earlier data, is not yet clear. Another planet, designated HD 83443 c, was discovered in 2022 in a wide, eccentric 22-year orbit. It is suspected that HD 83443 c entered its current orbit due to the inward migration of HD 83443 b.
References
External links
K-type main-sequence stars
Planetary systems with two confirmed planets
Vela (constellation)
CD–42 5452
083443
047202
Kalausi
J09371182-4316198 | HD 83443 | [
"Astronomy"
] | 424 | [
"Vela (constellation)",
"Constellations"
] |
1,565,136 | https://en.wikipedia.org/wiki/Crest%20and%20trough | A crest is a point on a surface wave where the displacement of the medium is at a maximum; it is the highest point of the wave. A trough is the opposite of a crest: the minimum, or lowest point, of the wave.
When the crests and troughs of two sine waves of equal amplitude and frequency meet while in phase with each other, the result is called constructive interference: the magnitudes add, doubling the displacement above and below the line. When they are in antiphase, that is, 180° out of phase, the result is destructive interference: the resulting wave is the undisturbed line, with zero amplitude.
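A minimal numeric check of this in Python (the sample time 0.7 is arbitrary): summing two equal sinusoids in phase doubles the displacement, while summing them in antiphase cancels it.
import math

def superpose(phase_shift, t):
    # Sum of two sine waves of equal amplitude and frequency.
    return math.sin(t) + math.sin(t + phase_shift)

t = 0.7
print(superpose(0.0, t), 2 * math.sin(t))  # in phase: displacement doubles
print(superpose(math.pi, t))               # antiphase: cancels to ~0.0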
See also
Crest factor
Superposition principle
Wave
References
Waves | Crest and trough | [
"Physics"
] | 150 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
1,565,156 | https://en.wikipedia.org/wiki/Wireless%20network%20interface%20controller | A wireless network interface controller (WNIC) is a network interface controller which connects to a wireless network, such as Wi-Fi, Bluetooth, LTE (4G) or 5G, rather than a wired network, such as an Ethernet network. A WNIC, just like other NICs, works on layers 1 and 2 of the OSI model and uses an antenna to communicate via radio waves.
A wireless network interface controller may be implemented as an expansion card and connected using PCI bus or PCIe bus, or connected via USB, PC Card, ExpressCard, Mini PCIe or M.2.
The low cost and ubiquity of the Wi-Fi standard means that many newer mobile computers have a wireless network interface built into the motherboard.
The term is usually applied to IEEE 802.11 adapters; it may also apply to a NIC using protocols other than 802.11, such as one implementing Bluetooth connections.
Modes of operation
An 802.11 WNIC can operate in two modes known as infrastructure mode and ad hoc mode:
Infrastructure mode
In an infrastructure mode network the WNIC needs a wireless access point: all data is transferred using the access point as the central hub. All wireless nodes in an infrastructure mode network connect to an access point. All nodes connecting to the access point must have the same service set identifier (SSID) as the access point. If wireless security is enabled on the access point (such as WEP or WPA), the NIC must have valid authentication parameters in order to connect to the access point.
Ad hoc mode
In an ad hoc mode network the WNIC does not require an access point, but rather can interface with all other wireless nodes directly. All the nodes in an ad hoc network must have the same channel and SSID.
Specifications
The IEEE 802.11 standard sets out low-level specifications for how all 802.11 wireless networks operate. Earlier 802.11 interface controllers are usually only compatible with earlier variants of the standard, while newer cards support both current and old standards.
Specifications commonly used in marketing materials for WNICs include:
Wireless data transfer rates (measured in Mbit/s)
Wireless transmit power (measured in dBm)
Wireless network standards supported, such as 802.11b, 802.11a, 802.11g, 802.11n, 802.11ac, 802.11ax
Most WNICs support one or more of 802.11, Bluetooth and 3GPP (2G, 3G, 4G, 5G) network standards.
Range
Wireless range may be substantially affected by objects in the way of the signal and by the quality of the antenna. Large electrical appliances, such as refrigerators, fuse boxes, metal plumbing, and air conditioning units, can impede a wireless network signal. The theoretical maximum range of IEEE 802.11 is only reached under ideal circumstances; true effective range is typically about half of the theoretical range. Specifically, the maximum throughput speed is only achieved at extremely close range; at the outer reaches of a device's effective range, speed may decrease to around 1 Mbit/s before it drops out altogether. The reason is that wireless devices dynamically negotiate the top speed at which they can communicate without dropping too many data packets.
FullMAC and SoftMAC devices
In an 802.11 WNIC, the MAC Sublayer Management Entity (MLME) can be implemented either in the NIC's hardware or firmware, or in host-based software that is executed on the main CPU. A WNIC that implements the MLME function in hardware or firmware is called a FullMAC WNIC or a HardMAC NIC, and a NIC that implements it in host software is called a SoftMAC NIC.
A FullMAC device hides the complexity of the 802.11 protocol from the main CPU, instead providing an 802.3 (Ethernet) interface; a SoftMAC design implements only the timing-critical part of the protocol in hardware/firmware and the rest on the host.
FullMAC chips are typically used in mobile devices because:
they are easier to integrate into complete products;
power is saved by having a specialized CPU perform the 802.11 processing;
the chip vendor has tighter control of the MLME.
A popular example of a FullMAC chip is the one implemented in the Raspberry Pi 3.
The Linux kernel's mac80211 framework provides capabilities for SoftMAC devices and additional capabilities (such as mesh networking, which is known as the IEEE 802.11s standard) for devices with limited functionality.
FreeBSD also supports SoftMAC drivers.
See also
List of device bandwidths
Wi-Fi operating system support
References
Interface card | Wireless network interface controller | [
"Technology",
"Engineering"
] | 952 | [
"Wireless networking",
"Computer networks engineering"
] |
1,565,267 | https://en.wikipedia.org/wiki/De%20Bruijn%20sequence | In combinatorial mathematics, a de Bruijn sequence of order n on a size-k alphabet A is a cyclic sequence in which every possible length-n string on A occurs exactly once as a substring (i.e., as a contiguous subsequence). Such a sequence is denoted by B(k, n) and has length k^n, which is also the number of distinct strings of length n on A. Each of these distinct strings, when taken as a substring of B(k, n), must start at a different position, because substrings starting at the same position are not distinct. Therefore, B(k, n) must have at least k^n symbols. And since B(k, n) has exactly k^n symbols, de Bruijn sequences are optimally short with respect to the property of containing every string of length n at least once.
The number of distinct de Bruijn sequences B(k, n) is (k!)^(k^(n−1)) / k^n.
For a binary alphabet this is 2^(2^(n−1)−n), leading to the following sequence for positive n: 1, 1, 2, 16, 2048, 67108864, ...
The sequences are named after the Dutch mathematician Nicolaas Govert de Bruijn, who wrote about them in 1946. As he later wrote, the existence of de Bruijn sequences for each order together with the above properties was first proved, for the case of alphabets with two elements, by Camille Flye Sainte-Marie in 1894. The generalization to larger alphabets is due to Tatyana van Aardenne-Ehrenfest and de Bruijn in 1951. Automata for recognizing these sequences are denoted as de Bruijn automata.
In many applications, A = {0,1}.
History
The earliest known example of a de Bruijn sequence comes from Sanskrit prosody where, since the work of Pingala, each possible three-syllable pattern of long and short syllables is given a name, such as 'y' for short–long–long and 'm' for long–long–long. To remember these names, the mnemonic yamātārājabhānasalagām is used, in which each three-syllable pattern occurs starting at its name: 'yamātā' has a short–long–long pattern, 'mātārā' has a long–long–long pattern, and so on, until 'salagām' which has a short–short–long pattern. This mnemonic, equivalent to a de Bruijn sequence on binary 3-tuples, is of unknown antiquity, but is at least as old as Charles Philip Brown's 1869 book on Sanskrit prosody that mentions it and considers it "an ancient line, written by Pāṇini".
In 1894, A. de Rivière raised the question, in an issue of the French problem journal L'Intermédiaire des Mathématiciens, of the existence of a circular arrangement of zeroes and ones of size 2^n that contains all binary sequences of length n. The problem was solved (in the affirmative), along with the count of distinct solutions, by Camille Flye Sainte-Marie in the same year. This was largely forgotten, until Martin in 1934 proved the existence of such cycles for general alphabet size in place of 2, with an algorithm for constructing them. Finally, when in 1944 Kees Posthumus conjectured the count for binary sequences, de Bruijn proved the conjecture in 1946, through which the problem became well-known.
Karl Popper independently describes these objects in his The Logic of Scientific Discovery (1934), calling them "shortest random-like sequences".
Examples
Taking A = {0, 1}, there are two distinct B(2, 3): 00010111 and 11101000, one being the reverse or negation of the other.
Two of the 16 possible B(2, 4) in the same alphabet are 0000100110101111 and 0000111101100101.
Two of the 2048 possible B(2, 5) in the same alphabet are 00000100011001010011101011011111 and 00000101001000111110111001101011.
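These examples can be checked mechanically. The following Python sketch verifies that a string is a cyclic de Bruijn sequence by testing that every length-n window, with wraparound, occurs exactly once:
def is_de_bruijn(seq, k, n):
    if len(seq) != k ** n:
        return False
    doubled = seq + seq[: n - 1]  # simulate the cyclic wraparound
    windows = {doubled[i : i + n] for i in range(len(seq))}
    return len(windows) == k ** n

print(is_de_bruijn("00010111", 2, 3))          # True
print(is_de_bruijn("0000100110101111", 2, 4))  # True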
Construction
The de Bruijn sequences can be constructed by taking a Hamiltonian path of an n-dimensional de Bruijn graph over k symbols (or equivalently, an Eulerian cycle of an (n − 1)-dimensional de Bruijn graph).
An alternative construction involves concatenating together, in lexicographic order, all the Lyndon words whose length divides n.
An inverse Burrows–Wheeler transform can be used to generate the required Lyndon words in lexicographic order.
de Bruijn sequences can also be constructed using shift registers or via finite fields.
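As a sketch of the Lyndon-word construction above, the following Python function iterates over the Lyndon words of length at most n in lexicographic order (a standard necklace-generation iteration), keeps those whose length divides n, and concatenates them; the result is the lexicographically least de Bruijn sequence.
def de_bruijn_lyndon(k, n):
    word, parts = [0], []
    while word:                        # word is the current Lyndon word
        if n % len(word) == 0:
            parts.extend(word)         # keep it if its length divides n
        m = len(word)
        while len(word) < n:
            word.append(word[len(word) - m])  # extend periodically to length n
        while word and word[-1] == k - 1:
            word.pop()                 # strip maximal trailing symbols...
        if word:
            word[-1] += 1              # ...and step to the next Lyndon word
    return "".join(map(str, parts))

print(de_bruijn_lyndon(2, 4))  # 0000100110101111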
Example using de Bruijn graph
Goal: to construct a B(2, 4) de Bruijn sequence of length 2^4 = 16 using an Eulerian cycle of the (n − 1 = 4 − 1 = 3)-dimensional de Bruijn graph.
Each edge in this 3-dimensional de Bruijn graph corresponds to a sequence of four digits: the three digits that label the vertex that the edge is leaving followed by the one that labels the edge. If one traverses the edge labeled 1 from 000, one arrives at 001, thereby indicating the presence of the subsequence 0001 in the de Bruijn sequence. To traverse each edge exactly once is to use each of the 16 four-digit sequences exactly once.
For example, suppose we follow the following Eulerian path through these vertices:
000, 000, 001, 011, 111, 111, 110, 101, 011,
110, 100, 001, 010, 101, 010, 100, 000.
These are the output sequences of length n:
0 0 0 0
_ 0 0 0 1
_ _ 0 0 1 1
This corresponds to the following de Bruijn sequence:
0 0 0 0 1 1 1 1 0 1 1 0 0 1 0 1
The eight vertices appear in the sequence in the following way:
{0 0 0 0} 1 1 1 1 0 1 1 0 0 1 0 1
0 {0 0 0 1} 1 1 1 0 1 1 0 0 1 0 1
0 0 {0 0 1 1} 1 1 0 1 1 0 0 1 0 1
0 0 0 {0 1 1 1} 1 0 1 1 0 0 1 0 1
0 0 0 0 {1 1 1 1} 0 1 1 0 0 1 0 1
0 0 0 0 1 {1 1 1 0} 1 1 0 0 1 0 1
0 0 0 0 1 1 {1 1 0 1} 1 0 0 1 0 1
0 0 0 0 1 1 1 {1 0 1 1} 0 0 1 0 1
0 0 0 0 1 1 1 1 {0 1 1 0} 0 1 0 1
0 0 0 0 1 1 1 1 0 {1 1 0 0} 1 0 1
0 0 0 0 1 1 1 1 0 1 {1 0 0 1} 0 1
0 0 0 0 1 1 1 1 0 1 1 {0 0 1 0} 1
0 0 0 0 1 1 1 1 0 1 1 0 {0 1 0 1}
0} 0 0 0 1 1 1 1 0 1 1 0 0 {1 0 1 ...
... 0 0} 0 0 1 1 1 1 0 1 1 0 0 1 {0 1 ...
... 0 0 0} 0 1 1 1 1 0 1 1 0 0 1 0 {1 ...
...and then we return to the starting point. Each of the eight 3-digit sequences (corresponding to the eight vertices) appears exactly twice, and each of the sixteen 4-digit sequences (corresponding to the 16 edges) appears exactly once.
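A compact Python sketch of this construction (for n ≥ 2) walks an Eulerian circuit of the (n − 1)-dimensional de Bruijn graph using Hierholzer's algorithm; which of the valid sequences it produces depends on the order in which edges are popped.
from itertools import product

def de_bruijn_via_euler(k, n):
    # Vertices are (n-1)-tuples of symbols; each vertex has k outgoing
    # edges, one per symbol.
    out = {v: list(range(k)) for v in product(range(k), repeat=n - 1)}
    stack, path = [(0,) * (n - 1)], []
    while stack:                        # Hierholzer's algorithm
        v = stack[-1]
        if out[v]:
            s = out[v].pop()
            stack.append(v[1:] + (s,))  # traverse the edge labeled s
        else:
            path.append(stack.pop())
    walk = path[::-1]                   # Eulerian circuit as a vertex sequence
    return "".join(str(v[-1]) for v in walk[1:])

seq = de_bruijn_via_euler(2, 4)
print(len(seq))  # 16: the result is one of the 16 possible B(2, 4) sequences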
Example using inverse Burrows–Wheeler transform
Mathematically, an inverse Burrows–Wheeler transform on a word generates a multi-set of equivalence classes consisting of strings and their rotations. These equivalence classes of strings each contain a Lyndon word as a unique minimum element, so the inverse Burrows–Wheeler transform can be considered to generate a set of Lyndon words. It can be shown that if we perform the inverse Burrows–Wheeler transform on a word consisting of the size-k alphabet repeated k^(n−1) times (so that it will produce a word the same length as the desired de Bruijn sequence), then the result will be the set of all Lyndon words whose length divides n. It follows that arranging these Lyndon words in lexicographic order will yield a de Bruijn sequence B(k, n), and that this will be the first de Bruijn sequence in lexicographic order. The following method can be used to perform the inverse Burrows–Wheeler transform, using its standard permutation:
Sort the characters in the string w, yielding a new string w′
Position the string w′ above the string w, and map each letter's position in w′ to its position in w while preserving order. This process defines the Standard Permutation.
Write this permutation in cycle notation with the smallest position in each cycle first, and the cycles sorted in increasing order.
For each cycle, replace each number with the corresponding letter from string w′ in that position.
Each cycle has now become a Lyndon word, and they are arranged in lexicographic order, so dropping the parentheses yields the first de Bruijn sequence.
For example, to construct the smallest B(2, 4) de Bruijn sequence of length 2^4 = 16, repeat the alphabet (ab) 8 times, yielding w = abababababababab. Sort the characters in w, yielding w′ = aaaaaaaabbbbbbbb. Position w′ above w as shown, and map each element in w′ to the corresponding element in w by drawing a line. Number the columns as shown so we can read the cycles of the permutation:
Starting from the left, the cycles of the standard permutation are: (1) (2 3 5 9) (4 7 13 10) (6 11) (8 15 14 12) (16).
Then, replacing each number by the corresponding letter in w′ from that column yields: (a) (aaab) (aabb) (ab) (abbb) (b).
These are all of the Lyndon words whose length divides 4, in lexicographic order, so dropping the parentheses gives B(2, 4) = aaaabaabbababbbb.
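The steps above can be carried out mechanically. This Python sketch computes the standard permutation, reads off its cycles smallest-first, and reproduces the worked example:
def ibwt_de_bruijn(w):
    w_prime = sorted(w)                               # step 1: sort the string
    perm = sorted(range(len(w)), key=lambda i: w[i])  # step 2: w' position -> w position
    seen, out = [False] * len(w), []
    for start in range(len(w)):                       # step 3: cycles, smallest first
        i = start
        while not seen[i]:
            seen[i] = True
            out.append(w_prime[i])                    # step 4: numbers -> letters
            i = perm[i]
    return "".join(out)                               # drop the parentheses

print(ibwt_de_bruijn("ab" * 8))  # aaaabaabbababbbb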
Algorithm
The following Python code calculates a de Bruijn sequence, given k and n, based on an algorithm from Frank Ruskey's Combinatorial Generation.
from typing import Iterable

def de_bruijn(k: Iterable[str] | int, n: int) -> str:
    """de Bruijn sequence for alphabet k
    and subsequences of length n.
    """
    # Two kinds of alphabet input: an integer expands
    # to a list of digit strings as the alphabet...
    if isinstance(k, int):
        alphabet = list(map(str, range(k)))
    else:
        # ...while any other sequence of symbols is used as given.
        alphabet = k
        k = len(k)

    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in sequence)
print(de_bruijn(2, 3))
print(de_bruijn("abcd", 2))
which prints
00010111
aabacadbbcbdccdd
Note that these sequences are understood to "wrap around" in a cycle. For example, the first sequence contains 110 and 100 in this fashion.
Uses
de Bruijn cycles are of general use in neuroscience and psychology experiments that examine the effect of stimulus order upon neural systems, and can be specially crafted for use with functional magnetic resonance imaging.
Angle detection
The symbols of a de Bruijn sequence written around a circular object (such as a wheel of a robot) can be used to identify its angle by examining the n consecutive symbols facing a fixed point. This angle-encoding problem is known as the "rotating drum problem". Gray codes can be used as similar rotary positional encoding mechanisms, a method commonly found in rotary encoders.
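A sketch of the decoding step in Python, using the B(2, 3) sequence from the Examples section as the rim pattern; the window read by the sensor pins down a unique position, and hence a unique angle.
def angle_from_window(sequence, window):
    n = len(window)
    doubled = sequence + sequence[: n - 1]  # handle the cyclic wraparound
    index = doubled.index(window)           # unique, by the de Bruijn property
    return 360.0 * index / len(sequence)

rim = "00010111"  # B(2, 3)
print(angle_from_window(rim, "101"))  # 135.0 degrees (window starts at position 3 of 8)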
Finding least- or most-significant set bit in a word
A de Bruijn sequence can be used to quickly find the index of the least significant set bit ("right-most 1") or the most significant set bit ("left-most 1") in a word using bitwise operations and multiplication. The following example uses a de Bruijn sequence to determine the index of the least significant set bit (equivalent to counting the number of trailing '0' bits) in a 32 bit unsigned integer:
uint8_t lowestBitIndex(uint32_t v)
{
static const uint8_t BitPositionLookup[32] = // hash table
{
0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
return BitPositionLookup[((uint32_t)((v & -v) * 0x077CB531U)) >> 27];
}
The lowestBitIndex() function returns the index of the least-significant set bit in v, or zero if v has no set bits. The constant 0x077CB531U in the expression is the B (2, 5) sequence 0000 0111 0111 1100 1011 0101 0011 0001 (spaces added for clarity). The operation (v & -v) zeros all bits except the least-significant bit set, resulting in a new value which is a power of 2. This power of 2 is multiplied (arithmetic modulo 232) by the de Bruijn sequence, thus producing a 32-bit product in which the bit sequence of the 5 MSBs is unique for each power of 2. The 5 MSBs are shifted into the LSB positions to produce a hash code in the range [0, 31], which is then used as an index into hash table BitPositionLookup. The selected hash table value is the bit index of the least significant set bit in v.
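The table and the constant fit together because shifting the constant left by i bits leaves a distinct 5-bit prefix for each i. A short Python sketch regenerates the table and mirrors the C routine:
SEQ = 0x077CB531  # the B(2, 5) constant used above

table = [0] * 32
for i in range(32):
    table[((SEQ << i) & 0xFFFFFFFF) >> 27] = i  # reproduces BitPositionLookup

def lowest_bit_index(v):
    return table[(((v & -v) * SEQ) & 0xFFFFFFFF) >> 27]

print(lowest_bit_index(0b10100))  # 2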
The following example determines the index of the most significant bit set in a 32 bit unsigned integer:
uint32_t keepHighestBit(uint32_t n)
{
n |= (n >> 1);
n |= (n >> 2);
n |= (n >> 4);
n |= (n >> 8);
n |= (n >> 16);
return n - (n >> 1);
}
uint8_t highestBitIndex(uint32_t v)
{
static const uint8_t BitPositionLookup[32] = { // hash table
0, 1, 16, 2, 29, 17, 3, 22, 30, 20, 18, 11, 13, 4, 7, 23,
31, 15, 28, 21, 19, 10, 12, 6, 14, 27, 9, 5, 26, 8, 25, 24,
};
return BitPositionLookup[(keepHighestBit(v) * 0x06EB14F9U) >> 27];
}
In the above example an alternative de Bruijn sequence (0x06EB14F9U) is used, with corresponding reordering of array values. The choice of this particular de Bruijn sequence is arbitrary, but the hash table values must be ordered to match the chosen de Bruijn sequence. The keepHighestBit() function zeros all bits except the most-significant set bit, resulting in a value which is a power of 2, which is then processed as in the previous example.
Brute-force attacks on locks
A de Bruijn sequence can be used to shorten a brute-force attack on a PIN-like code lock that does not have an "enter" key and accepts the last n digits entered. For example, a digital door lock with a 4-digit code (each digit having 10 possibilities, from 0 to 9) would have B(10, 4) solutions, with length 10^4 = 10,000. Therefore, only at most 10,000 + 3 = 10,003 presses are needed to open the lock (as the solutions are cyclic, the final three presses repeat the first three digits), whereas trying all codes separately would require 4 × 10,000 = 40,000 presses.
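A sketch of the attack sequence in Python, reusing the de_bruijn() function from the Algorithm section above; the first three digits are appended at the end to unroll the cyclic wraparound for a non-cyclic keypad.
presses = de_bruijn(10, 4)  # 10,000 digits covering every 4-digit code cyclically
presses += presses[:3]      # unroll the wraparound: 10,000 + 3 presses in total
assert len(presses) == 10003
print(presses[:17])         # 00001000200030004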
f-fold de Bruijn sequences
An f-fold n-ary de Bruijn sequence is an extension of the notion of an n-ary de Bruijn sequence, such that a sequence of length f·k^n (where k is the alphabet size) contains every possible subsequence of length n exactly f times. For example, for n = 2 the cyclic sequences 11100010 and 11101000 are two-fold binary de Bruijn sequences. The numbers of two-fold de Bruijn sequences are known only for small values of n.
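A quick Python check of the two examples, counting every cyclic length-n window of a binary sequence:
from collections import Counter

def is_f_fold_binary(seq, n, f):
    doubled = seq + seq[: n - 1]  # cyclic windows
    counts = Counter(doubled[i : i + n] for i in range(len(seq)))
    return len(counts) == 2 ** n and all(c == f for c in counts.values())

print(is_f_fold_binary("11100010", 2, 2))  # True
print(is_f_fold_binary("11101000", 2, 2))  # True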
de Bruijn torus
A de Bruijn torus is a toroidal array with the property that every k-ary m-by-n matrix occurs exactly once.
Such a pattern can be used for two-dimensional positional encoding in a fashion analogous to that described above for rotary encoding. Position can be determined by examining the m-by-n matrix directly adjacent to the sensor, and calculating its position on the de Bruijn torus.
de Bruijn decoding
Computing the position of a particular unique tuple or matrix in a de Bruijn sequence or torus is known as the de Bruijn decoding problem. Efficient decoding algorithms exist for special, recursively constructed sequences and extend to the two-dimensional case. de Bruijn decoding is of interest, e.g., in cases where large sequences or tori are used for positional encoding.
See also
Normal number
Linear-feedback shift register
n-sequence
BEST theorem
Superpermutation
Notes
References
Reprinted in Wardhaugh, Benjamin, ed. (2012), A Wealth of Numbers: An Anthology of 500 Years of Popular Mathematics Writing, Princeton University Press, pp. 139–144.
External links
De Bruijn sequence
CGI generator
Applet generator
Javascript generator and decoder. Implementation of J. Tuliani's algorithm.
Door code lock
Minimal arrays containing all sub-array combinations of symbols: de Bruijn sequences and tori
http://debruijnsequence.org has many kinds of de Bruijn sequences.
Binary sequences
Enumerative combinatorics
Articles with example Python (programming language) code | De Bruijn sequence | [
"Mathematics"
] | 3,845 | [
"Enumerative combinatorics",
"Combinatorics"
] |
1,565,386 | https://en.wikipedia.org/wiki/HD%20108147 | HD 108147, also known as Tupã, is a 7th magnitude star in the constellation of Crux, in direct line with and very near to the bright star Acrux, or Alpha Crucis. It is either a yellow-white or yellow dwarf (the dividing line is arbitrary, and the colour difference comes from the classification rather than a real difference), slightly brighter and more massive than the Sun. The spectral type is F8 V or G0 V. The star is also younger than the Sun. Due to its distance, about 126 light years, it is too dim to be visible with the unaided eye; with binoculars it is an easy target. However, due to its southerly location it is not visible in the northern hemisphere except from the tropics.
An extrasolar planet was detected orbiting it in 2000 by the Geneva Extrasolar Planet Search Team. This exoplanet is "a gas giant smaller than Jupiter that screams around its primary [star] in 11 days at only 0.1 AU." This is much closer than the orbit of Mercury in the Solar System.
In December 2019, the International Astronomical Union announced that the star will bear the name Tupã, after the god of the Guarani peoples of Paraguay. The name was the result of a contest run in Paraguay by the Centro Paraguayo de Informaciones Astronómicas, in conjunction with the IAU100 NameExoWorlds 2019 global contest.
It should not be confused with HD 107148, which also has an extrasolar planet discovered in 2006 in the Virgo constellation.
See also
List of extrasolar planets
References
External links
F-type main-sequence stars
G-type main-sequence stars
108147
060644
Crux
Planetary systems with one confirmed planet
Durchmusterung objects | HD 108147 | [
"Astronomy"
] | 363 | [
"Crux",
"Constellations"
] |
1,565,414 | https://en.wikipedia.org/wiki/Sunshine%20Project | The Sunshine Project was an international NGO dedicated to upholding prohibitions against biological warfare and, particularly, to preventing military abuse of biotechnology. It was directed by Edward Hammond.
With offices in Austin, Texas, and Hamburg, Germany, the Sunshine Project worked by exposing research on biological and chemical weapons. Typically, it accessed documents under the Freedom of Information Act and other open records laws, publishing reports and encouraging action to reduce the risk of biological warfare. It tracked the construction of high containment laboratory facilities and the dual-use activities of the U.S. biodefense program. Another focus was on documenting government-sponsored research and development of incapacitating "non-lethal" weapons, such as the chemical used by Russia to end the Moscow theater hostage crisis in 2002. The Sunshine Project was also active in meetings of the Biological Weapons Convention, the main international treaty prohibiting biological warfare.
An announcement was posted on The Sunshine Project website, "As of 1 February 2008, the Sunshine Project is suspending its operations", due to a lack of funding. Its website remained online for some time after this date and could be used as an archive of its activities and publications from 2000 through 2008. However, as of October 2013 the Sunshine Project website was offline. The domain for the website was then reappropriated by a Thai reforestation volunteer organization until September 2023. It now redirects to the internet pornography website 33porn.
Biological weapons safety
The Sunshine Project
Biosafety Bites (v.2) #14 6 June 2006
External links
The Sun Sets on the Sunshine Project
References
Biological warfare | Sunshine Project | [
"Biology"
] | 324 | [
"Biological warfare"
] |
1,565,482 | https://en.wikipedia.org/wiki/Vortex%20ring%20state | The vortex ring state (VRS) is a dangerous aerodynamic condition that may arise in helicopter flight, when a vortex ring system engulfs the rotor, causing severe loss of lift. Often the term settling with power is used as a synonym, e.g., in Australia, the UK, and the US, but not in Canada, which uses the latter term for a different phenomenon.
A vortex ring state sets in when the airflow around a helicopter's main rotor assumes a rotationally symmetrical form over the tips of the blades, supported by a laminar flow over the blade tips, and a countering upflow of air outside and away from the rotor. In this condition, the rotor falls into a new topological state of the surrounding flow field, induced by its own downwash, and suddenly loses lift. Since vortex rings are a surprisingly stable fluid dynamical phenomenon (a form of topological soliton), the best way to recover from them is to steer laterally clear of them in order to re-establish lift, and to break them up using maximum engine power in order to establish turbulence.
This is also why the condition is often mistaken for "settling with insufficient power": high-powered maneuvers can both induce a vortex ring state in free air and then, at low altitude during landing, possibly break it. If sufficient power is not available to maintain the airfoil of the rotor at a stalled condition while generating sufficient lift, the aircraft will not be able to stay aloft until the vortex ring state dissipates, and will crash.
This condition also occurs with tiltrotors, and it was responsible for an accident involving a V-22 Osprey in 2000. Vortex ring state caused the loss of a heavily modified MH-60 helicopter during Operation Neptune Spear, the 2011 raid in which Osama bin Laden was killed.
Description
Because the blades are rotating about a central axis, the speed of each airfoil is lowest at the point where it connects to the hub-and-grip assembly. This fundamental physical reality means that the innermost portion of each blade has an inherent vulnerability to stalling.
In forward flight with translational lift, there is no upward flow (upflow) of air in the hub area. As forward airspeed decreases and vertical descent rates increase, an upflow begins simply because there are no airfoil surfaces in the area of the hub, mast and blade-grip assembly.
Then, as the volume of upflow increases in the central region (i.e. between the hub and the innermost edges of the airfoils), the induced flow (air pulled or "induced" downwards through the rotor system) of the inner blade sections is overcome. This causes the innermost portions of the blades to begin to stall.
As the inner blade sections stall, a second set of vortices, similar to the rotor-tip vortices, begins to form in and around the center of the rotor system. This, combined with the outer set of vortices, results in severe loss of lift. The failure of a helicopter pilot to recognize and react to the condition can lead to high descent rates and catastrophic ground impact.
Occurrence
A helicopter normally encounters this condition when attempting to hover out-of-ground-effect (OGE) without maintaining precise altitude control, and while making downwind or steep, powered approaches when the airspeed is below Effective Translational Lift (ETL).
Detection and correction
The signs of VRS are a vibration in the main rotor system followed by an increasing sink rate and possibly a decrease of cyclic authority.
In single rotor helicopters, the vortex ring state is traditionally corrected by slightly lowering the collective to regain cyclic authority and using the cyclic control to apply lateral motion, often pitching the nose down to establish forward flight. In tandem-rotor helicopters, recovery is accomplished through lateral cyclic or pedal input or both. The aircraft will fly out of the vortex ring into "clean air", and will be able to regain lift.
Another correction, which has gained popularity in recent years as the Vuichard Recovery Technique, was taught by Claude Vuichard, a Federal Office for Civil Aviation (FOCA) inspector in Switzerland. This technique uses a combination of all three controls together to reduce altitude loss and recover more quickly: apply cyclic in the direction of tail rotor thrust, increase the collective to climb power, and coordinate with the power pedal to maintain heading (cross controls). Recovery is complete when the rotor disc reaches the upwind part of the vortex.
Powering out of vortex ring state
It is possible to power out of vortex ring state, but this requires having about twice the power it takes to hover. Only one full-scale helicopter, the Sikorsky S-64 Skycrane, is documented as being able to do this, when unladen.
Pilot or operator reaction
Helicopter pilots are most commonly taught to avoid VRS by monitoring their rates of descent at lower airspeeds. When encountering VRS, pilots are taught to apply forward cyclic to fly out of the condition and/or to lower collective pitch. While transitioning to forward or lateral flight will alleviate the condition by itself, lowering the collective to reduce the power demand decreases the size of the vortices and reduces the amount of time required to be free of the condition. However, since the condition often occurs near the ground, lowering the collective may not be an option; a loss of altitude will occur in proportion to the rate of descent developed before beginning the recovery. In some cases, vortex ring state is encountered and allowed to advance to the point that the pilot may severely lose cyclic authority due to the disrupted airflow. In these cases, the pilot's only recourse may be to enter an autorotation to break the rotor system free of its vortex ring state.
Tandem rotor helicopters
In a tandem rotor helicopter, forward cyclic will not arrest the rate of descent caused by VRS. In such a helicopter, which utilizes differential collective pitch in order to gain airspeed, lateral cyclic inputs must be made accompanied by pedal inputs in order to slide horizontally out of the vortex ring state's disturbed air.
Radio control multirotors
Radio controlled multirotors (common on drones) are subject to normal rotorcraft aerodynamics, including vortex ring state. Frame design, size, and power affect the likelihood of entering the state and recovering from it. Multirotors that do not have altitude hold are also more likely to succumb to operator error, where the pilot descends too fast, producing the upwash at the rotor hubs that can lead to vortex ring state. Those that are equipped with that feature, on the other hand, tend to control their descent automatically and can usually (but not always) avoid the dangerous condition.
See also
References
External links
Vortex ring state FAA Helicopter Flying Handbook
Free-Vortex Wake Calculations of Helicopter Rotors and Tilt-Rotors Operating-In and Transitioning Through the Vortex Ring State
Dispelling the Myth of the MV-22 Archive
Vortex Ring on SKYbrary
Vuichard Recovery Technique - How to escape a Vortex Ring State - Video showing recovery technique, and visualisation using water spray.
Helicopter aerodynamics
Aviation risks
Vortices | Vortex ring state | [
"Chemistry",
"Mathematics"
] | 1,463 | [
"Dynamical systems",
"Vortices",
"Fluid dynamics"
] |
1,565,519 | https://en.wikipedia.org/wiki/Larder | A larder is a cool area for storing food prior to use. Originally, it was where raw meat was larded—covered in fat—to be preserved. By the 18th century, the term had expanded: at that point, a dry larder was where bread, pastry, milk, butter, or cooked meats were stored. Larders were commonplace in houses before the widespread use of the refrigerator.
Stone larders were designed to keep cold in the hottest weather. They had slate or marble shelves two or three inches thick, wedged into thick stone walls. Fish or vegetables were laid directly onto the shelves and covered with muslin, or handfuls of wet rushes were sprinkled under and around them.
Essential qualities
Cool, dry, and well-ventilated.
Usually on the shady side of the house.
No fireplaces or hot flues in any of the adjoining walls.
Might have a door to an outside yard.
Had windows with wire gauze in them instead of glass.
Description
In the northern hemisphere, most houses would be arranged to have their larders and kitchens on the north or west side of the house where they received the least amount of sun. In Australia and New Zealand, larders were placed on the south or east sides of the house for the same reason.
Many larders have small, unglazed windows with window openings covered in fine mesh. This allows free circulation of air without allowing flies to enter. Many larders also have tiled or painted walls to simplify cleaning. Older larders, and especially those in larger houses, have hooks in the ceiling to hang joints of meat.
Etymology
Middle English (denoting a store of meat): from Old French lardier, from medieval Latin lardarium, from laridum.
History
In medieval households, the word "larder" referred both to an office responsible for fish, jams, and meat, and to the room in which these commodities were kept. It was headed by a larderer. The Scots term for larder was spence, which referred specifically to a place from which stores or food were distributed; accordingly, in Scotland larderers (also pantlers and cellarers) were known as spencers.
The office generally was subordinated to the kitchen and existed as a separate office only in larger households. It was closely connected to other offices of the kitchen, such as the saucery and the scullery.
Larders were used by the Indus Valley civilization to store bones of goats, oxen, and sheep. These larders were made of large clay pots.
Animal larders
Places where animals store food for later consumption are sometimes referred to as 'larders', a well-known example being the hoards of seeds and nuts hidden by squirrels to provide a store of fresh food during the leaner months of the year.
For alligators and crocodiles, larders are underwater storage places for their fresh kills until such time as they wish to consume the carcass when its flesh is rotten. These larders are usually dug into the side of a land bank, or wedged under a log or tree root.
See also
Food storage
Root cellar
References
Bibliography
Halliday, Tim, gen. ed. (1994). Animal Behavior. Oklahoma: UOP.
Rooms
Food preservation
Food storage | Larder | [
"Engineering"
] | 702 | [
"Rooms",
"Architecture"
] |
1,565,547 | https://en.wikipedia.org/wiki/Ground%20resonance | Ground resonance is an imbalance in the rotation of a helicopter rotor when the blades become bunched up on one side of their rotational plane and cause an oscillation in phase with the frequency of the rocking of the helicopter on its landing gear. The effect is similar to the behavior of a washing machine when the clothes are concentrated in one place during the spin cycle. It occurs when the landing gear is prevented from freely moving about on the horizontal plane, typically when the aircraft is on the ground.
Causes and consequences
Articulated rotor systems with drag hinges allow each blade to advance or lag in its rotation to compensate for the stress on the blade caused by the acceleration and deceleration of the rotor hub (due to momentum conservation). When the spacing of the blades becomes irregular, it shifts the rotor's center of mass from the axis of rotation, which causes an oscillation. When the airframe begins to rock back and forth from the oscillation, the oscillations can reinforce each other and cause the rotor's center of gravity to spiral away from the axis of rotation to a point beyond the compensating ability of the damping system.
Ground resonance is usually precipitated by a hard landing or an asymmetrical ground contact, and is more likely to occur when components of the landing gear or damping system are improperly maintained, such as the drag hinge dampers, oleo struts, or wheel tire pressure. Under extreme conditions, the initial shock can cause violent oscillations that quickly build and result in catastrophic damage to the entire airframe. In some cases the airframe is destroyed, with body panels, fuel tanks, and engines torn away, even at normal rotor speed.
Mitigation
Proper maintenance of the helicopter's damping system components can prevent ground resonance from taking hold. When it does occur, recovery is often possible if action is taken quickly. If sufficient rotor RPM exists, immediate takeoff can restore rotor balance by allowing the airframe to move freely and dampen the oscillation. Complete shutdown may be sufficient if rotor RPM is very low during a ground resonance incident.
See also
Helicopter flight controls
Helicopter pilotage
Helicopter rotor
Aeronautical engineering
References
Basic Helicopter Handbook, US Department of Transportation, Federal Aviation Administration
External links
Video of a Brazilian rescue AS350BA breaking apart after landing due to ground resonance.
Chinook helicopter with wheeled landing gear ground resonance - side view video
Chinook helicopter with wheeled landing gear ground resonance - rear view video
Helicopter aerodynamics
Waves
Aviation risks | Ground resonance | [
"Physics"
] | 518 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
1,565,602 | https://en.wikipedia.org/wiki/HD%2075289 | HD 75289 is a faint double star in the southern constellation of Vela. The primary component has a yellow hue and an apparent visual magnitude of 6.35. Under exceptionally good circumstances it might be visible to the unaided eye; however, usually binoculars are needed. The pair are located at a distance of 95 light years from the Sun based on parallax, and are drifting further away with a radial velocity of +10 km/s.
The brighter member, component A, is a G-type main-sequence star like the Sun with a stellar classification of G0V. In 1982 it was classified as a supergiant, but this proved erroneous. It has an age comparable to the Sun and is considered metal-rich, with a greater abundance of heavier elements compared to the Sun. The star has 14% more mass than the Sun and a 30% greater girth. It is spinning with a projected rotational velocity of 3 km/s, giving it a ~16 day rotation period. The star is radiating double the luminosity of the Sun from its photosphere at an effective temperature of 6,184 K.
In 2004, a co-moving stellar companion was identified, based on an earlier suggestion from 2001. Designated component B, this red dwarf star lies at an angular separation of , corresponding to a projected separation of . However, the radial distance between the stars is unknown, so they are probably further apart. In any case, one revolution around the primary would take thousands of years to complete. The study that found the red dwarf also rules out any further stellar companions beyond 140 AU and massive brown dwarf companions from 400 AU up to 2,000 AU.
Planetary system
In 1999, an exoplanet, HD 75289 b, with half the mass of Jupiter was detected orbiting the primary by the radial velocity method. This exoplanet is a typical hot Jupiter, taking only about 3.51 days to complete one orbit at a distance of 0.0482 AU.
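The quoted period and orbital distance can be cross-checked against Kepler's third law. Below is a minimal Python sketch using the stellar mass and separation given above; the physical constants are standard values, and the small mismatch with the quoted 3.51-day period is consistent with rounding in the mass and separation.

```python
import math

# Kepler's third law: P = 2*pi * sqrt(a^3 / (G*M))
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11        # one astronomical unit in metres
M_SUN = 1.989e30     # one solar mass in kilograms

a = 0.0482 * AU      # orbital distance of HD 75289 b (from this article)
M = 1.14 * M_SUN     # mass of the host star (from this article)

period_s = 2 * math.pi * math.sqrt(a**3 / (G * M))
print(period_s / 86400)  # ~3.6 days, close to the quoted 3.51-day period
```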
See also
List of exoplanets discovered before 2000 - HD 75289 b
References
External links
SIMBAD star entry, planet entry, component B entry
G-type main-sequence stars
M-type main-sequence stars
Planetary systems with one confirmed planet
Binary stars
Vela (constellation)
CD=−41 4507
075289
043177
3497
J08474038-4144119 | HD 75289 | [
"Astronomy"
] | 493 | [
"Vela (constellation)",
"Constellations"
] |
1,565,639 | https://en.wikipedia.org/wiki/GABAA%20receptor | The GABAA receptor (GABAAR) is an ionotropic receptor and ligand-gated ion channel. Its endogenous ligand is γ-aminobutyric acid (GABA), the major inhibitory neurotransmitter in the central nervous system. Accurate regulation of GABAergic transmission through appropriate developmental processes, specificity to neural cell types, and responsiveness to activity is crucial for the proper functioning of nearly all aspects of the central nervous system (CNS).
Upon opening, the GABAA receptor on the postsynaptic cell is selectively permeable to chloride ions (Cl−) and, to a lesser extent, bicarbonate ions (HCO3−).
GABAAR are members of the ligand-gated ion channel receptor superfamily, which is a chloride channel family with a dozen or more heteropentameric subtypes and 19 distinct subunits. These subtypes have distinct brain regional and subcellular localization, age-dependent expression, and the ability to undergo plastic alterations in response to experience, including drug exposure.
GABAAR is not just the target of agonist depressants and antagonist convulsants, but most GABAAR medicines also act at additional (allosteric) binding sites on GABAAR proteins. Some sedatives and anxiolytics, such as benzodiazepines and related medicines, act on GABAAR subtype-dependent extracellular domain sites. Alcohols and neurosteroids, among other general anesthetics, act at GABAAR subunit-interface transmembrane locations. High anesthetic dosages of ethanol act on GABAAR subtype-dependent transmembrane domain locations. Ethanol acts at GABAAR subtype-dependent extracellular domain locations at low intoxication concentrations. Thus, GABAAR subtypes have pharmacologically distinct receptor binding sites for a diverse range of therapeutically significant neuropharmacological drugs.
Depending on the membrane potential and the ionic concentration difference, this can result in ionic fluxes across the pore. If the membrane potential is higher than the equilibrium potential (also known as the reversal potential) for chloride ions, chloride will flow into the cell when the receptor is activated. This causes an inhibitory effect on neurotransmission by diminishing the chance of a successful action potential occurring at the postsynaptic cell. The reversal potential of the GABAA-mediated inhibitory postsynaptic potential (IPSP) in normal solution is −70 mV, contrasting with the GABAB IPSP (−100 mV).
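The −70 mV figure can be reproduced from the Nernst equation for chloride. The sketch below is illustrative only: the intra- and extracellular chloride concentrations are assumed typical textbook values for a mature neuron, not figures from this article, and the small bicarbonate permeability is ignored.

```python
import math

# Nernst equation: E = (R*T / (z*F)) * ln([X]_out / [X]_in)
R = 8.314       # gas constant, J mol^-1 K^-1
T = 310.0       # body temperature, K
F = 96485.0     # Faraday constant, C mol^-1
z = -1          # valence of the chloride anion

cl_out = 120.0  # mM, assumed extracellular chloride (typical textbook value)
cl_in = 8.0     # mM, assumed intracellular chloride in a mature neuron

E_cl = (R * T) / (z * F) * math.log(cl_out / cl_in)
print(round(E_cl * 1000))  # about -72 mV, near the quoted -70 mV reversal
```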
The active site of the GABAA receptor is the binding site for GABA and several drugs such as muscimol, gaboxadol, and bicuculline. The protein also contains a number of different allosteric binding sites which modulate the activity of the receptor indirectly. These allosteric sites are the targets of various other drugs, including the benzodiazepines, nonbenzodiazepines, neuroactive steroids, barbiturates, alcohol (ethanol), inhaled anaesthetics, kavalactones, cicutoxin, and picrotoxin, among others.
Much like the GABAA receptor, the GABAB receptor is an obligatory heterodimer consisting of GABAB1 and GABAB2 subunits. These subunits include an extracellular Venus Flytrap domain (VFT) and a transmembrane domain containing seven α-helices (7TM domain). These structural components play a vital role in intricately modulating neurotransmission and interactions with drugs.
Target for benzodiazepines
The ionotropic GABAA receptor protein complex is also the molecular target of the benzodiazepine class of tranquilizer drugs. Benzodiazepines do not bind to the same receptor site on the protein complex as does the endogenous ligand GABA (whose binding site is located between α- and β-subunits), but bind to distinct benzodiazepine binding sites situated at the interface between the α- and γ-subunits of α- and γ-subunit containing GABAA receptors. While the majority of GABAA receptors (those containing α1-, α2-, α3-, or α5-subunits) are benzodiazepine sensitive, there exists a minority of GABAA receptors (α4- or α6-subunit containing) which are insensitive to classical 1,4-benzodiazepines, but instead are sensitive to other classes of GABAergic drugs such as neurosteroids and alcohol. In addition peripheral benzodiazepine receptors exist which are not associated with GABAA receptors. As a result, the IUPHAR has recommended that the terms "BZ receptor", "GABA/BZ receptor" and "omega receptor" no longer be used and that the term "benzodiazepine receptor" be replaced with "benzodiazepine site". Benzodiazepines like diazepam and midazolam act as positive allosteric modulators for GABAA receptors. When these receptors are activated, there's a rise in intracellular chloride levels, resulting in cell membrane hyperpolarization and decreased excitation.
In order for GABAA receptors to be sensitive to the action of benzodiazepines they need to contain an α and a γ subunit, between which the benzodiazepine binds. Once bound, the benzodiazepine locks the GABAA receptor into a conformation where the neurotransmitter GABA has much higher affinity for the GABAA receptor, increasing the frequency of opening of the associated chloride ion channel and hyperpolarising the membrane. This potentiates the inhibitory effect of the available GABA leading to sedative and anxiolytic effects.
Different benzodiazepines have different affinities for GABAA receptors made up of different collection of subunits, and this means that their pharmacological profile varies with subtype selectivity. For instance, benzodiazepine receptor ligands with high activity at the α1 and/or α5 tend to be more associated with sedation, ataxia and amnesia, whereas those with higher activity at GABAA receptors containing α2 and/or α3 subunits generally have greater anxiolytic activity. Anticonvulsant effects can be produced by agonists acting at any of the GABAA subtypes, but current research in this area is focused mainly on producing α2-selective agonists as anticonvulsants which lack the side effects of older drugs such as sedation and amnesia.
The binding site for benzodiazepines is distinct from the binding site for barbiturates and GABA on the GABAA receptor, and also produces different effects on binding, with the benzodiazepines increasing the frequency of the chloride channel opening, while barbiturates increase the duration of chloride channel opening when GABA is bound. Since these are separate modulatory effects, they can both take place at the same time, and so the combination of benzodiazepines with barbiturates is strongly synergistic, and can be dangerous if dosage is not strictly controlled.
Also note that some GABAA agonists such as muscimol and gaboxadol do bind to the same site on the GABAA receptor complex as GABA itself, and consequently produce effects which are similar but not identical to those of positive allosteric modulators like benzodiazepines.
Structure and function
Structural understanding of the GABAA receptor was initially based on homology models, obtained using crystal structures of homologous proteins like Acetylcholine binding protein (AChBP) and nicotinic acetylcholine (nACh) receptors as templates. The much sought structure of a GABAA receptor was finally resolved, with the disclosure of the crystal structure of human β3 homopentameric GABAA receptor.
Whilst this was a major development, the majority of GABAA receptors are heteromeric and the structure did not provide any details of the benzodiazepine binding site. This was finally elucidated in 2018 by the publication of a high resolution cryo-EM structure of rat α1β1γ2S receptor and human α1β2γ2 receptor bound with GABA and the neutral benzodiazepine flumazenil.
GABAA receptors are pentameric transmembrane receptors which consist of five subunits arranged around a central pore. Each subunit comprises four transmembrane domains with both the N- and C-terminus located extracellularly. The receptor sits in the membrane of its neuron, usually localized at a synapse, postsynaptically. However, some isoforms may be found extrasynaptically. When vesicles of GABA are released presynaptically and activate the GABA receptors at the synapse, this is known as phasic inhibition. However, GABA escaping from the synaptic cleft can activate receptors on presynaptic terminals or at neighbouring synapses on the same or adjacent neurons (a phenomenon termed 'spillover'), and this, together with the constant low GABA concentration in the extracellular space, results in persistent activation of the GABAA receptors, known as tonic inhibition.
The ligand GABA is the endogenous compound that causes this receptor to open; once bound to GABA, the protein receptor changes conformation within the membrane, opening the pore in order to allow chloride anions (Cl−) and, to a lesser extent, bicarbonate ions (HCO3−) to pass down their electrochemical gradient. The binding site for GABA is about 80 Å away from the narrowest part of the ion channel. Recent computational studies have suggested an allosteric mechanism whereby GABA binding leads to ion channel opening. Because the reversal potential for chloride in most mature neurons is close to or more negative than the resting membrane potential, activation of GABAA receptors tends to stabilize or hyperpolarise the resting potential, and can make it more difficult for excitatory neurotransmitters to depolarize the neuron and generate an action potential. The net effect is therefore typically inhibitory, reducing the activity of the neuron, although depolarizing currents have been observed in response to GABA in immature neurons in early development. This effect during development is due to a modified chloride gradient wherein the anions leave the cells through the GABAA receptors, since their intracellular chloride concentration is higher than the extracellular one. The difference in chloride anion concentration is presumed to be due to the higher activity of chloride transporters such as NKCC1, transporting chloride into cells, which are present early in development, whereas, for instance, KCC2 transports chloride out of cells and is the dominant factor in establishing the chloride gradient later in development. These depolarization events have been shown to be key in neuronal development. In the mature neuron, the GABAA channel opens quickly and thus contributes to the early part of the inhibitory post-synaptic potential (IPSP).
The endogenous ligand that binds to the benzodiazepine site is inosine.
Proper developmental, neuronal cell-type-specific, and activity-dependent GABAergic transmission control is required for nearly all aspects of CNS function.
It has been proposed that the GABAergic system is disrupted in numerous neurodevelopmental diseases, including fragile X syndrome, Rett syndrome, and Dravet syndrome, and that it is a crucial potential target for therapeutic intervention.
Subunits
GABAA receptors are members of the large pentameric ligand gated ion channel (previously referred to as "Cys-loop" receptors) super-family of evolutionarily related and structurally similar ligand-gated ion channels that also includes nicotinic acetylcholine receptors, glycine receptors, and the 5HT3 receptor. There are numerous subunit isoforms for the GABAA receptor, which determine the receptor's agonist affinity, chance of opening, conductance, and other properties.
In humans, the units are as follows:
six types of α subunits (GABRA1, GABRA2, GABRA3, GABRA4, GABRA5, GABRA6)
three βs (GABRB1, GABRB2, GABRB3)
three γs (GABRG1, GABRG2, GABRG3)
as well as a δ (GABRD), an ε (GABRE), a π (GABRP), and a θ (GABRQ)
There are three ρ units (GABRR1, GABRR2, GABRR3); however, these do not coassemble with the classical GABAA units listed above, but rather homooligomerize to form GABAA-ρ receptors (formerly classified as GABAC receptors but now this nomenclature has been deprecated).
Combinatorial arrays
Given the large number of GABAA receptor subunits, a great diversity of final pentameric receptor subtypes is possible. Methods to produce cell-based laboratory access to a greater number of possible GABAA receptor subunit combinations allow teasing apart of the contribution of specific receptor subtypes and their physiological and pathophysiological function and role in the CNS and in disease.
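As a rough illustration of the scale of that diversity, one can count the distinct ways of filling five subunit positions from the 19 subunits listed above. The sketch below ignores arrangement order around the pore and the assembly rules that exclude most combinations in practice, so the real number of functional subtypes is far smaller.

```python
from math import comb

# Multisets of size 5 drawn from 19 distinct subunit types:
# C(19 + 5 - 1, 5) combinations, ignoring order and assembly constraints.
subunit_types, positions = 19, 5
print(comb(subunit_types + positions - 1, positions))  # 33649
```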
Distribution
GABAA receptors are responsible for most of the physiological activities of GABA in the central nervous system, and the receptor subtypes vary significantly. Subunit composition can vary widely between regions and subtypes may be associated with specific functions. The minimal requirement to produce a GABA-gated ion channel is the inclusion of an α and a β subunit. The most common GABAA receptor is a pentamer comprising two α's, two β's, and a γ (α2β2γ). In neurons themselves, the type of GABAA receptor subunits and their densities can vary between cell bodies and dendrites. Benzodiazepines and barbiturates amplify the inhibitory effects mediated by the GABAA receptor.
GABAA receptors can also be found in other tissues, including Leydig cells, placenta, immune cells, liver, bone growth plates, and several other endocrine tissues. Subunit expression varies between 'normal' tissue and malignancies, as GABAA receptors can influence cell proliferation.
Ligands
A number of ligands have been found to bind to various sites on the GABAA receptor complex and modulate it besides GABA itself. A ligand can possess one or more properties of the following types. Unfortunately the literature often does not distinguish these types properly.
Types
Orthosteric agonists and antagonists: bind to the main receptor site (the site where GABA normally binds, also referred to as the "active" or "orthosteric" site). Agonists activate the receptor, resulting in increased conductance. Antagonists, though they have no effect on their own, compete with GABA for binding and thereby inhibit its action, resulting in decreased conductance.
First order allosteric modulators: bind to allosteric sites on the receptor complex and affect it either in a positive (PAM), negative (NAM) or neutral/silent (SAM) manner, causing increased or decreased efficiency of the main site and therefore an indirect increase or decrease in conductance. SAMs do not affect the conductance, but occupy the binding site.
Second order modulators: bind to an allosteric site on the receptor complex and modulate the effect of first order modulators.
Open channel blockers: prolong ligand-receptor occupancy, activation kinetics and Cl ion flux in a subunit configuration-dependent and sensitization-state dependent manner.
Non-competitive channel blockers: bind to or near the central pore of the receptor complex and directly block conductance through the ion channel.
Examples
Orthosteric agonists: GABA, gaboxadol, isoguvacine, muscimol, progabide, beta-alanine, taurine, piperidine-4-sulfonic acid (partial agonist).
Orthosteric antagonists: bicuculline, gabazine.
Positive allosteric modulators: abecarnil, azocarnil (photoswitchable), barbiturates, benzodiazepines, certain carbamates (e.g., carisoprodol, meprobamate, lorbamate), honokiol, magnolol, baicalin, baicalein, thienodiazepines, alcohol (ethanol), etomidate, glutethimide, kavalactones, meprobamate, quinazolinones (e.g., methaqualone, etaqualone, diproqualone), neuroactive steroids, niacin/niacinamide, nonbenzodiazepines (e.g., zolpidem, eszopiclone), propofol, stiripentol, theanine, valerenic acid, volatile/inhaled anesthetics, lanthanum, riluzole, and menthol.
Negative allosteric modulators: flumazenil, Ro15-4513, sarmazenil, pregnenolone sulfate, amentoflavone, and zinc.
Inverse allosteric agonists: beta-carbolines (e.g., harmine, harmaline, tetrahydroharmine).
Second-order modulators: (−)‐epigallocatechin‐3‐gallate.
Non-competitive channel blockers: cicutoxin, oenanthotoxin, pentylenetetrazol, picrotoxin, thujone, and lindane.
Effects
Ligands which contribute to receptor activation typically have anxiolytic, anticonvulsant, amnesic, sedative, hypnotic, euphoriant, and muscle relaxant properties. Some such as muscimol and the z-drugs may also be hallucinogenic. Ligands which decrease receptor activation usually have opposite effects, including anxiogenesis and convulsion. Some of the subtype-selective negative allosteric modulators such as α5IA are being investigated for their nootropic effects, as well as treatments for the unwanted side effects of other GABAergic drugs. Advances in molecular pharmacology and genetic manipulation of rat genes have revealed that distinct subtypes of the GABAA receptor mediate certain parts of the anaesthetic behavioral repertoire.
Novel drugs
A useful property of the many benzodiazepine site allosteric modulators is that they may display selective binding to particular subsets of receptors comprising specific subunits. This allows one to determine which GABAA receptor subunit combinations are prevalent in particular brain areas and provides a clue as to which subunit combinations may be responsible for behavioral effects of drugs acting at GABAA receptors. These selective ligands may have pharmacological advantages in that they may allow dissociation of desired therapeutic effects from undesirable side effects. Few subtype selective ligands have gone into clinical use as yet, with the exception of zolpidem which is reasonably selective for α1, but several more selective compounds are in development such as the α3-selective drug adipiplon. There are many examples of subtype-selective compounds which are widely used in scientific research, including:
Diazepam is a benzodiazepine medication that is FDA approved for the treatment of anxiety disorders, the short-term relief of anxiety symptoms, spasticity associated with upper motor neuron disorders, adjunct therapy for muscle spasms, preoperative anxiety relief, the management of certain refractory epileptic patients, and as an adjunct in severe recurrent convulsive seizures and status epilepticus.
CL-218,872 (highly α1-selective agonist)
bretazenil (subtype-selective partial agonist)
imidazenil and L-838,417 (both partial agonists at some subtypes, but weak antagonists at others)
QH-ii-066 (full agonist highly selective for α5 subtype)
α5IA (selective inverse agonist for α5 subtype)
SL-651,498 (full agonist at α2 and α3 subtypes, and as a partial agonist at α1 and α5
3-acyl-4-quinolones: selective for α1 over α3
Paradoxical reactions
There are multiple indications that paradoxical reactions upon — for example — benzodiazepines, barbiturates, inhalational anesthetics, propofol, neurosteroids, and alcohol are associated with structural deviations of GABAA receptors. The combination of the five subunits of the receptor (see images above) can be altered in such a way that for example the receptor's response to GABA remains unchanged but the response to one of the named substances is dramatically different from the normal one.
There are estimates that about 2–3% of the general population may suffer from serious emotional disorders due to such receptor deviations, with up to 20% suffering from moderate disorders of this kind. It is generally assumed that the receptor alterations are, at least partly, due to genetic and also epigenetic deviations. There are indication that the latter may be triggered by, among other factors, social stress or occupational burnout.
See also
4-Iodopropofol
GABA receptor
GABAB receptor
GABAA-ρ receptor
Gephyrin
Glycine receptor
GABAA receptor positive allosteric modulators
GABAA receptor negative allosteric modulators
References
Further reading
External links
Transmembrane receptors
Ion channels
GABA | GABAA receptor | [
"Chemistry"
] | 4,665 | [
"Transmembrane receptors",
"Neurochemistry",
"Ion channels",
"Signal transduction"
] |
1,565,861 | https://en.wikipedia.org/wiki/Comparison%20of%20user%20interface%20markup%20languages | The following tables compare general and technical information for some user interface markup languages. Please see the individual markup languages' articles for further information.
General information
Basic general information about the markup languages: creator, version, etc.
Features
Some features of the markup languages.
See also
List of user interface markup languages
Adobe Integrated Runtime (AIR)
Adobe Flex
JavaFX
Silverlight, XAML
References
User interface markup languages | Comparison of user interface markup languages | [
"Technology"
] | 90 | [
"Computing comparisons",
"Markup language comparisons"
] |
1,565,926 | https://en.wikipedia.org/wiki/Estimation%20theory | Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
In estimation theory, two approaches are generally considered:
The probabilistic approach (described in this article) assumes that the measured data is random with probability distribution dependent on the parameters of interest
The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
Examples
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
Basics
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector, x = [x[0], x[1], ..., x[N − 1]]^T.
Secondly, there are M parameters θ = [θ1, θ2, ..., θM]^T,
whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters: p(x; θ).
It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability π(θ).
After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted θ̂, where the "hat" indicates the estimate.
One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error e = θ̂ − θ between the estimated parameters and the actual value of the parameters
as the basis for optimality. This error term is then squared and the expected value of this squared value is minimized for the MMSE estimator.
Estimators
Commonly used estimators (estimation methods) and topics related to them include:
Maximum likelihood estimators
Bayes estimators
Method of moments estimators
Cramér–Rao bound
Least squares
Minimum mean squared error (MMSE), also known as Bayes least squared error (BLSE)
Maximum a posteriori (MAP)
Minimum variance unbiased estimator (MVUE)
Nonlinear system identification
Best linear unbiased estimator (BLUE)
Unbiased estimators — see estimator bias.
Particle filter
Markov chain Monte Carlo (MCMC)
Kalman filter, and its various derivatives
Wiener filter
Examples
Unknown constant in additive white Gaussian noise
Consider a received discrete signal, x[n], of N independent samples that consists of an unknown constant A with additive white Gaussian noise (AWGN) w[n] with zero mean and known variance σ² (i.e., w[n] ~ N(0, σ²)).
Since the variance σ² is known, the only unknown parameter is A.
The model for the signal is then x[n] = A + w[n], n = 0, 1, ..., N − 1.
Two possible (of many) estimators for the parameter A are: Â₁ = x[0], which uses only the first sample, and Â₂ = (1/N) Σₙ x[n], which is the sample mean.
Both of these estimators have a mean of A, which can be shown through taking the expected value of each estimator: E[Â₁] = E[x[0]] = A and E[Â₂] = E[(1/N) Σₙ x[n]] = (1/N) Σₙ E[x[n]] = (1/N)(N A) = A.
At this point, these two estimators would appear to perform the same.
However, the difference between them becomes apparent when comparing the variances: var(Â₁) = var(x[0]) = σ², while var(Â₂) = var((1/N) Σₙ x[n]) = (1/N²) Σₙ var(x[n]) = (1/N²)(N σ²) = σ²/N.
It would seem that the sample mean is a better estimator since its variance is lower for every N > 1.
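The comparison can be checked numerically. A minimal Python sketch, with A = 1, σ = 2, and N = 25 chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, N, trials = 1.0, 2.0, 25, 100_000

# x[n] = A + w[n], with w[n] ~ N(0, sigma^2), drawn for many trials at once
x = A + sigma * rng.standard_normal((trials, N))

est_first = x[:, 0]         # estimator 1: just the first sample
est_mean = x.mean(axis=1)   # estimator 2: the sample mean

print(est_first.mean(), est_first.var())  # ~1.0 and ~4.0  (= sigma^2)
print(est_mean.mean(), est_mean.var())    # ~1.0 and ~0.16 (= sigma^2 / N)
```

Both empirical means sit at A, while the sample mean's variance is smaller by the factor N; the value σ²/N is also the Cramér–Rao lower bound derived below.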
Maximum likelihood
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample is p(w[n]) = (1/√(2πσ²)) exp(−w[n]² / (2σ²)), and the probability of x[n] becomes (x[n] can be thought of as distributed N(A, σ²)) p(x[n]; A) = (1/√(2πσ²)) exp(−(x[n] − A)² / (2σ²)).
By independence, the probability of x becomes p(x; A) = Πₙ p(x[n]; A) = (2πσ²)^(−N/2) exp(−(1/(2σ²)) Σₙ (x[n] − A)²).
Taking the natural logarithm of the pdf gives ln p(x; A) = −(N/2) ln(2πσ²) − (1/(2σ²)) Σₙ (x[n] − A)², and the maximum likelihood estimator is Â = arg maxₐ ln p(x; A).
Taking the first derivative of the log-likelihood function, ∂ ln p(x; A)/∂A = (1/σ²) Σₙ (x[n] − A) = (1/σ²)(Σₙ x[n] − N A), and setting it to zero.
This results in the maximum likelihood estimator Â = (1/N) Σₙ x[n], which is simply the sample mean.
From this example, it was found that the sample mean is the maximum likelihood estimator for samples of a fixed, unknown parameter corrupted by AWGN.
Cramér–Rao lower bound
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number I(A) = E[(∂ ln p(x; A)/∂A)²] = −E[∂² ln p(x; A)/∂A²], and copying from above, ∂ ln p(x; A)/∂A = (1/σ²)(Σₙ x[n] − N A).
Taking the second derivative, ∂² ln p(x; A)/∂A² = −N/σ², and finding the negative expected value is trivial since it is now a deterministic constant: −E[∂² ln p(x; A)/∂A²] = N/σ².
Finally, putting the Fisher information into the CRLB inequality var(Â) ≥ 1/I(A) results in var(Â) ≥ σ²/N.
Comparing this to the variance of the sample mean (determined previously) shows that the sample mean is equal to the Cramér–Rao lower bound for all values of N and σ².
In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
Maximum of a uniform distribution
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution with unknown maximum N, the UMVU estimator for the maximum is given by N̂ = m + m/k − 1 = m(1 + 1/k) − 1, where m is the sample maximum and k is the sample size, sampling without replacement. This problem is commonly known as the German tank problem, due to the application of maximum estimation to estimates of German tank production during World War II.
The formula may be understood intuitively as the sample maximum plus the average gap between samples, the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.
This has a variance of (N − k)(N + 1) / (k(k + 2)) ≈ N²/k² for small samples k ≪ N, so a standard deviation of approximately N/k, the (population) average size of a gap between samples; compare m/k above. This can be seen as a very simple case of maximum spacing estimation.
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
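A simulation makes the bias correction visible. A minimal Python sketch, with the true maximum set to 1000 and k = 10 (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
N_true, k, trials = 1000, 10, 20_000

mle, umvu = [], []
for _ in range(trials):
    # Sample k serial numbers from 1..N_true without replacement
    sample = rng.choice(N_true, size=k, replace=False) + 1
    m = sample.max()
    mle.append(m)               # sample maximum: the biased MLE
    umvu.append(m + m / k - 1)  # UMVU estimator: m(1 + 1/k) - 1

print(np.mean(mle))   # ~910: the sample maximum underestimates N_true
print(np.mean(umvu))  # ~1000: the gap correction removes the bias
```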
Applications
Numerous fields require the use of estimation theory.
Some of these fields include:
Interpretation of scientific experiments
Signal processing
Clinical trials
Opinion polls
Quality control
Telecommunications
Project management
Software engineering
Control theory (in particular Adaptive control)
Network intrusion detection system
Orbit determination
Measured data are likely to be subject to noise or uncertainty and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.
See also
Best linear unbiased estimator (BLUE)
Completeness (statistics)
Detection theory
Efficiency (statistics)
Expectation-maximization algorithm (EM algorithm)
Fermi problem
Grey box model
Information theory
Least-squares spectral analysis
Matched filter
Maximum entropy spectral estimation
Nuisance parameter
Parametric equation
Pareto principle
Rule of three (statistics)
State estimator
Statistical signal processing
Sufficiency (statistics)
Notes
References
Citations
Sources
External links
Signal processing
Mathematical and quantitative methods (economics) | Estimation theory | [
"Technology",
"Engineering"
] | 1,522 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
1,565,963 | https://en.wikipedia.org/wiki/IC50 | Half maximal inhibitory concentration (IC50) is a measure of the potency of a substance in inhibiting a specific biological or biochemical function. IC50 is a quantitative measure that indicates how much of a particular inhibitory substance (e.g. drug) is needed to inhibit, in vitro, a given biological process or biological component by 50%. The biological component could be an enzyme, cell, cell receptor or microbe. IC50 values are typically expressed as molar concentration.
IC50 is commonly used as a measure of antagonist drug potency in pharmacological research. IC50 is comparable to other measures of potency, such as EC50 for excitatory drugs. EC50 represents the dose or plasma concentration required for obtaining 50% of a maximum effect in vivo.
IC50 can be determined with functional assays or with competition binding assays.
Sometimes, IC50 values are converted to the pIC50 scale: pIC50 = −log10(IC50).
Due to the minus sign, higher values of pIC50 indicate exponentially more potent inhibitors. pIC50 is usually given in terms of molar concentration (mol/L, or M), thus requiring IC50 in units of M.
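The conversion itself is a one-line calculation; a minimal sketch:

```python
import math

def pic50(ic50_molar: float) -> float:
    """Convert an IC50 given in molar units (M) to the pIC50 scale."""
    return -math.log10(ic50_molar)

print(pic50(1e-9))  # 9.0 for a 1 nM inhibitor
print(pic50(1e-6))  # 6.0 for a 1 uM inhibitor, 1000-fold less potent
```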
The IC50 terminology is also used for some behavioral measures in vivo, such as the two bottle fluid consumption test. When animals decrease consumption from the drug-laced water bottle, the concentration of the drug that results in a 50% decrease in consumption is considered the IC50 for fluid consumption of that drug.
Functional antagonist assay
The IC50 of a drug can be determined by constructing a dose-response curve and examining the effect of different concentrations of antagonist on reversing agonist activity. IC50 values can be calculated for a given antagonist by determining the concentration needed to inhibit half of the maximum biological response of the agonist. IC50 values can be used to compare the potency of two antagonists.
IC50 values are very dependent on conditions under which they are measured. In general, a higher concentration of inhibitor leads to lowered agonist activity. IC50 value increases as agonist concentration increases. Furthermore, depending on the type of inhibition, other factors may influence IC50 value; for ATP dependent enzymes, IC50 value has an interdependency with concentration of ATP, especially if inhibition is competitive.
IC50 and affinity
Competition binding assays
In this type of assay, a single concentration of radioligand (usually an agonist) is used in every assay tube. The ligand is used at a low concentration, usually at or below its Kd value. The level of specific binding of the radioligand is then determined in the presence of a range of concentrations of other competing non-radioactive compounds (usually antagonists), in order to measure the potency with which they compete for the binding of the radioligand. Competition curves may also be computer-fitted to a logistic function as described under direct fit.
In this situation the IC50 is the concentration of competing ligand which displaces 50% of the specific binding of the radioligand. The IC50 value is converted to an absolute inhibition constant Ki using the Cheng-Prusoff equation formulated by Yung-Chi Cheng and William Prusoff (see Ki).
Cheng Prusoff equation
IC50 is not a direct indicator of affinity, although the two can be related at least for competitive agonists and antagonists by the Cheng-Prusoff equation. For enzymatic reactions, this equation is: Ki = IC50 / (1 + [S]/Km),
where Ki is the binding affinity of the inhibitor, IC50 is the functional strength of the inhibitor, [S] is the fixed substrate concentration and Km is the Michaelis constant, i.e. the concentration of substrate at which enzyme activity is at half maximal (but is frequently confused with substrate affinity for the enzyme, which it is not).
Alternatively, for inhibition constants at cellular receptors: Ki = IC50 / (1 + [A]/EC50),
where [A] is the fixed concentration of agonist and EC50 is the concentration of agonist that results in half maximal activation of the receptor. Whereas the IC50 value for a compound may vary between experiments depending on experimental conditions (e.g. substrate and enzyme concentrations), the Ki is an absolute value. Ki is the inhibition constant for a drug: the concentration of competing ligand in a competition assay which would occupy 50% of the receptors if no radioligand were present.
The Cheng-Prusoff equation produces good estimates at high agonist concentrations, but over- or under-estimates Ki at low agonist concentrations. In these conditions, other analyses have been recommended.
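A direct implementation of the enzymatic form of the equation is sketched below; the assay values are hypothetical, chosen only to illustrate the correction from IC50 to Ki:

```python
def cheng_prusoff_ki(ic50: float, s: float, km: float) -> float:
    """Ki = IC50 / (1 + [S]/Km) for a competitive enzyme inhibitor."""
    return ic50 / (1 + s / km)

# Hypothetical assay: IC50 = 500 nM measured at [S] = 100 uM, Km = 25 uM
print(cheng_prusoff_ki(500e-9, 100e-6, 25e-6))  # 1e-07, i.e. Ki = 100 nM
```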
See also
Certain safety factor
EC50 (half maximal effective concentration)
LD50 (median lethal dose)
Ki (equilibrium constant)
References
External links
AAT Bioquest Online IC50 Calculator
Online IC50 calculator (www.ic50.tk) based on the C programming language and gnuplot
Alternative online IC50 calculator (www.ic50.org) based on Python, NumPy, SciPy and Matplotlib
ELISA IC50/EC50 Online Tool (link seems broken)
IC50 to pIC50 calculator
Online tool for analysis of in vitro resistance to antimalarial drugs
IC50-to-Ki converter of an inhibitor and enzyme that obey classic Michaelis-Menten kinetics.
Concentration indicators
Pharmacodynamics | IC50 | [
"Chemistry"
] | 1,090 | [
"Pharmacology",
"Pharmacodynamics"
] |
1,565,992 | https://en.wikipedia.org/wiki/Dihydroxyacetone%20phosphate | Dihydroxyacetone phosphate (DHAP, also glycerone phosphate in older texts) is the anion with the formula HOCH₂C(O)CH₂OPO₃²⁻. This anion is involved in many metabolic pathways, including the Calvin cycle in plants and glycolysis. It is the phosphate ester of dihydroxyacetone.
Role in glycolysis
Dihydroxyacetone phosphate lies in the glycolysis metabolic pathway, and is one of the two products of breakdown of fructose 1,6-bisphosphate, along with glyceraldehyde 3-phosphate. It is rapidly and reversibly isomerised to glyceraldehyde 3-phosphate.
The numbering of the carbon atoms indicates the fate of the carbons according to their position in fructose 6-phosphate.
Role in other pathways
In the Calvin cycle, DHAP is one of the products of the sixfold reduction of 1,3-bisphosphoglycerate by NADPH. It is also used in the synthesis of sedoheptulose 1,7-bisphosphate and fructose 1,6-bisphosphate, both of which are used to reform ribulose 5-phosphate, the 'key' carbohydrate of the Calvin cycle.
DHAP is also the product of the dehydrogenation of L-glycerol-3-phosphate, which is part of the entry of glycerol (sourced from triglycerides) into the glycolytic pathway. Conversely, reduction of glycolysis-derived DHAP to L-glycerol-3-phosphate provides adipose cells with the activated glycerol backbone they require to synthesize new triglycerides. Both reactions are catalyzed by the enzyme glycerol 3-phosphate dehydrogenase with NAD+/NADH as cofactor.
DHAP also has a role in the ether-lipid biosynthesis process in the protozoan parasite Leishmania mexicana.
DHAP is a precursor to 2-oxopropanal. This conversion is the basis of a potential biotechnological route to the commodity chemical 1,2-propanediol.
See also
Dihydroxyacetone
Glycerol 3-phosphate shuttle
References
Photosynthesis
Alpha-hydroxy ketones
Organophosphates
Phosphate esters
Glycolysis
Biomolecules
Metabolic intermediates | Dihydroxyacetone phosphate | [
"Chemistry",
"Biology"
] | 528 | [
"Carbohydrate metabolism",
"Natural products",
"Glycolysis",
"Photosynthesis",
"Organic compounds",
"Metabolic intermediates",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Metabolism",
"Molecular biology"
] |
1,566,039 | https://en.wikipedia.org/wiki/Approach%20space | In topology, a branch of mathematics, approach spaces are a generalization of metric spaces, based on point-to-set distances, instead of point-to-point distances. They were introduced by Robert Lowen in 1989, in a series of papers on approach theory between 1988 and 1995.
Definition
Given a metric space (X, d), or more generally, an extended pseudoquasimetric (which will be abbreviated ∞pq-metric here), one can define an induced map d: X × P(X) → [0,∞] by d(x, A) = inf{d(x, a) : a ∈ A}. With this example in mind, a distance on X is defined to be a map X × P(X) → [0,∞] satisfying for all x in X and A, B ⊆ X,
d(x, {x}) = 0,
d(x, Ø) = ∞,
d(x, A∪B) = min(d(x, A), d(x, B)),
For all 0 ≤ ε ≤ ∞, d(x, A) ≤ d(x, A(ε)) + ε,
where we define A(ε) = {x : d(x, A) ≤ ε}.
(The "empty infimum is positive infinity" convention is like the nullary intersection is everything convention.)
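For the metric-induced distance, the fourth axiom is just the triangle inequality in disguise; a short derivation (a standard argument, sketched here for completeness):

```latex
% Let d(x,A) = \inf_{a \in A} d(x,a) be induced by a metric d.
% Fix \varepsilon \ge 0 and any z \in A^{(\varepsilon)}, i.e. d(z,A) \le \varepsilon.
\begin{align*}
d(x, A) &\le \inf_{a \in A} \bigl( d(x, z) + d(z, a) \bigr)
         = d(x, z) + d(z, A) \\
        &\le d(x, z) + \varepsilon.
\end{align*}
% Taking the infimum over all z \in A^{(\varepsilon)} yields
% d(x, A) \le d(x, A^{(\varepsilon)}) + \varepsilon.
```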
An approach space is defined to be a pair (X, d) where d is a distance function on X. Every approach space has a topology, given by treating A → A(0) as a Kuratowski closure operator.
The appropriate maps between approach spaces are the contractions. A map f: (X, d) → (Y, e) is a contraction if e(f(x), f[A]) ≤ d(x, A) for all x ∈ X and A ⊆ X.
Examples
Every ∞pq-metric space (X, d) can be distanced to (X, d), as described at the beginning of the definition.
Given a set X, the discrete distance is given by d(x, A) = 0 if x ∈ A and d(x, A) = ∞ if x ∉ A. The induced topology is the discrete topology.
Given a set X, the indiscrete distance is given by d(x, A) = 0 if A is non-empty, and d(x, A) = ∞ if A is empty. The induced topology is the indiscrete topology.
Given a topological space X, a topological distance is given by d(x, A) = 0 if x ∈ A, and d(x, A) = ∞ otherwise. The induced topology is the original topology. In fact, the only two-valued distances are the topological distances.
Let P = [0, ∞] be the extended non-negative reals. Let d+(x, A) = max(x − sup A, 0) for x ∈ P and A ⊆ P. Given any approach space (X, d), the maps (for each A ⊆ X) d(., A) : (X, d) → (P, d+) are contractions.
On P, let e(x, A) = inf{|x − a| : a ∈ A} for x < ∞, let e(∞, A) = 0 if A is unbounded, and let e(∞, A) = ∞ if A is bounded. Then (P, e) is an approach space. Topologically, P is the one-point compactification of [0, ∞). Note that e extends the ordinary Euclidean distance. This cannot be done with the ordinary Euclidean metric.
Let βN be the Stone–Čech compactification of the integers. A point U ∈ βN is an ultrafilter on N. A subset A ⊆ βN induces a filter F(A) = ∩ {U : U ∈ A}. Let b(U, A) = sup{ inf{ |n − j| : n ∈ X, j ∈ E } : X ∈ U, E ∈ F(A) }. Then (βN, b) is an approach space that extends the ordinary Euclidean distance on N. In contrast, βN is not metrizable.
Equivalent definitions
Lowen has offered at least seven equivalent formulations. Two of them are below.
Let XPQ(X) denote the set of ∞pq-metrics on X. A subfamily G of XPQ(X) is called a gauge if
0 ∈ G, where 0 is the zero metric, that is, 0(x, y) = 0 for all x, y,
e ≤ d ∈ G implies e ∈ G,
d, e ∈ G implies max(d,e) ∈ G (the "max" here is the pointwise maximum),
For all d ∈ XPQ(X), if for all x ∈ X, ε > 0, N < ∞ there is e ∈ G such that min(d(x,y), N) ≤ e(x, y) + ε for all y, then d ∈ G.
If G is a gauge on X, then d(x, A) = sup{e(x, A) : e ∈ G} (where e(x, A) = inf{e(x, a) : a ∈ A}) is a distance function on X. Conversely, given a distance function d on X, the set of e ∈ XPQ(X) such that e ≤ d is a gauge on X. The two operations are inverse to each other.
A contraction f: (X, d) → (Y, e) is, in terms of associated gauges G and H respectively, a map such that for all d ∈ H, d(f(.), f(.)) ∈ G.
A tower on X is a set of maps A → A[ε] for A ⊆ X, ε ≥ 0, satisfying for all A, B ⊆ X and δ, ε ≥ 0
A ⊆ A[ε],
Ø[ε] = Ø,
(A ∪ B)[ε] = A[ε] ∪ B[ε],
A[ε][δ] ⊆ A[ε+δ],
A[ε] = ∩δ>ε A[δ].
Given a distance d, the associated A → A(ε) is a tower. Conversely, given a tower, the map d(x,A) = inf{ε : x ∈ A[ε]} is a distance, and these two operations are inverses of each other.
A contraction f:(X, d)→(Y, e) is, in terms of associated towers, a map such that for all ε ≥ 0, f[A[ε]] ⊆ f[A][ε].
Categorical properties
The main interest in approach spaces and their contractions is that they form a category with good properties, while still being quantitative like metric spaces. One can take arbitrary products, coproducts, and quotients, and the results appropriately generalize the corresponding results for topologies. One can even "distancize" such badly non-metrizable spaces like βN, the Stone–Čech compactification of the integers.
Certain hyperspaces, measure spaces, and probabilistic metric spaces turn out to be naturally endowed with a distance. Applications have also been made to approximation theory.
References
External links
Robert Lowen
Closure operators | Approach space | [
"Mathematics"
] | 1,561 | [
"Order theory",
"Closure operators"
] |
1,566,105 | https://en.wikipedia.org/wiki/GTPase-activating%20protein | GTPase-activating proteins or GTPase-accelerating proteins (GAPs) are a family of regulatory proteins whose members can bind to activated G proteins and stimulate their GTPase activity, with the result of terminating the signaling event. GAPs are also known as regulators of G protein signaling (RGS proteins), and these proteins are crucial in controlling the activity of G proteins. Regulation of G proteins is important because these proteins are involved in a variety of important cellular processes. The large G proteins, for example, are involved in transduction of signaling from the G protein-coupled receptor for a variety of signaling processes like hormonal signaling, and small G proteins are involved in processes like cellular trafficking and cell cycling. GAP's role in this function is to turn the G protein's activity off. In this sense, GAPs' function is opposite to that of guanine nucleotide exchange factors (GEFs), which serve to enhance G protein signaling.
Mechanism
GAPs are heavily linked to the G protein-coupled receptor family. The activity of G proteins comes from their ability to bind guanosine triphosphate (GTP). Binding of GTP inherently changes the activity of the G proteins and increases their activity through the loss of inhibitory subunits. In this more active state, G proteins can bind other proteins and turn on downstream signalling targets. This whole process is regulated by GAPs, which can downregulate the activity of G proteins.
G proteins can weakly hydrolyse GTP, breaking a phosphate bond to make GDP. In the GDP-bound state, the G proteins are subsequently inactivated and can no longer bind their targets. This hydrolysis reaction, however, occurs very slowly, meaning G proteins have a built-in timer for their activity. G proteins have a window of activity followed by slow hydrolysis, which turns them off. GAP accelerates this G protein timer by increasing the hydrolytic GTPase activity of the G proteins, hence the name GTPase-activating protein.
It is thought that GAPs serve to make GTP on the G protein a better substrate for nucleophilic attack and lower the transition state energy for the hydrolysis reaction. For example, many GAPs of the small G proteins have a conserved finger-like domain, usually an arginine finger, which changes the conformation of the GTP-bound G protein to orient the GTP for better nucleophilic attack by water. This makes the GTP a better substrate for the reaction. Similarly, GAPs seem to induce a GDP-like charge distribution in the bound GTP. Because the change in charge distribution makes the GTP substrate more like the products of the reaction, GDP and inorganic phosphate, this, along with opening the molecule for nucleophilic attack, lowers the transition state energy barrier of the reaction and allows GTP to be hydrolyzed more readily. GAPs, then, work to enhance the GTP hydrolysis reaction of the G proteins. By doing so, they accelerate the G protein's built-in timer, which inactivates the G proteins more quickly, and along with the inactivation of GEFs, this keeps the G protein signal off. GAPs, then, are critical in the regulation of G proteins.
Specificity to G proteins
In general, GAPs tend to be fairly specific for their target G proteins. The exact mechanism of target specificity is not fully known, but it is likely that this specificity comes from a variety of factors. At the most basic level, GAP-to-G protein specificity may come simply from the timing and location of protein expression. RGS9-1, for example, is specifically expressed in the rod and cone photoreceptors in the eye retina, and is the only one to interact with G proteins involved in phototransduction in this area. A certain GAP and a certain G protein happen to be expressed at the same time and place, and that is how the cell ensures specificity. Meanwhile, scaffold proteins can also sequester the proper GAP to its G protein and enhance the proper binding interactions. These binding interactions may be specific for a particular GAP and G protein. Also, GAPs may have particular amino acid domains that recognize only a particular G protein. Binding to other G proteins may not have the same favorable interactions, and they therefore do not interact. GAPs can, therefore, regulate specific G proteins.
Examples and classification
EIF5 is a GTPase-activating protein. Furthermore, YopE is a protein domain that is a Rho GTPase-activating protein (GAP), which targets small GTPases such as RhoA, Rac1, and Rac2.
Monomeric
The GAPs that act on small GTP-binding proteins of the Ras superfamily have conserved structures and use similar mechanisms.
An example of a GTPase is the monomer Ran, which is found in the cytosol as well as the nucleus. Hydrolysis of GTP by Ran is thought to provide the energy needed to transport proteins into and out of the nucleus. Ran is turned on and off by GEFs and GAPs, respectively.
Heterotrimeric
Most GAPs that act on alpha subunits of heterotrimeric G proteins belong to a distinct family, the RGS protein family.
Regulation
While GAPs serve to regulate the G proteins, there is also some level of regulation of the GAPs themselves. Many GAPs have allosteric sites that serve as interfaces with downstream targets of the particular path that they regulate. For example, RGS9-1, the GAP in the photoreceptors from above, interacts with cGMP phosphodiesterase (cGMP PDE), a downstream component of phototransduction in the retina. Upon binding with cGMP PDE, RGS9-1 GAP activity is enhanced. In other words, a downstream target of photoreceptor-induced signaling binds and activates the inhibitor of signaling, GAP. This positive regulatory binding of downstream targets to GAP serves as a negative feedback loop that eventually turns off the signaling that was originally activated. GAPs are regulated by targets of the G protein that they regulate.
There are also examples of negative regulatory mechanisms, where downstream targets of G protein signaling inhibit the GAPs. In G protein-gated potassium channels, phosphatidylinositol 3,4,5-trisphosphate (PIP3) is a downstream target of G protein signaling. PIP3 binds and inhibits the RGS4 GAP. Such inhibition of the GAP may "prime" the signaling pathway for activation: it creates a window of activity for the G proteins once activated, because the GAP is temporarily inhibited. When the potassium channel is activated, Ca2+ is released and binds calmodulin. Together, they displace PIP3 from the GAP by binding competitively to the same site, and by doing so they reactivate the GAP to turn G protein signaling off. This particular process demonstrates both inhibition and activation of a GAP by its regulators. There is cross-talk between GAPs and other components of the signaling pathway that regulate GAP activity.
There have been some findings suggesting the possibility of crosstalk between GAPs. A recent study showed that the p120Ras GAP could bind the DLC1 Rho GAP at its catalytic domain. The binding of the Ras GAP to the Rho GAP inhibits the activity of the Rho GAP, thereby activating the Rho G protein. One GAP serves as a negative regulator of another GAP. The reasons for such cross-regulation across GAPs are yet unclear, but one possible hypothesis is that this cross-talk across GAPs attenuates the "off" signal of all the GAPs. Although the p120Ras GAP is active, therefore inhibiting that particular pathway, other cellular processes can still continue because it inhibits other GAPs. This may ensure that the whole system does not shut down from a single off signal. GAP activity is highly dynamic, interacting with many other components of signaling pathways.
Disease associations and clinical relevance
The importance of GAPs comes from their regulation of the crucial G proteins. Many of these G proteins are involved in cell cycling, and as such are known proto-oncogenes. The Ras superfamily of G proteins, for example, has been associated with many cancers because Ras is a common downstream target of many growth factors, such as fibroblast growth factor (FGF). Under normal conditions, this signaling ultimately induces regulated cell growth and proliferation. In the cancerous state, however, such growth is no longer regulated and results in the formation of tumors. Often, this oncogenic behavior is due to a loss of function of the GAPs associated with those G proteins, or to a loss of the G protein's ability to respond to its GAP. In the former case, G proteins are unable to hydrolyze GTP quickly, resulting in sustained presence of the active form of the G protein. Although the G proteins have weak hydrolytic activity, in the presence of functional GEFs the inactivated G proteins are constantly replaced with activated ones, because the GEFs exchange GDP for GTP in these proteins. With no GAPs to curb the G protein's activity, the result is constitutively active G proteins, unregulated cell growth, and the cancerous state. In the latter case, a loss of the G protein's ability to respond to GAP, the G proteins have lost their ability to hydrolyze GTP. With a nonfunctional G protein enzyme, GAPs cannot stimulate the GTPase activity, and the G protein is constitutively on. This also results in unregulated cell growth and cancer.
Examples of GAP malfunction are ubiquitous clinically. Some cases involve a decreased expression of the GAP gene. For example, some recently characterized cases of papillary thyroid cancer cells in patients show a decreased expression of Rap1GAP, and this expression is seemingly caused by a decreased expression of the GAP mRNA, shown by qRT-PCR experiments. In this case, there appears to be a loss of proper Rap1GAP gene expression. In another case, expression of the Ras GAP is lost in several cancers due to improper epigenetic silencing of the gene. These cells have CpG methylations near the gene that, in effect, silence gene transcription. Regulation of G proteins is lost because the regulator is absent, resulting in cancer.
Other cancers show a loss of sensitivity of the G protein to the GAPs. These G proteins acquire missense mutations that disrupt the inherent GTPase activity of the proteins. The mutant G proteins are still bound by GAPs, but enhancing GTPase activity by the GAPs is meaningless when GTPase activity of the G protein itself is lost. GAP works to activate a nonfunctional hydrolytic enzyme. T24 bladder cancer cells, for example, were shown to have a missense mutation, G12V, resulting in constitutively active Ras protein. Despite the presence of the G protein regulator, regulation is lost due to a loss of function in the G protein itself. This loss of function also manifests itself in cancer. GAPs and their interaction with G proteins are, therefore, highly important clinically and are potential targets for cancer therapies.
References
External links
GTP-binding protein regulators
Proteins | GTPase-activating protein | [
"Chemistry"
] | 2,317 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
1,566,127 | https://en.wikipedia.org/wiki/BD%E2%88%9210%C2%B03166 | BD−10°3166 is a K-type main sequence star approximately 268 light-years away in the constellation of Crater. It was inconspicuous enough not to be included in the Draper catalog (HD). The Hipparcos satellite also did not study it, so for a long time its distance was poorly known. The distance of 268 light years measured by the Gaia spacecraft rules out the suggested companion star, LP 731-076, being a true binary star companion.
Stellar characteristics
The star is very enriched with metals, being two to three times as metal-rich as the Sun. Planets are common around such stars, and BD−10°3166 is not an exception. In 2000, the California and Carnegie Planet Search team discovered an extrasolar planet orbiting the star.
Planetary system
In 2000, the California and Carnegie Planet Search discovered a hot Jupiter-type extrasolar planet that has a minimum mass less than half that of Jupiter, and which takes only 3.49 days to revolve around BD−10°3166.
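As a rough consistency check, Kepler's third law relates the 3.49-day period to the orbital distance. The stellar mass used below is an assumption for illustration (roughly one solar mass, plausible for a K dwarf but not stated in the source):

$$a \approx \left(\frac{M_\star}{M_\odot}\right)^{1/3}\left(\frac{P}{1\,\mathrm{yr}}\right)^{2/3}\,\mathrm{AU} \approx \left(\frac{3.49}{365.25}\right)^{2/3}\,\mathrm{AU} \approx 0.045\,\mathrm{AU},$$

placing the planet far closer to its star than Mercury is to the Sun, as expected for a hot Jupiter.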
See also
Lists of exoplanets
References
External links
Crater (constellation)
K-type main-sequence stars
Planetary systems with one confirmed planet
Durchmusterung objects | BD−10°3166 | [
"Astronomy"
] | 250 | [
"Crater (constellation)",
"Constellations"
] |
1,566,149 | https://en.wikipedia.org/wiki/HD%202638 | HD 2638 is a triple star system in the equatorial constellation of Cetus. The pair have an angular separation of along a position angle of 166.7°, as of 2015. This system is too faint to be visible to the naked eye, having a combined apparent visual magnitude of 9.44; a small telescope is required. The distance to this system is 179.5 light years based on parallax, and it is drifting further away with a radial velocity of +9.6 km/s. The magnitude 7.76 star HD 2567 forms a common proper motion companion to this pair at a projected separation of 839″.
The HD 2638 members A and BC have a projected separation of about and thus an orbital period of around 130 years. They have a combined stellar classification of K1V. The primary component is a G-type main-sequence star with a class of G8V. It is smaller and less massive than the Sun, and has a lower luminosity. The secondary is a binary consisting of two red dwarf stars in a close orbit, with a combined mass less than half that of the primary and a composite spectral class of M1V.
Planetary system
In 2005, the discovery of an extrasolar planet, HD 2638 b, orbiting the primary was announced by the Geneva Extrasolar Planet Search team. The planet has a mass 0.48 times that of Jupiter and 152.6 times that of Earth. The planet's existence was placed in doubt in 2015 due to the discovery of the additional stellar companions.
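The two mass figures are mutually consistent: with one Jupiter mass equal to about 317.8 Earth masses,

$$0.48 \times 317.8\,M_\oplus \approx 152.5\,M_\oplus,$$

matching the quoted 152.6 Earth masses to rounding.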
See also
List of extrasolar planets
References
G-type main-sequence stars
M-type main-sequence stars
Triple star systems
Planetary systems with one confirmed planet
Cetus
Durchmusterung objects
002638
002350
J00295988-0545502 | HD 2638 | [
"Astronomy"
] | 363 | [
"Cetus",
"Constellations"
] |
1,566,181 | https://en.wikipedia.org/wiki/Ethylene-vinyl%20acetate | Ethylene-vinyl acetate (EVA), also known as poly(ethylene-vinyl acetate) (PEVA), is a copolymer of ethylene and vinyl acetate. The weight percent of vinyl acetate usually varies from 10 to 50%, with the remainder being ethylene. There are three different types of EVA copolymer, which differ in the vinyl acetate (VA) content and the way the materials are used.
The EVA copolymer which is based on a low proportion of VA (approximately up to 4%) may be referred to as vinyl acetate modified polyethylene. It is a copolymer and is processed as a thermoplastic material – just like low-density polyethylene. It has some of the properties of a low-density polyethylene but increased gloss (useful for film), softness and flexibility. The material is generally considered non-toxic.
The EVA copolymer which is based on a medium proportion of VA (approximately 4 to 30%) is referred to as thermoplastic ethylene-vinyl acetate copolymer and is a thermoplastic elastomer material. It is not vulcanized but has some of the properties of a rubber or of plasticized polyvinyl chloride particularly at the higher end of the range. Both filled and unfilled EVA materials have good low temperature properties and are tough. The materials with approximately 11% VA are used as hot-melt adhesives.
The EVA copolymer which is based on a high proportion of VA (greater than 60%) is referred to as ethylene-vinyl acetate rubber.
EVA is an elastomeric polymer that produces materials which are "rubber-like" in softness and flexibility. The material has good clarity and gloss, low-temperature toughness, stress-crack resistance, hot-melt adhesive waterproof properties, and resistance to UV radiation. EVA has a distinctive vinegar-like odor and is competitive with rubber and vinyl polymer products in many electrical applications.
Production
EVA is made by mixing ethylene and vinyl acetate in a processor, which creates an unrefined mass of EVA. It is fed through rollers that flatten it into sheets, which are then put into a pressure oven. Ethylene-vinyl acetate is based on products from the production of petroleum and natural gas.
Hydrolysis of EVA gives ethylene vinyl alcohol (EVOH) copolymer (and acetic acid).
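Schematically, the hydrolysis converts each vinyl acetate repeat unit into a vinyl alcohol unit and releases acetic acid, a standard ester hydrolysis written here per repeat unit:

$$\mathrm{-CH_2{-}CH(OCOCH_3)-} + \mathrm{H_2O} \longrightarrow \mathrm{-CH_2{-}CH(OH)-} + \mathrm{CH_3COOH}$$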
Applications
Hot-melt adhesives (such as hot glue sticks) and top-of-the-line soccer cleats are usually made from EVA, generally with additives like wax and resin. EVA is also used as a clinginess-enhancing additive in plastic wraps. Craft-foam sheets are made of EVA and are popularly used for children's foam stickers.
EVA is also used in biomedical engineering applications as a drug-delivery device. The polymer is dissolved in an organic solvent (such as dichloromethane). Powdered drug and filler (typically an inert sugar) are added to the liquid solution and rapidly mixed to obtain a homogeneous mixture. The drug-filler-polymer mixture is then cast into a mold at −80 °C and freeze-dried until solid. These devices are used in drug delivery research to slowly release a compound. The polymer does not biodegrade within the body, but is quite inert and causes little or no reaction following implantation.
EVA is one of the materials popularly known as expanded rubber or foam rubber. EVA foam is used as padding in equipment for various sports such as ski boots, bicycle saddles, hockey pads, boxing and mixed-martial-arts gloves and helmets, wakeboard boots, waterski boots, fishing rods, and fishing-reel handles. It is typically used as a shock absorber in sports shoes, for example. (Some manufacturers of running shoes, such as Nike, market EVA-based compression-moulded foam used in the manufacture of running shoes as "Phylon".) It is used for the manufacture of floats for commercial fishing gear such as purse seine (seine fishing) and gillnets. In addition, because of its buoyancy, EVA has made its way into non-traditional products such as floating eyewear. It is also used in the photovoltaics industry as an encapsulation material for crystalline silicon solar cells in the manufacture of photovoltaic modules. EVA slippers and sandals are popular, being lightweight, easy to form, odourless, glossy, and cheaper than natural rubber. In fishing rods, EVA is used to construct handles on the rod-butt end. EVA can be used as a substitute for cork in many applications.
EVA copolymers are adhesives used in packaging, textile, bookbinding for bonding plastic films, metal surfaces, coated paper, and as redispersible powders in plasters and cement renders.
In recent years, EVA foam has seen popular use in cosplay communities, largely due to how easy it is to work with, along with its durability and comfort in comparison to traditional plastic-based costumes.
Flower-making foam is a thin sheet made of EVA, which is flexible, and is used by artists and craft makers to make artificial flowers. These foams are presented as raw sheets and they can be cut into the desired petal shape and then can be formed by ironing to assemble artificial flowers by putting these petals together.
EVA is also used in coatings formulation of good-quality interior water-borne paints at 53% primary dispersant.
Other uses
EVA is used in orthotics, surfboard and skimboard traction pads, car mats, and for the manufacturing of some artificial flowers.
EVA is used as a cold flow improver for diesel fuel. It is added at the 50–250 ppm level to diesel fuel to inhibit crystallization of waxes which could block fuel filters.
EVA is a separator in HEPA filters. EVA can easily be cut from sheets and molded to shape. It is also used to make thermoplastic mouthguards that soften in boiling water for a user-specific fit. It is also used for conditioning and waterproofing fabrics and leather. EVA finds application in the making of nicotine transdermal patches, since the copolymer binds well with other agents to form gel-like substances. EVA is also sometimes used as a material for some plastic model kit parts. One common use of EVA foam rubber is in low frequency (woofer) speaker cone membrane support rings (replacing rubber) because of its good mechanical and acoustic properties. Open cell EVA foam is used to damp high frequency acoustical diffraction from tweeter speakers and is often put in the area around the high frequency speaker driver to give better directivity and sonic imaging.
EVA may be used in custom-made dental devices with a proper approach to hygiene.
Safety and environmental considerations
Polyethylene vinyl acetate has recently become a popular alternative to polyvinyl chloride because it does not contain chlorine. As of 2014, EVA has not been found to be carcinogenic by the NTP, ACGIH, IARC, or OSHA, and has no known adverse effect on human health. Like many plastics, it is difficult to biodegrade. One study suggested it may have adverse effects on certain organisms, but its actual effect on humans has not been determined.
See also
Polyethylene
Polyvinyl acetate
Polyvinyl ester
Vinyl polymer
References
External links
List of EVA tradenames (2007; last update 2008; last archived 2022)
Acetate esters
Commodity chemicals
Copolymers
Elastomers
Plastics
Thermoplastics
Vinyl polymers | Ethylene-vinyl acetate | [
"Physics",
"Chemistry"
] | 1,611 | [
"Commodity chemicals",
"Products of chemical industry",
"Synthetic materials",
"Unsolved problems in physics",
"Elastomers",
"Amorphous solids",
"Plastics"
] |
13,422,294 | https://en.wikipedia.org/wiki/Argon%20oxygen%20decarburization | Argon oxygen decarburization (AOD) is a process primarily used in the making of stainless steel and other high-grade alloys with oxidizable elements such as chromium and aluminium. After initial melting, the metal is transferred to an AOD vessel where it is subjected to three steps of refining: decarburization, reduction, and desulfurization.
The AOD process was invented in 1954 by the Lindé Division of The Union Carbide Corporation (which became known as Praxair in 1992).
Process
The AOD process is usually divided in three main steps: decarburization, reduction, and desulfurization.
Decarburization
Prior to the decarburization step, one more step should be taken into consideration: de-siliconization, which is a very important factor for refractory lining and further refinement.
The decarburization step is controlled by the ratio of oxygen to argon or nitrogen used to remove the carbon from the metal bath. The blowing can be carried out in any number of phases, with different gas ratios, to facilitate the reaction. The gases are usually blown through a top lance (oxygen only) and through tuyeres in the sides/bottom (oxygen with an inert gas shroud). The blowing stages remove carbon by the combination of oxygen and carbon forming CO gas.
4 Cr(bath) + 3 O2 → 2 Cr2O3(slag)
Cr2O3(slag) + 3 C(bath) → 3 CO(gas) + 2 Cr(bath)
To drive the reaction toward the formation of CO, the partial pressure of CO is lowered using argon or nitrogen. Since the AOD vessel is not externally heated, the blowing stages are also used for temperature control: the burning of carbon increases the bath temperature. By the end of this process, around 97% of the Cr is retained in the steel.
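One way to see why inert-gas dilution helps is through the equilibrium expression for the net carbon–chromium reaction above. Treating the condensed species through their activities, the equilibrium constant can be sketched as

$$K = \frac{p_{\mathrm{CO}}^{3}\,a_{\mathrm{Cr}}^{2}}{a_{\mathrm{Cr_2O_3}}\,a_{\mathrm{C}}^{3}},$$

so lowering the partial pressure of CO with argon or nitrogen pulls the reaction toward CO formation, allowing carbon removal without excessive chromium oxidation.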
Reduction
After a desired carbon and temperature level have been reached the process moves to reduction. Reduction recovers the oxidized elements such as chromium from the slag. To achieve this, alloy additions are made with elements that have a higher affinity for oxygen than chromium, using either a silicon alloy or aluminium. The reduction mix also includes lime (CaO) and fluorspar (CaF2). The addition of lime and fluorspar help with driving the reduction of Cr2O3 and managing the slag, keeping the slag fluid and its volume small.
Desulfurization
Desulfurization is achieved by having a high lime concentration in the slag and a low oxygen activity in the metal bath.
S(bath) + CaO(slag) → CaS(slag) + O(bath)
So, additions of lime are added to dilute sulfur in the metal bath. Also, aluminium or silicon may be added to remove oxygen. Other trimming alloy additions might be added at the end of the step. After sulfur levels have been achieved the slag is removed from the AOD vessel and the metal bath is ready for tapping. The tapped bath is then either sent to a stir station for further chemistry trimming or to a caster for casting.
The desulfurization step is usually the first step of the process.
History
The AOD process has a significant place in the history of steelmaking, introducing a transformative method for refining stainless steel and shaping the industry's landscape.
1960s
The development of AOD technology began in the 1960s as an alternative to traditional steelmaking methods. The process was initially introduced by American chemical companies who aimed to refine stainless steel more efficiently and economically.
Late 1960s
In the late 1960s, the AOD process gained recognition for its ability to remove carbon efficiently, achieving lower carbon levels than other refining methods. It also offered the advantage of being able to produce stainless steel with low carbon content, making it suitable for various applications.
1970s
During the 1970s, the AOD process underwent further refinements and improvements. Steel companies in Europe and the United States increasingly adopted the AOD method in their operations, attracted by its flexibility and ability to produce high-quality stainless steel.
1980s
In the 1980s, the AOD process became widely accepted as a standard refining method for stainless steel worldwide. Its advantages, such as high metallic yields, precise control over chemical composition, carbon control, desulfurization capabilities, and cleaner metal production, contributed to its popularity.
Present Day
Today, the AOD process remains a prominent method in the stainless steel industry. It offers steelmakers greater flexibility in raw material selection, enabling the use of cost-effective inputs and ensuring accurate and consistent results. The process has also contributed to increased production capacity with relatively small capital investments compared to conventional electric furnace methods.
Additional uses
In addition to its primary application in the production of stainless steel, a variety of additional uses have been found for AOD across different industries and materials.
Carbon Capture and Utilization
AOD slag has shown promise as a carbon-capture construction material due to its high CO2 uptake capacity and its low cost. Carbonation curing, a process utilizing CO2 as a curing agent in concrete manufacturing, enhances the chemical properties of stainless steel slag by stabilizing it. During carbonation, γ-C2S (dicalcium silicate) in the slag reacts with CO2 to produce compounds like calcite and silica gel, resulting in increased compressive strength and improved durability of cementitious materials. The incorporation of AOD slag as a replacement material in ordinary Portland cement (OPC) during carbonation curing has been studied, demonstrating positive effects on strength and reduced porosity.
Cementitious Activity and Modifiers
AOD slag exhibits cementitious activity, but its properties can be changed by modifiers. Studies have focused on the impact of modifiers, such as B2O3 and P2O5, on preventing the crystal transition of β-C2S and improving the cementitious activity of the slag. Addition of B2O3 and P2O5 has shown curing effects and increased compressive strength. These findings suggest that proper selection of modifiers can enhance the performance of stainless steel slag in cementitious applications.
Chromium Leachability and Carbonation
Another aspect of AOD slag research is its carbonation potential and its impact on chromium leachability. Carbonation of the dicalcium silicate in AOD slag leads to the formation of various compounds, including amorphous calcium carbonate, crystalline calcite, and silica gel. The carbonation ratio of the slag affects the mineral phases, which subsequently influence chromium leachability. Optimal carbonation ratios have been identified to minimize chromium leaching risks during carbonation-related production activities.
References
American inventions
Stainless steel
Steelmaking | Argon oxygen decarburization | [
"Chemistry"
] | 1,389 | [
"Metallurgical processes",
"Steelmaking"
] |
13,422,760 | https://en.wikipedia.org/wiki/Reclaimed%20lumber | Reclaimed lumber is processed wood retrieved from its original application for purposes of subsequent use. Most reclaimed lumber comes from timbers and decking rescued from old barns, factories and warehouses, although some companies use wood from less traditional structures such as boxcars, coal mines and wine barrels. Reclaimed or antique lumber is used primarily for decoration and home building, for example for siding, architectural details, cabinetry, furniture and flooring.
Wood origins
In the United States of America, wood once functioned as the primary building material because it was strong, relatively inexpensive and abundant. Today, many of the woods that were once plentiful are only available in large quantities through reclamation. One common reclaimed wood, longleaf pine, was used to build factories and warehouses during the Industrial Revolution. The trees were slow-growing (taking 200 to 400 years to mature), tall, straight, and had a natural ability to resist mold and insects. They were also abundant. Longleaf pine grew in thick forests that spanned over of North America. Reclaimed longleaf pine is often sold as Heart Pine, where the word "heart" refers to the heartwood of the tree.
Previously common woods for building barns and other structures were redwood (Sequoia sempervirens) on the U.S. west coast and American Chestnut on the U.S. east coast. Beginning in 1904, a chestnut blight spread across the US, killing billions of American Chestnuts, so when these structures were later dismantled, they were a welcome source of this desirable but later rare wood for subsequent reuse. American Chestnut wood can be identified as pre- or post-blight by analysis of worm tracks in sawn timber. The presence of worm tracks suggests the trees were felled as dead standing timber, and may be post-blight lumber.
Barns are one of the most common sources for reclaimed wood in the United States. Those constructed through the early 19th century were typically built using whatever trees were growing on or near the builder's property. They often contain a mix of oak, chestnut, poplar, hickory and pine timber. Beam sizes were limited to what could be moved by man and horse. The wood was often hand-hewn with an axe and/or adze. Early settlers likely recognized American oak from their experience with its European species. Red, white, black, scarlet, willow, post, and pine oak varieties have all been used in North American barns.
Mill buildings throughout the Northeast also provide an abundant source of reclaimed wood. Wood that is reclaimed from these buildings includes structural timbers - such as beams, posts, and joists - along with decking, flooring, and sheathing. These buildings often have no economic or reuse possibility, can be a fire hazard, and may require varying degrees of environmental cleanup. Reclaiming lumber and brick from these retired mills is considered a better use of materials than landfill-based disposal.
Another source of reclaimed wood is old snowfence. At the end of their tenure on the mountains and plains of the Rocky Mountain region, snowfence boards are a valued source of consistent, structurally sound and reliable reclaimed wood.
Other woods recycled and reprocessed into new wood products include coast redwood, hard maple, Douglas Fir, walnuts, hickories, red and White Oak, and Eastern white pine.
Properties
Reclaimed lumber is popular for many reasons: the wood's unique appearance, its contribution to green building, the history of the wood's origins, and the wood's physical characteristics such as strength, stability and durability. The increased strength of reclaimed wood is often attributed to the wood often having been harvested from virgin growth timber, which generally grew more slowly, producing a denser grain.
Reclaimed beams can often be sawn into wider planks than newly harvested lumber, and many companies claim their products are more stable than newly-cut wood because reclaimed wood has been exposed to changes in humidity for far longer.
Reclaimed lumber industry
The reclaimed lumber industry gained momentum in the early 1980s on the West Coast when large-scale reuse of softwoods began. The industry grew due to a growing concern for environmental impact as well as declining quality in new lumber. On the East Coast, industry pioneers began selling reclaimed wood in the early 1970s but the industry stayed mostly small until the 1990s as waste disposal increased and deconstruction became a more economical alternative to demolition. A trade association, the Reclaimed Wood Council, was formed in May 2003 but dissolved in January 2008 due to a lack of participation among the larger reclaimed wood distributors.
Reclaimed lumber is sold under a number of names, such as antique lumber, distressed lumber, recovered lumber, upcycled lumber, and others. It is often confused with salvage logging.
LEED
The Leadership in Energy and Environmental Design (LEED) Green Building Rating System is the US Green Building Council's (USGBC) benchmark for designing, building and operating green buildings. To be certified, projects must first meet the prerequisites designated by the USGBC and then earn a certain number of credits within six categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, innovation and design process.
Using reclaimed wood can earn credits towards achieving LEED project certification. Because reclaimed wood is considered recycled content, it meets the 'materials and resources' criteria for LEED certification, and because some reclaimed lumber products are Forest Stewardship Council (FSC) certified, they can qualify for LEED credits under the 'certified wood' category.
See also
Pallet crafts
Timber recycling
References
Recycled building materials
Building engineering
Sustainable architecture
Recycling by product
Wood products | Reclaimed lumber | [
"Engineering",
"Environmental_science"
] | 1,138 | [
"Sustainable architecture",
"Building engineering",
"Civil engineering",
"Environmental social science",
"Architecture"
] |
13,427,598 | https://en.wikipedia.org/wiki/Comparison%20of%20digital%20audio%20editors | The following tables compare general and technical information among a number of digital audio editors and multitrack recording software.
Digital Audio Workstations
Basic general information about the software: creator/company, license/price etc.
Wave editors
Basic general information about the software: creator/company, license/price etc.
Support
Plugin support
The plugin types the software can run natively (without emulation).
File format support
The various file types the software can read/write.
Notes
See also
List of music software
Notes
Audio engineering
Audio editors
Sound recording | Comparison of digital audio editors | [
"Engineering"
] | 111 | [
"Electrical engineering",
"Audio engineering"
] |
13,428,046 | https://en.wikipedia.org/wiki/Volari%20V3 | The Volari V3 is a video card manufactured by XGI Technology.
History
The V3 was introduced on September 15, 2003. It is a budget option, available with an 8x Accelerated Graphics Port interface from Walton Chaintech Corporation. It is similar in performance to the ATI Radeon 9200 SE, but is generally lower-priced.
References
External links
XGI Technology website
PSA Walton Chaintech Corporation
Graphics cards | Volari V3 | [
"Technology"
] | 88 | [
"Computing stubs",
"Computer hardware stubs"
] |
13,428,133 | https://en.wikipedia.org/wiki/Volari%20V5 | On September 15, 2003, XGI Technology Inc introduced the Volari V5. The V5 is a video card and was available with an Accelerated Graphics Port (AGP) 8x interface in Taiwan. It is similar in terms of clock speed to the Radeon 9600 Pro and the GeForce FX 5600.
References
Graphics cards | Volari V5 | [
"Technology"
] | 71 | [
"Computing stubs",
"Computer hardware stubs"
] |
13,428,186 | https://en.wikipedia.org/wiki/Provider%20router | In Multiprotocol Label Switching (MPLS), a P router or provider router is a label switch router (LSR) that functions as a transit router of the core network. The P router is typically connected to one or more PE routers.
Here's one scenario: A customer who has facilities in LA and Atlanta wants to connect these sites over an MPLS VPN provided by AT&T. To do this, the customer would purchase a link from the on-site CE router to the PE router in AT&T's central office in LA and would also do the same thing in Atlanta. The PE routers would connect over AT&T's backbone routers (P routers) to enable the two CE routers in LA and Atlanta to communicate over the MPLS network.
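The core of what a P router does is a label swap against a label forwarding information base (LFIB). The sketch below is hypothetical: label values, interface names, and table contents are invented for illustration, and a real LSR populates its LFIB through a protocol such as LDP or RSVP-TE.

```python
# Hypothetical label forwarding information base (LFIB) for a P router.
# Label values and interfaces are invented for illustration.
LFIB = {
    # incoming label: (outgoing label, outgoing interface)
    100: (200, "ge-0/0/1"),  # LSP toward the Atlanta PE
    101: (201, "ge-0/0/2"),  # LSP toward the LA PE
}

def forward(packet):
    """Swap the top MPLS label and select the outgoing interface.
    A transit P router never inspects the customer IP header inside."""
    out_label, out_iface = LFIB[packet["label"]]
    packet["label"] = out_label
    return out_iface, packet

iface, pkt = forward({"label": 100, "payload": "customer IP packet"})
print(iface, pkt)  # ge-0/0/1 {'label': 200, 'payload': 'customer IP packet'}
```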
See also
Customer edge router
Provider edge router
References
MPLS networking | Provider router | [
"Technology"
] | 187 | [
"Computing stubs",
"Computer network stubs"
] |
13,429,072 | https://en.wikipedia.org/wiki/Scottish%20Civic%20Trust | The Scottish Civic Trust is a registered charity. Founded in 1967, and based in the Category A-listed Tobacco Merchant's House in Glasgow, the Trust aims to provide "leadership and focus in the protection, enhancement and development of Scotland's built environment". It often comments on planning applications. From 1990 to 2011 the Trust maintained the Buildings at Risk Register for Scotland on behalf of Historic Environment Scotland and delivers the popular Doors Open Days programme in Scotland. The current director of the Trust is Susan O'Connor.
See also
Cockburn Association (Edinburgh Civic Trust)
Doors Open Days Scotland
Architectural Heritage Society of Scotland
Cambusnethan House
References
External links
Buildings at Risk Register for Scotland
1967 establishments in Scotland
Charities based in Glasgow
Buildings and structures in Scotland
Interested parties in planning in Scotland
Organizations established in 1967
Architectural conservation
Heritage organisations in Scotland
Architecture in Scotland
Civic societies in the United Kingdom | Scottish Civic Trust | [
"Engineering"
] | 176 | [
"Architecture stubs",
"Architecture"
] |
13,429,182 | https://en.wikipedia.org/wiki/Nuova%20Accademia%20di%20Belle%20Arti | The Nuova Accademia di Belle Arti, "New Academy of Fine Arts", also known as NABA, is a private academy of fine art in Milan, in Lombardy in northern Italy. It has approximately 3000 students, some of whom are from abroad; it participates in the Erasmus Programme.
History
NABA was founded in Milan in 1980.
In 1994 the Nuova Accademia received one of the forty "Ambrogino" certificates of civic merit awarded each year by the Comune of Milan.
In 2008 NABA began hosting a "node" of the Planetary-Collegium research platform of the University of Plymouth.
NABA was bought by Bastogi Spa of Milan in 2002. In December 2009 Bastogi sold it to Laureate Education of Baltimore, Maryland, for €22 million. In 2017, Laureate Education sold it to Galileo Global Education as part of a $263 million deal that also included Domus Academy.
The school is listed by the Ministero dell'Istruzione, dell'Università e della Ricerca, the Italian ministry of education, as a "legally recognised academy" in the AFAM classification of schools of music, art and dance that are considered equivalent to a traditional university.
References
Art schools in Italy
Fashion schools
Design schools in Italy
Communication design
Graphic design schools
Education in Milan
Higher education in Italy
Educational institutions established in 1980
1980 establishments in Italy | Nuova Accademia di Belle Arti | [
"Engineering"
] | 283 | [
"Design",
"Communication design"
] |
13,430,116 | https://en.wikipedia.org/wiki/Microsoft%20SQL%20Server%20Master%20Data%20Services | Microsoft SQL Server Master Data Services (MDS) is a Master Data Management (MDM) product from Microsoft that ships as a part of the Microsoft SQL Server relational database management system. Master data management (MDM) allows an organization to discover and define non-transactional lists of data, and compile maintainable, reliable master lists. Master Data Services first shipped with Microsoft SQL Server 2008 R2. Microsoft SQL Server 2016 introduced enhancements to Master Data Services, such as improved performance and security, and the ability to clear transaction logs, create custom indexes, share entity data between different models, and support for many-to-many relationships.
Overview
In Master Data Services, the model is the highest level container in the structure of your master data. You create a model to manage groups of similar data. A model contains one or more entities, and entities contain members that are the data records. An entity is similar to a table.
Like other MDM products, Master Data Services aims to create a centralized data source and keep it synchronized, and thus reduce redundancies, across the applications which process the data.
Sharing its architectural core with Stratature +EDM, Master Data Services uses a Microsoft SQL Server database as the physical data store. It is a part of the Master Data Hub, which uses the database to store and manage data entities. It is a database with the software to validate and manage the data, and keep it synchronized with the systems that use the data. The master data hub has to extract the data from the source system; validate, sanitize and shape the data; remove duplicates; and update the hub repositories, as well as synchronize the external sources. The entity schemas, attributes, data hierarchies, validation rules and access control information are specified as metadata to the Master Data Services runtime. Master Data Services does not impose any limitation on the data model. Master Data Services also allows custom business rules to be defined, used for validating and sanitizing the data entering the data hub, which are then run against the data matching the specified criteria. All changes made to the data are validated against the rules, and a log of the transaction is stored persistently. Violations are logged separately, and optionally the owner is notified automatically. All the data entities can be versioned.
Master Data Services allows the master data to be categorized by hierarchical relationships, such as employee data being a subtype of organization data. Hierarchies are generated by relating data attributes. Data can be automatically categorized using rules, and the categories can be introspected programmatically. Master Data Services can also expose the data as Microsoft SQL Server views, which can be pulled by any SQL-compatible client. It uses a role-based access control system to restrict access to the data. The views are generated dynamically, so they contain the latest data entities in the master hub. It can also push out the data by writing to external journals. Master Data Services also includes a web-based UI for viewing and managing the data, which uses ASP.NET on the back end. The Silverlight front-end was replaced with HTML5 in SQL Server 2019.
Master Data Services provides a Web service interface to expose the data, as well as an API, which internally uses the exposed web services, making the feature set available programmatically for accessing and manipulating the data. It also integrates with Active Directory for authentication purposes. Unlike +EDM, Master Data Services supports Unicode characters, as well as multilingual user interfaces.
SQL Server 2016 introduced a significant performance increase in Master Data Services over previous versions.
Terminology
Model is the highest level of an MDS instance. It is the primary container for specific groupings of master data. In many ways it is very similar to the idea of a database.
Entities are containers created within a model. Entities provide a home for members, and are in many ways analogous to database tables. (e.g. Customer)
Members are analogous to the records in a database table (Entity) e.g. Will Smith. Members are contained within entities. Each member is made up of two or more attributes.
Attributes are analogous to the columns within a database table (Entity) e.g. Surname. Attributes exist within entities and help describe members (the records within the table). Name and Code attributes are created by default for each entity and serve to describe and uniquely identify leaf members. Attributes can be related to other attributes from other entities which are called 'domain-based' attributes. This is similar to the concept of a foreign key.
Other attributes, however, will be of type 'free-form' (most common) or 'file'.
Attribute Groups are explicitly defined collections of particular attributes. Say you have an entity "customer" that has 50 attributes — too much information for many of your users. Attribute groups enable the creation of custom sets of hand-picked attributes that are relevant for specific audiences. (e.g. "customer - delivery details" that would include just their name and last known delivery address). This is very similar to a database view.
Hierarchies organize members into either Derived or Explicit hierarchical structures. Derived hierarchies, as the name suggests, are derived by the MDS engine based on the relationships that exist between attributes. Explicit hierarchies are created by hand using both leaf and consolidated members.
Business Rules can be created and applied against model data to ensure that custom business logic is adhered to. In order to be committed into the system, data must pass all business rule validations applied to it. For example, within the Customer entity you may want to create a business rule that ensures all members' 'Country' attribute contains either the text "USA" or "Canada". Once created and run, the business rule will verify that all the data is correct before accepting it into the approved model (a minimal sketch of such a check appears after this list).
Versions provide system owners / administrators with the ability to Open, Lock or Commit a particular version of a model and the data contained within it at a particular point in time. As the content within a model varies, grows or shrinks over time versions provide a way of managing metadata so that subscribing systems can access to the correct content.
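Below is a minimal sketch of the "Country must be USA or Canada" business rule described above, applied to some invented Customer members. The entity layout, member data, and the validation helper are all hypothetical and do not reflect the actual MDS API.

```python
# Minimal sketch of an MDS-style business rule check (hypothetical data).
ALLOWED_COUNTRIES = {"USA", "Canada"}

members = [
    {"Code": "C001", "Name": "Will Smith", "Country": "USA"},
    {"Code": "C002", "Name": "Jane Doe", "Country": "Mexico"},
]

def validate(member):
    """Return a list of business rule violations for one member."""
    issues = []
    if member["Country"] not in ALLOWED_COUNTRIES:
        issues.append(f"{member['Code']}: Country must be USA or Canada")
    return issues

for m in members:
    for issue in validate(m):
        print("Rule violation:", issue)  # only C002 is flagged
```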
References
External links
Microsoft SQL Server 2016 Master Data Services
What's New in Master Data Services (MDS)
Data management
SQL Server Master Data Services
2010 software | Microsoft SQL Server Master Data Services | [
"Technology"
] | 1,313 | [
"Data management",
"Data"
] |
13,430,526 | https://en.wikipedia.org/wiki/GeoVax | GeoVax is a clinical-stage biotechnology company which develops vaccines. GeoVax's development platform uses Modified Vaccinia Ankara (MVA) vector technology, with improvements to antigen design and manufacturing capabilities. GeoVax uses recombinant DNA or recombinant viruses to produce virus-like particles (VLPs) in the person being vaccinated.
History
Formed in 2001 in Atlanta, Georgia, the approach was originally based on preclinical work done by Harriet Robinson from 1992 to 2002. In 2002, laboratory space, equipment and personnel were acquired and work on an HIV-1 vaccine development plan began. In May 2006, human clinical trials of the vaccine began. The company is now working on vaccines for Marburg, Lassa fever, Ebola, Zika and COVID-19.
In January 2020, the company announced initiation of efforts to develop a vaccine against novel coronavirus disease (COVID-19) caused by the SARS-CoV-2 coronavirus. The GeoVax program has been added to the "Draft Landscape of COVID-19 Candidate Vaccines" by the World Health Organization.
On September 25, 2020, GeoVax closed an underwritten public offering, raising gross proceeds of $12.8 million. Concurrent with the offering, GeoVax common stock began trading on The Nasdaq Capital Market under the symbol "GOVX".
Recently, the company began collaborating with Emory University on the development of a therapeutic vaccine for human papillomavirus (HPV) infection, with a specific focus on head and neck cancer (HNC).
Non-Covid Pipeline: Disease Definitions and Stats
HIV/AIDS
In human clinical trials of the company's HIV vaccines, GeoVax demonstrated that the VLPs are safe and elicit both strong and durable humoral and cellular immune responses.
Pre-Clinical and Clinical History
In preclinical studies (using SHIV), 96% of primates (22/23) were protected from the virus over a three-and-a-half-year period when vaccinated, while by contrast five out of six unvaccinated primates died within eight months after being infected. The vaccine combines a DNA vaccine and an MVA (modified vaccinia Ankara) vaccine, both of which deliver genes into primate cells, leading to foreign protein expression. With the GeoVax vaccine, a variety of HIV proteins (both surface and internal) are expressed from genes that include the Env, Pol and Gag genes.
GeoVax is currently conducting multiple site Phase 2 Human clinical trials for HIV/AIDS preventive vaccine products following successful completion of multiple Phase 1 human clinical trials.
In 2010 GeoVax began enrolling patients in a Phase 1 therapeutic clinical trial for individuals already infected with HIV. The long-term therapeutic goal is to develop a vaccine-based mechanism to treat infected individuals that either prevents or significantly slows progression to symptomatic HIV, including AIDS, by stimulating an infected individual's immune system to resist the progression of infection. The study is being completed at the AIDS Research Consortium of Atlanta.
Past trials
Therapeutic Vaccine Clinical Trials
During 2010, the AIDS Research Consortium of Atlanta (ARCA) began patient recruitment for a Phase 1 clinical trial sponsored by GeoVax Labs, Inc., investigating GeoVax's DNA/MVA vaccine as a treatment for individuals already infected with HIV. The trial is primarily designed as a safety study, but will also collect and report data on the vaccine's ability to elicit protective immune responses and control re-emergent virus during a pause in drug treatment. As part of the trial protocol, a volunteer must have begun drug treatment in the first year of infection and have achieved 6 months of stable viral control on drug treatment before entry into the trial and receipt of the first vaccination.
Preventive Vaccine Clinical Trials
GeoVax published results of Phase 1 safety and immunogenicity testing for its preventive vaccine trial on March 1, 2011. Based on published results, GeoVax will be advancing two regimens forward into the Phase 2a HVTN 205 trial. Regimens selected for advancement are the DDMM combination, which produced the highest T cell response rates, and the MMM combination. The MMM regimen produced the highest antibody-induced immune response.
The DDMM regimen consists of priming with two doses of the pGA2/JS7 recombinant DNA vaccine and boosting with two doses of VA/HIV62B recombinant MVA vaccine. The MMM regimen consists of priming and boosting with a total of three doses of the recombinant MVA vaccine.
HVTN 205 Study—In early 2009, the HVTN began enrolling patients in a preventive Phase 2 clinical trial sponsored by GeoVax Labs, Inc. This study is investigating a prime-boost approach using GeoVax's combination DNA/MVA vaccine. During 2010, the study was expanded from 225 to 300 participants to include an arm gathering additional data on three MVA injections, without the use of the DNA component, which is an addition to the original arm testing two DNA priming and two MVA boosting injections. Preliminary results were announced by the company on December 9, 2010. Preliminary results indicate an excellent safety profile and highly reproducible immunogenicity subsequently confirmed by the official publication above in The Journal of Infectious Diseases.
Phase 2a Clinical Trial Expansion
In April 2011, GeoVax Labs, Inc. in collaboration with the National Institute of Allergy and Infectious Diseases (NIAID), part of the U.S. National Institutes of Health (NIH) and the HIV Vaccine Trials Network (HVTN) announced an expansion of the current Phase 2a clinical trial to include a new component. The new trial is HVTN 094 and will be conducted by the HIV Vaccine Trials Network. "Specifically, the HVTN plans to clinically test a novel vaccine product developed by GeoVax scientists that expresses human granulocyte-macrophage colony stimulating factor (GM-CSF) in combination with inactivated HIV proteins. The novel vaccine consists of a recombinant DNA vaccine co-expressing human GM-CSF and non-infectious HIV virus-like particles. The DNA vaccine is used to prime immune responses that are subsequently boosted by vaccination with a recombinant modified vaccinia Ankara (MVA) vectored vaccine. The MVA expresses the HIV virus-like particles, but does not express GM-CSF. The regimen builds on the GeoVax DNA/MVA vaccine that is currently in Phase 2a clinical testing through the HVTN."
References
Notes
External links
GeoVax
GeoVax News Links
GeoVax Video Links
Relevant Published Research
Biotechnology companies of the United States
Companies based in Atlanta
HIV vaccine research
Biotechnology companies established in 2001
2001 establishments in Georgia (U.S. state) | GeoVax | [
"Chemistry"
] | 1,432 | [
"HIV vaccine research",
"Drug discovery"
] |
13,431,106 | https://en.wikipedia.org/wiki/PRINTS | In molecular biology, the PRINTS database is a collection of so-called "fingerprints": it provides both a detailed annotation resource for protein families, and a diagnostic tool for newly determined sequences. A fingerprint is a group of conserved motifs taken from a multiple sequence alignment - together, the motifs form a characteristic signature for the aligned protein family. The motifs themselves are not necessarily contiguous in sequence, but may come together in 3D space to define molecular binding sites or interaction surfaces. The particular diagnostic strength of fingerprints lies in their ability to distinguish sequence differences at the clan, superfamily, family and subfamily levels. This allows fine-grained functional diagnoses of uncharacterised sequences, allowing, for example, discrimination between family members on the basis of the ligands they bind or the proteins with which they interact, and highlighting potential oligomerisation or allosteric sites.
PRINTS is a founding partner of the integrated resource, InterPro, a widely used database of protein families, domains and functional sites.
References
External links
PRINTS Database (University of Manchester Bioinformatics Education and Research)
Biological databases
Department of Computer Science, University of Manchester | PRINTS | [
"Biology"
] | 233 | [
"Bioinformatics",
"Biological databases"
] |
13,431,137 | https://en.wikipedia.org/wiki/Indoor%20positioning%20system | An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations.
A large variety of techniques and devices are used to provide indoor positioning, ranging from reconfigured devices already deployed, such as smartphones, WiFi and Bluetooth antennas, digital cameras, and clocks, to purpose-built installations with relays and beacons strategically placed throughout a defined space. Lights, radio waves, magnetic fields, acoustic signals, and behavioral analytics are all used in IPS networks. IPS can achieve position accuracy of 2 cm, which is on par with RTK-enabled GNSS receivers that can achieve 2 cm accuracy outdoors.
IPS use different technologies, including distance measurement to nearby anchor nodes (nodes with known fixed positions, e.g. WiFi / LiFi access points, Bluetooth beacons or Ultra-Wideband beacons), magnetic positioning, dead reckoning. They either actively locate mobile devices and tags or provide ambient location or environmental context for devices to get sensed.
The localized nature of an IPS has resulted in design fragmentation, with systems making use of various optical, radio, or even acoustic technologies.
IPS has broad applications in commercial, military, retail, and inventory tracking industries. There are several commercial systems on the market, but no standards for an IPS system. Instead each installation is tailored to spatial dimensions, building materials, accuracy needs, and budget constraints.
For smoothing to compensate for stochastic (unpredictable) errors there must be a sound method for reducing the error budget significantly. The system might include information from other systems to cope for physical ambiguity and to enable error compensation.
Detecting the device's orientation (often referred to as the compass direction in order to disambiguate it from smartphone vertical orientation) can be achieved either by detecting landmarks inside images taken in real time, or by using trilateration with beacons. There also exist technologies for detecting magnetometric information inside buildings or locations with steel structures or in iron ore mines.
Applicability and precision
Due to the signal attenuation caused by construction materials, the satellite-based Global Positioning System (GPS) loses significant power indoors, affecting the coverage needed for receivers to see at least four satellites. In addition, multiple reflections at surfaces cause multi-path propagation, producing uncontrollable errors. These very same effects degrade all known solutions for indoor locating that use electromagnetic waves from indoor transmitters to indoor receivers. A bundle of physical and mathematical methods is applied to compensate for these problems. A promising direction for correcting radio-frequency positioning errors has been opened by the use of alternative sources of navigational information, such as an inertial measurement unit (IMU), monocular-camera simultaneous localization and mapping (SLAM), and WiFi SLAM. Integration of data from navigation systems with different physical principles can increase the accuracy and robustness of the overall solution.
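A minimal sketch of such integration is a one-dimensional complementary filter that blends smooth but drifting dead-reckoned (IMU) motion with noisy but unbiased radio fixes. All signals, the bias, and the blending gain below are invented for illustration:

```python
import random

ALPHA = 0.95  # trust the smooth IMU increments, correct slowly with RF fixes
true_pos = imu_pos = fused = 0.0
step_motion, imu_bias = 0.5, 0.02  # meters per step; small dead-reckoning drift

for _ in range(50):
    true_pos += step_motion
    imu_delta = step_motion + imu_bias        # what IMU integration reports
    imu_pos += imu_delta
    rf_fix = true_pos + random.gauss(0, 1.5)  # noisy but unbiased radio fix
    # Complementary filter: propagate with the IMU, pull toward the RF fix.
    fused = ALPHA * (fused + imu_delta) + (1 - ALPHA) * rf_fix

print(f"true {true_pos:.1f} m | IMU only {imu_pos:.1f} m | fused {fused:.1f} m")
```

The IMU-only estimate accumulates the bias without bound, while the fused estimate stays near the true position despite the noisy fixes.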
The U.S. Global Positioning System (GPS) and other similar Global navigation satellite systems (GNSS) are generally not suitable to establish indoor locations, since microwaves will be attenuated and scattered by roofs, walls and other objects. However, in order to make the positioning signals become ubiquitous, integration between GPS and indoor positioning can be made.
Currently, GNSS receivers are becoming more and more sensitive due to increasing microchip processing power. High Sensitivity GNSS receivers are able to receive satellite signals in most indoor environments and attempts to determine the 3D position indoors have been successful. Besides increasing the sensitivity of the receivers, the technique of A-GPS is used, where the almanac and other information are transferred through a mobile phone.
However, despite the fact that proper coverage for the required four satellites to locate a receiver is not achieved with all current designs (2008–11) for indoor operations, GPS emulation has been deployed successfully in Stockholm metro. GPS coverage extension solutions have been able to provide zone-based positioning indoors, accessible with standard GPS chipsets like the ones used in smartphones.
Types of usage
Locating and positioning
While most current IPS are able to detect the location of an object, they are so coarse that they cannot be used to detect the orientation or direction of an object.
Locating and tracking
One of the methods used to achieve sufficient operational suitability is "tracking", in which a sequence of determined locations forms a trajectory from the first to the most recent location. Statistical methods then serve to smooth the locations in the track into something resembling the physical capabilities of the object to move. This smoothing must be applied when a target moves, and also for a resident target, to compensate for erratic measurements; otherwise the single resident location, or even the followed trajectory, would consist of an erratic sequence of jumps.
Identification and segregation
In most applications the population of targets is larger than just one. Hence the IPS must serve a proper specific identification for each observed target and must be capable to segregate and separate the targets individually within the group. An IPS must be able to identify the entities being tracked, despite the "non-interesting" neighbors. Depending on the design, either a sensor network must know from which tag it has received information, or a locating device must be able to identify the targets directly.
Wireless technologies
Any wireless technology can be used for locating. Many different systems take advantage of existing wireless infrastructure for indoor positioning. There are three primary system topology options for hardware and software configuration, network-based, terminal-based, and terminal-assisted. Positioning accuracy can be increased at the expense of wireless infrastructure equipment and installations.
Wi-Fi-based positioning system (WPS)
Wi-Fi positioning system (WPS) is used where GPS is inadequate. The localization technique used for positioning with wireless access points is based on measuring the intensity of the received signal (received signal strength, RSS) and the method of "fingerprinting". To increase the accuracy of fingerprinting methods, statistical post-processing techniques (like Gaussian process theory) can be applied to transform the discrete set of "fingerprints" into a continuous distribution of RSSI of each access point over the entire location. Typical parameters useful to geolocate the Wi-Fi hotspot or wireless access point include the SSID and the MAC address of the access point. The accuracy depends on the number of positions that have been entered into the database. Signal fluctuations can increase errors and inaccuracies in the path of the user.
Bluetooth
Originally, Bluetooth was concerned with proximity, not with exact location.
Bluetooth was not intended to offer a pinned location like GPS; it is known as a geo-fence or micro-fence solution, which makes it an indoor proximity solution, not an indoor positioning solution.
Micromapping and indoor mapping have been linked to Bluetooth and to the Bluetooth LE based iBeacon promoted by Apple Inc. Large-scale indoor positioning systems based on iBeacons have been implemented and applied in practice.
Bluetooth speaker position and home networks can be used for broad reference.
In 2021 Apple released its AirTags, which use a combination of Bluetooth and UWB technology to track Apple devices within the Find My network, causing a surge of popularity for tracking technology.
Choke point concepts
A simple concept of location indexing and presence reporting for tagged objects uses known sensor identification only. This is usually the case with passive radio-frequency identification (RFID) / NFC systems, which do not report the signal strengths and various distances of single tags or of a bulk of tags, and do not update any previously known location coordinates of the sensor or the current location of any tags. Operability of such approaches requires a narrow passage to prevent tags from passing by out of range.
Grid concepts
Instead of long-range measurement, a dense network of low-range receivers may be arranged, e.g. in a grid pattern for economy, throughout the space being observed. Due to the low range, a tagged entity will be identified by only a few close, networked receivers. An identified tag must be within range of the identifying reader, allowing a rough approximation of the tag location. Advanced systems combine visual coverage from a camera grid with wireless coverage for the rough location.
Long range sensor concepts
Most systems use a continuous physical measurement (such as angle and distance, or distance only) along with the identification data in one combined signal. The reach of these sensors mostly covers an entire floor, an aisle or just a single room. Short-reach solutions are applied with multiple sensors and overlapping reach.
Angle of arrival
Angle of arrival (AoA) is the angle from which a signal arrives at a receiver. AoA is usually determined by measuring the time difference of arrival (TDOA) between multiple antennas in a sensor array. In other receivers, it is determined by an array of highly directional sensors—the angle can be determined by which sensor received the signal. AoA is usually used with triangulation and a known base line to find the location relative to two anchor transmitters.
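To make the geometry concrete, here is a minimal Python sketch of triangulation from two anchors (not from the source; the anchor coordinates and the angle convention, radians measured counterclockwise from the +x axis, are illustrative assumptions):

    import math

    def triangulate(anchor1, anchor2, bearing1, bearing2):
        """Intersect two bearing rays from two known anchors to get
        the 2-D target position (AoA triangulation)."""
        x1, y1 = anchor1
        x2, y2 = anchor2
        # Each ray: (x, y) = anchor + t * (cos(bearing), sin(bearing))
        c1, s1 = math.cos(bearing1), math.sin(bearing1)
        c2, s2 = math.cos(bearing2), math.sin(bearing2)
        denom = c1 * s2 - s1 * c2
        if abs(denom) < 1e-12:
            raise ValueError("bearings are parallel; no unique intersection")
        t = ((x2 - x1) * s2 - (y2 - y1) * c2) / denom
        return (x1 + t * c1, y1 + t * s1)

    # Two anchors 10 m apart on the x axis, both sighting the same target
    print(triangulate((0, 0), (10, 0), math.radians(45), math.radians(135)))
    # ≈ (5.0, 5.0)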
Time of arrival
Time of arrival (ToA, also time of flight) is the amount of time a signal takes to propagate from transmitter to receiver. Because the signal propagation rate is constant and known (ignoring differences in mediums) the travel time of a signal can be used to directly calculate distance. Multiple measurements can be combined with trilateration and multilateration to find a location. This is the technique used by GPS and Ultra Wideband systems. Systems which use ToA, generally require a complicated synchronization mechanism to maintain a reliable source of time for sensors (though this can be avoided in carefully designed systems by using repeaters to establish coupling).
The accuracy of the TOA based methods often suffers from massive multipath conditions in indoor localization, which is caused by the reflection and diffraction of the RF signal from objects (e.g., interior wall, doors or furniture) in the environment. However, it is possible to reduce the effect of multipath by applying temporal or spatial sparsity based techniques.
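As a rough illustration of the ToA computation (assuming perfectly synchronized clocks, 2-D geometry, and three anchors with known coordinates, all simplifications of a real system), a time of flight is converted to a range via the propagation speed, and three ranges are combined by linearized trilateration:

    import math

    C = 299_792_458.0  # propagation speed (speed of light), m/s

    def toa_to_distance(seconds):
        """Convert a one-way time of flight into a range."""
        return C * seconds

    def trilaterate(anchors, ranges):
        """2-D trilateration: subtracting the circle equations pairwise
        removes the quadratic terms, leaving a 2x2 linear system."""
        (x1, y1), (x2, y2), (x3, y3) = anchors
        r1, r2, r3 = ranges
        a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
        a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
        b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
        b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
        det = a11 * a22 - a12 * a21
        return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

    anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    ranges = [math.dist(a, (3.0, 4.0)) for a in anchors]  # target at (3, 4)
    print(trilaterate(anchors, ranges))  # ≈ (3.0, 4.0)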
Joint angle and time of arrival
Joint estimation of angles and times of arrival is another method of estimating the location of the user. Instead of requiring multiple access points and techniques such as triangulation and trilateration, a single access point can locate a user from combined angles and times of arrival. Moreover, techniques that leverage both space and time dimensions can increase the degrees of freedom of the whole system and create more virtual resources to resolve more sources, via subspace approaches.
Received signal strength indication
Received signal strength indication (RSSI) is a measurement of the power level received by a sensor. Because radio waves propagate according to the inverse-square law, distance can be approximated (typically to within 1.5 meters in ideal conditions and 2 to 4 meters in standard conditions) based on the relationship between transmitted and received signal strength (the transmission strength is a constant based on the equipment being used), as long as no other errors contribute to faulty results. The inside of buildings is not free space, so accuracy is significantly impacted by reflection and absorption from walls. Non-stationary objects such as doors, furniture, and people can pose an even greater problem, as they can affect the signal strength in dynamic, unpredictable ways.
Many systems use enhanced Wi-Fi infrastructure to provide location information. None of these systems works properly with arbitrary infrastructure as-is. Unfortunately, Wi-Fi signal strength measurements are extremely noisy, so there is ongoing research focused on making more accurate systems.
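A common way to turn RSSI into a range estimate is the log-distance path-loss model, a generalization of the inverse-square law. The Python sketch below is illustrative, not from the source; the reference RSSI at 1 m and the path-loss exponent are environment-specific calibration values that must be measured on site.

    def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
        """Invert the log-distance path-loss model:
        RSSI(d) = RSSI(1 m) - 10 * n * log10(d).
        rssi_at_1m and path_loss_exp (n) are calibration assumptions;
        n is about 2 in free space and larger indoors."""
        return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

    print(rssi_to_distance(-60))                     # ~10 m in free space
    print(rssi_to_distance(-60, path_loss_exp=3.0))  # ~4.6 m with an indoor exponent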
Other wireless technologies
Radio frequency identification (RFID): passive tags are very cost-effective, but do not support any metrics
Ultra-wideband (UWB): reduced interference with other devices
Infrared (IR): previously included in most mobile devices
Gen2IR (second generation infrared)
Visible light communication (VLC), as LiFi: can use existing lighting systems
Ultrasound: waves move very slowly, which results in much higher accuracy
Other technologies
Non-radio technologies can be used for positioning without using the existing wireless infrastructure. This can provide increased accuracy at the expense of costly equipment and installations.
Magnetic positioning
Magnetic positioning can offer pedestrians with smartphones an indoor accuracy of 1–2 meters with 90% confidence level, without using additional wireless infrastructure for positioning. Magnetic positioning is based on the iron inside buildings that creates local variations in the Earth's magnetic field. Un-optimized compass chips inside smartphones can sense and record these magnetic variations to map indoor locations.
Inertial measurements
Pedestrian dead reckoning and other approaches for positioning of pedestrians use an inertial measurement unit carried by the pedestrian, either by measuring steps indirectly (step counting) or in a foot-mounted approach, sometimes referring to maps or other additional sensors to constrain the inherent sensor drift encountered with inertial navigation. MEMS inertial sensors suffer from internal noises which result in cubically growing position error with time. To reduce the error growth in such devices, a Kalman filtering based approach is often used (a minimal sketch follows below).
However, to make the system capable of building a map itself, a SLAM algorithm framework is used.
Inertial measurements generally cover the differentials of motion, so the location is determined by integration, which requires integration constants to provide results. The actual position estimate can be found as the maximum of a 2-D probability distribution, recomputed at each step taking into account the noise model of all the sensors involved and the constraints posed by walls and furniture.
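As a hedged illustration of the Kalman filtering idea referenced above (a real pedestrian dead-reckoning filter would track 2-D or 3-D position, heading and sensor-bias states), the 1-D Python sketch below fuses dead-reckoned displacements, whose variance grows with every step, with an occasional absolute position fix; all numerical values are invented for the example.

    def kalman_1d(estimate, variance, steps):
        """Minimal 1-D Kalman filter: dead-reckoned displacements drive
        the prediction, occasional absolute fixes drive the update.
        Each step is (displacement, q, fix, r): q is process-noise
        variance added per step; fix/r are an optional measurement and
        its variance (fix=None means no fix this step)."""
        for displacement, q, fix, r in steps:
            estimate += displacement            # predict: dead reckoning
            variance += q                       # drift accumulates
            if fix is not None:
                k = variance / (variance + r)   # Kalman gain
                estimate += k * (fix - estimate)
                variance *= 1.0 - k
        return estimate, variance

    # Ten 0.7 m steps with drift, then one absolute fix at 8.2 m
    steps = [(0.7, 0.04, None, None)] * 10 + [(0.7, 0.04, 8.2, 0.5)]
    print(kalman_1d(0.0, 0.0, steps))  # ≈ (7.93, 0.23): pulled toward the fix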
Based on motion and users' walking behaviors, an IPS can estimate users' locations using machine learning algorithms.
Positioning based on visual markers
A visual positioning system can determine the location of a camera-enabled mobile device by decoding location coordinates from visual markers. In such a system, markers are placed at specific locations throughout a venue, each marker encoding that location's coordinates: latitude, longitude, level and height above the floor. Measuring the visual angle from the device to the marker enables the device to estimate its own location coordinates in reference to the marker.
Location based on known visual features
A collection of successive snapshots from a mobile device's camera can build a database of images that is suitable for estimating location in a venue. Once the database is built, a mobile device moving through the venue can take snapshots that can be matched against the venue's database, yielding location coordinates. These coordinates can be used in conjunction with other location techniques for higher accuracy. Note that this can be a special case of sensor fusion where a camera plays the role of yet another sensor.
Mathematics
Once sensor data has been collected, an IPS tries to determine the location from which the received transmission was most likely collected. The data from a single sensor is generally ambiguous and must be resolved by a series of statistical procedures to combine several sensor input streams.
Empirical method
One way to determine position is to match the data from the unknown location with a large set of known locations using an algorithm such as k-nearest neighbor. This technique requires a comprehensive on-site survey and will be inaccurate with any significant change in the environment (due to moving persons or moved objects).
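A minimal Python sketch of this fingerprinting idea follows (the survey data and RSS values are invented for illustration): the observed RSS vector is compared with surveyed fingerprints in signal space, and the surveyed coordinates of the k nearest are averaged.

    import math

    def knn_locate(fingerprints, observed, k=3):
        """Average the surveyed coordinates of the k fingerprints whose
        RSS vectors are closest (Euclidean, in signal space) to the
        observed RSS vector.
        fingerprints: list of ((x, y), rss_vector) from the site survey."""
        def rss_distance(a, b):
            return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

        nearest = sorted(fingerprints,
                         key=lambda fp: rss_distance(fp[1], observed))[:k]
        return (sum(pos[0] for pos, _ in nearest) / k,
                sum(pos[1] for pos, _ in nearest) / k)

    survey = [((0, 0), [-40, -70]), ((0, 5), [-50, -60]),
              ((5, 0), [-55, -72]), ((5, 5), [-60, -50])]
    print(knn_locate(survey, [-48, -62], k=2))  # -> (0.0, 2.5)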
Mathematical modeling
The location is calculated mathematically by approximating signal propagation and finding angles and/or distances. Inverse trigonometry is then used to determine the location:
Trilateration (distance from anchors)
Triangulation (angle to anchors)
Advanced systems combine more accurate physical models with statistical procedures (a minimal sequential Monte Carlo sketch follows the list below):
Bayesian statistical analysis (probabilistic model)
Kalman filtering (for estimating proper value streams under noise conditions).
Sequential Monte Carlo method (for approximating the Bayesian statistical models).
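The following 1-D sequential Monte Carlo (particle filter) sketch, referenced above, is purely illustrative: the motion model, the Gaussian measurement likelihood, the noise levels and the multinomial resampling scheme are all simplifying assumptions.

    import math
    import random

    def particle_filter_step(particles, move, move_noise, measurement, meas_noise):
        """One sequential Monte Carlo update for 1-D positioning:
        propagate particles through the motion model, weight them by
        the measurement likelihood, then resample by weight."""
        # Propagate: motion model with additive Gaussian noise
        particles = [p + move + random.gauss(0, move_noise) for p in particles]
        # Weight: Gaussian likelihood of the position measurement
        weights = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2)
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample in proportion to weight (multinomial resampling)
        return random.choices(particles, weights=weights, k=len(particles))

    random.seed(1)
    particles = [random.uniform(0.0, 20.0) for _ in range(500)]
    for step in range(5):  # target starts near 5 m and moves +1 m per step
        particles = particle_filter_step(particles, move=1.0, move_noise=0.2,
                                         measurement=5.0 + step, meas_noise=1.0)
    print(sum(particles) / len(particles))  # posterior mean, near 9 m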
Uses
The major consumer benefit of indoor positioning is the expansion of location-aware mobile computing indoors. As mobile devices become ubiquitous, contextual awareness for applications has become a priority for developers. Most applications currently rely on GPS, however, and function poorly indoors. Applications benefiting from indoor location include:
Accessibility aids for the visually impaired.
Augmented reality
School campus
Museum guided tours
Shopping malls, including hypermarkets.
Warehouses
Factory
Airports, bus, train and subway stations
Parking lots, including those in hypermarkets
Targeted advertising
Social networking service
Hospitals
Hotels
Sports
Cruise ships
Indoor robotics
Tourism
Amusement parks
See also
References
External links
Asset Agent Indoor RTLS, based on active RFID and Chirp technology
Pozyx Indoor Real-Time Location System (RTLS), based on UWB technology
OpenHPS Hybrid Solution for Indoor and Outdoor Positioning
Micromapping in OpenStreetMap
Indoor Mapping in OpenStreetMap
IPIN Conferences.
Automatic identification and data capture
Radio-frequency identification
Radio navigation
Ubiquitous computing
Maps
Geomagnetism
Geopositioning | Indoor positioning system | [
"Technology",
"Engineering"
] | 3,409 | [
"Radio electronics",
"Wireless locating",
"Wireless networking",
"Indoor positioning system",
"Data",
"Automatic identification and data capture",
"Radio-frequency identification"
] |
13,431,536 | https://en.wikipedia.org/wiki/Material%20point%20method | The material point method (MPM) is a numerical technique used to simulate the behavior of solids, liquids, gases, and any other continuum material. In particular, it is a robust spatial discretization method for simulating multi-phase (solid-fluid-gas) interactions. In the MPM, a continuum body is described by a number of small Lagrangian elements referred to as 'material points'. These material points are surrounded by a background mesh/grid that is used to calculate terms such as the deformation gradient. Unlike the finite element method, finite volume method or finite difference method, the MPM is not a mesh-based method and is instead categorized as a meshless/meshfree or continuum-based particle method, examples of which are smoothed particle hydrodynamics and peridynamics. Despite the presence of a background mesh, the MPM does not encounter the drawbacks of mesh-based methods (high-deformation tangling, advection errors, etc.), which makes it a promising and powerful tool in computational mechanics.
The MPM was originally proposed, as an extension of a similar method known as FLIP (itself an extension of a method called PIC), to computational solid dynamics in the early 1990s by Professors Deborah L. Sulsky, Zhen Chen and Howard L. Schreyer at the University of New Mexico. After this initial development, the MPM has been further developed both in national laboratories and at the University of New Mexico, Oregon State University, University of Utah and more across the US and the world. Recently the number of institutions researching the MPM has been growing, with added popularity and awareness coming from various sources such as the MPM's use in the Disney film Frozen.
The algorithm
An MPM simulation consists of the following stages:
(Prior to the time integration phase)
Initialization of grid and material points.
A geometry is discretized into a collection of material points, each with its own material properties and initial conditions (velocity, stress, temperature, etc.)
The grid, being used only to provide a place for gradient calculations, is normally made to cover an area large enough to contain the expected extent of the computational domain needed for the simulation.
(During the time integration phase - explicit formulation)
Material point quantities are extrapolated to grid nodes.
Material point mass (m_p), momenta (m_p·v_p), stresses (σ_p), and external forces (f_p^ext) are extrapolated to the nodes at the corners of the cells within which the material points reside. This is most commonly done using standard linear shape functions (N_i), the same as those used in FEM.
The grid uses the material point values to create the masses (m_i), velocities (v_i), and internal and external force vectors (f_i^int, f_i^ext) for the nodes.
Equations of motion are solved on the grid.
Newton's second law is solved to obtain the nodal acceleration (a_i = f_i / m_i).
New nodal velocities are found (v_i ← v_i + a_i·Δt).
Derivative terms are extrapolated back to material points
Material point acceleration (a_p) and the deformation gradient (F_p) (or strain rate (ε̇_p), depending on the strain theory used) are extrapolated from the surrounding nodes using similar shape functions to before (N_i).
Variables on the material points: positions, velocities, strains, stresses etc. are then updated with these rates depending on integration scheme of choice and a suitable constitutive model.
Resetting of grid.
Now that the material points are fully updated at the next time step, the grid is reset to allow for the next time step to begin.
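The cycle above can be made concrete with a toy 1-D explicit MPM step in Python. This is an illustrative sketch, not a reference implementation: the linear hat shape functions match the text, but the linear-elastic constitutive update and every parameter value are assumptions chosen for brevity.

    import numpy as np

    # Toy 1-D explicit MPM step: linear (hat) shape functions on a fixed
    # grid, linear elasticity as a stand-in constitutive model. Symbols
    # mirror the text: m_p, v_p, sigma_p on points; m_i, v_i, f_i on nodes.
    dx, n_nodes, dt = 0.1, 11, 1e-4          # grid spacing, node count, time step
    E, rho = 1e4, 1000.0                     # Young's modulus, density (assumed)

    x_p = np.arange(0.025, 0.5, 0.05)        # two material points per cell
    m_p = np.full(x_p.size, rho * dx / 2)    # point masses
    v_p = np.full(x_p.size, 0.1)             # initial point velocities
    vol_p = np.full(x_p.size, dx / 2)        # point volumes
    sigma_p = np.zeros(x_p.size)             # point stresses (1-D)

    def shape(xp):
        """Hat functions: a point interacts with the two nodes of its
        cell; returns node indices, weights and weight gradients."""
        i = int(xp // dx)
        xi = (xp - i * dx) / dx
        return (np.array([i, i + 1]), np.array([1 - xi, xi]),
                np.array([-1 / dx, 1 / dx]))

    # 1) Extrapolate point mass, momentum and internal force to the nodes
    m_i = np.zeros(n_nodes); mv_i = np.zeros(n_nodes); f_i = np.zeros(n_nodes)
    for p in range(x_p.size):
        nodes, N, dN = shape(x_p[p])
        m_i[nodes] += N * m_p[p]
        mv_i[nodes] += N * m_p[p] * v_p[p]
        f_i[nodes] -= dN * vol_p[p] * sigma_p[p]       # internal force

    # 2) Solve the equations of motion on the grid (Newton's second law)
    active = m_i > 1e-12                               # nodes carrying mass
    a_i = np.zeros(n_nodes); v_i = np.zeros(n_nodes)
    a_i[active] = f_i[active] / m_i[active]
    v_i[active] = mv_i[active] / m_i[active] + a_i[active] * dt

    # 3) Extrapolate derivative terms back to the material points
    for p in range(x_p.size):
        nodes, N, dN = shape(x_p[p])
        v_p[p] += dt * N @ a_i[nodes]                  # point acceleration
        x_p[p] += dt * N @ v_i[nodes]                  # point motion
        sigma_p[p] += dt * E * (dN @ v_i[nodes])       # elastic stress update

    # 4) Reset: the grid arrays are simply rebuilt at the next step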
History of PIC/MPM
The PIC was originally conceived to solve problems in fluid dynamics, and developed by Harlow at Los Alamos National Laboratory in 1957. One of the first PIC codes was the Fluid-Implicit Particle (FLIP) program, which was created by Brackbill in 1986 and has been constantly in development ever since. Until the 1990s, the PIC method was used principally in fluid dynamics.
Motivated by the need for better simulating penetration problems in solid dynamics, Sulsky, Chen and Schreyer started in 1993 to reformulate the PIC and develop the MPM, with funding from Sandia National Laboratories. The original MPM was then further extended by Bardenhagen et al. to include frictional contact, which enabled the simulation of granular flow, and by Nairn to include explicit cracks and crack propagation (known as CRAMP).
Recently, an MPM implementation based on a micro-polar Cosserat continuum has been used to simulate high-shear granular flow, such as silo discharge. MPM's uses were further extended into Geotechnical engineering with the recent development of a quasi-static, implicit MPM solver which provides numerically stable analyses of large-deformation problems in Soil mechanics.
Annual workshops on the use of MPM are held at various locations in the United States. The Fifth MPM Workshop was held at Oregon State University, in Corvallis, OR, on April 2 and 3, 2009.
Applications of PIC/MPM
The uses of the PIC or MPM method can be divided into two broad categories: firstly, there are many applications involving fluid dynamics, plasma physics, magnetohydrodynamics, and multiphase applications. The second category of applications comprises problems in solid mechanics.
Fluid dynamics and multiphase simulations
The PIC method has been used to simulate a wide range of fluid-solid interactions, including sea ice dynamics, penetration of biological soft tissues, fragmentation of gas-filled canisters, dispersion of atmospheric pollutants, multiscale simulations coupling molecular dynamics with MPM, and fluid-membrane interactions. In addition, the PIC-based FLIP code has been applied in magnetohydrodynamics and plasma processing tools, and simulations in astrophysics and free-surface flow.
As a result of a joint effort between UCLA's mathematics department and Walt Disney Animation Studios, MPM was successfully used to simulate snow in the 2013 animated film Frozen.
Solid mechanics
MPM has also been used extensively in solid mechanics, to simulate impact, penetration, collision and rebound, as well as crack propagation. MPM has also become a widely used method within the field of soil mechanics: it has been used to simulate granular flow, quickness test of sensitive clays, landslides, silo discharge, pile driving, fall-cone test, bucket filling, and material failure; and to model soil stress distribution, compaction, and hardening. It is now being used in wood mechanics problems such as simulations of transverse compression on the cellular level including cell wall contact. The work also received the George Marra Award for paper of the year from the Society of Wood Science and Technology.
Classification of PIC/MPM codes
MPM in the context of numerical methods
One subset of numerical methods are Meshfree methods, which are defined as methods for which "a predefined mesh is not necessary, at least in field variable interpolation". Ideally, a meshfree method does not make use of a mesh "throughout the process of solving the problem governed by partial differential equations, on a given arbitrary domain, subject to all kinds of boundary conditions," although existing methods are not ideal and fail in at least one of these respects. Meshless methods, which are also sometimes called particle methods, share a "common feature that the history of state variables is traced at points (particles) which are not connected with any element mesh, the distortion of which is a source of numerical difficulties." As can be seen by these varying interpretations, some scientists consider MPM to be a meshless method, while others do not. All agree, however, that MPM is a particle method.
The Arbitrary Lagrangian Eulerian (ALE) methods form another subset of numerical methods which includes MPM. Purely Lagrangian methods employ a framework in which a space is discretised into initial subvolumes, whose flowpaths are then charted over time. Purely Eulerian methods, on the other hand, employ a framework in which the motion of material is described relative to a mesh that remains fixed in space throughout the calculation. As the name indicates, ALE methods combine Lagrangian and Eulerian frames of reference.
Subclassification of MPM/PIC
PIC methods may be based on either the strong form collocation or a weak form discretisation of the underlying partial differential equation (PDE). Those based on the strong form are properly referred to as finite-volume PIC methods. Those based on the weak form discretisation of PDEs may be called either PIC or MPM.
MPM solvers can model problems in one, two, or three spatial dimensions, and can also model axisymmetric problems. MPM can be implemented to solve either quasi-static or dynamic equations of motion, depending on the type of problem that is to be modeled. Several versions of MPM include the Generalized Interpolation Material Point Method (GIMP), the Convected Particle Domain Interpolation Method, and the Convected Particle Least Squares Interpolation Method.
The time-integration used for MPM may be either explicit or implicit. The advantage to implicit integration is guaranteed stability, even for large timesteps. On the other hand, explicit integration runs much faster and is easier to implement.
Advantages
Compared to FEM
Unlike FEM, MPM does not require periodical remeshing steps and remapping of state variables, and is therefore better suited to the modeling of large material deformations. In MPM, particles and not the mesh points store all the information on the state of the calculation. Therefore, no numerical error results from the mesh returning to its original position after each calculation cycle, and no remeshing algorithm is required.
The particle basis of MPM allows it to treat crack propagation and other discontinuities better than FEM, which is known to impose the mesh orientation on crack propagation in a material. Also, particle methods are better at handling history-dependent constitutive models.
Compared to pure particle methods
Because in MPM nodes remain fixed on a regular grid, the calculation of gradients is trivial.
In simulations with two or more phases it is rather easy to detect contact between entities, as particles can interact via the grid with other particles in the same body, with other solid bodies, and with fluids.
Disadvantages of MPM
MPM is more expensive in terms of storage than other methods, as MPM makes use of mesh as well as particle data. MPM is more computationally expensive than FEM, as the grid must be reset at the end of each MPM calculation step and reinitialised at the beginning of the following step. Spurious oscillation may occur as particles cross the boundaries of the mesh in MPM, although this effect can be minimized by using generalized interpolation methods (GIMP). In MPM as in FEM, the size and orientation of the mesh can impact the results of a calculation: for example, in MPM, strain localisation is known to be particularly sensitive to mesh refinement.
One stability problem in MPM that does not occur in FEM is the cell-crossing errors and null-space errors because the number of integration points (material points) does not remain constant in a cell.
Notes
External links
Center for Simulation of Accidental Fires and Explosions – MPM code available
NairnMPM – open source
MPM3D - open source (MPM3D-F90) and free trial version (MPM3D)
Taichi - Physically Based Computer Graphics Library – open source MPM code available
Anura3D open source – software for geotechnical problems and soil-water-structure interactions by Anura3D MPM Research Community
Numerical analysis
Numerical differential equations
Computational fluid dynamics
Computational mathematics
Simulation | Material point method | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,397 | [
"Computational fluid dynamics",
"Applied mathematics",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Fluid dynamics"
] |
13,431,657 | https://en.wikipedia.org/wiki/Siemens%20Hearing%20Instruments | Sivantos, Inc. (formerly Siemens Hearing Instruments) is the United States affiliate of Sivantos Group, which maintains a global headquarters in Singapore. Sivantos Group (formerly Siemens Audiology Group, a division of Siemens Healthcare) is one of the world's leading manufacturers of hearing aids. They serve hearing care professionals in more than 120 countries, offering hearing aids branded Siemens, Audio Service, Rexton, and A&M. Sivantos, Inc. is located in Piscataway, NJ, where approximately 500 employees work in manufacturing, research and development, sales, marketing, finance, and customer care.
Siemens Hearing Instruments changed its name in 2015 to Sivantos, Inc. when Sivantos Group was spun off from Siemens Audiology Solutions after Siemens AG sold the company to EQT VI and Santo Holding GmbH.
References
External links
Companies based in Middlesex County, New Jersey
Hearing aid manufacturers | Siemens Hearing Instruments | [
"Biology"
] | 188 | [
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
13,432,264 | https://en.wikipedia.org/wiki/Full%20spectral%20imaging | Full spectral imaging (FSI) is a form of imaging spectroscopy and is the successor to hyperspectral imaging. Full spectral imaging was developed to improve the capabilities of remote sensing including Earth remote sensing.
Data acquisition
Whereas hyperspectral imaging acquires data as many contiguous spectral bands, full spectral imaging acquires data as spectral curves. A major advantage of FSI over hyperspectral imaging is a significant reduction in data rate and volume. FSI extracts and saves only the information that is in the raw data. The information is contained in the shape of the spectral curves. The rate at which data is produced by an FSI system is proportional to the amount of information in the scene/image.
Applications
Full spectral imaging, along with empirical reflectance retrieval and autonomous remote sensing are the components of the new systems for remote sensing and the successor to the Landsat series of satellites of the Landsat program.
References
Spectroscopy
Remote sensing | Full spectral imaging | [
"Physics",
"Chemistry",
"Astronomy"
] | 189 | [
"Spectroscopy stubs",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Astronomy stubs",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
13,432,297 | https://en.wikipedia.org/wiki/EAZA%20Ex-situ%20Programme | The EAZA Ex-situ Programme (EEP) is a population management and conservation programme by European Association of Zoos and Aquaria (EAZA) for wild animals living in European zoos. The programme was formerly known as the European Endangered Species Programme.
Each EEP has a coordinator who is assisted by a species committee. The coordinator collects information on the status of all the animals kept in EAZA zoos and aquariums of the species for which he or she is responsible, produces a studbook, carries out demographic and genetic analyses, produces a plan for the future management of the species and provides recommendations to participating institutions. Together with the EAZA Species Committee, recommendations are made each year about relocating and breeding animals, and the conditions of such a move (breeding loan, exchange, term free disposition, etc.).
Even though EEP participation is mainly reserved for EAZA zoos, it is possible for non-EAZA collections to be included in these programmes. There are generally however more restrictions on such zoos (which may go as far as only holding non-breeding animals for educational purposes), and on the number of programmes they may participate in.
Necessity of international cooperation in breeding programmes
For zoo visitors to have the opportunity to see how wild animals look, live, and behave, zoos must ensure that truly wild animals, with all of their natural characteristics, are presented. Zoo animals are vulnerable to three very serious breeding problems inherent to small, artificial populations: inbreeding depression, loss of genetic variability, and accumulation of deleterious mutations. These problems can easily result in loss of original wild traits, and in the expression of heritable abnormalities. If what was once a pure, wild population of animals deteriorates through generations of uncontrolled breeding into inferior or partially domesticated stock, then the animals are no longer suitable for any conservation effort, and the zoos have failed to perform an important educational task.
The effects of breeding captive populations of wild animals over periods of many generations have been well studied. Based on these studies and genetic theory, guidelines for breeding such small populations have been developed. Following such guidelines should sharply reduce possibilities of breeding problems and concurrently should maximize the number of generations in which the original founding diversity can be maintained. Guidelines for captive populations follow some basic principles: begin with as many "founders" as possible (preferably at least 20–30 animals), increase the number of individuals within the population rapidly, give all individuals from the founder population "equal genetic representation", and avoid inbreeding. The application of these guidelines, and many others tailored to specific populations, results in strictly controlled breeding programmes in which nothing is left to chance. Only in this manner can healthy and truly wild populations be maintained over a period of one or two hundred years. Such strict control is entirely dependent on cooperation among zoos that hold individuals of the species, as single zoos generally do not have the facilities to maintain a population of adequate size independently.
The species in the programme
As of 2022, over 400 animal species are represented in the EAZA Ex-situ Programme.
Examples of the (sub)species
Sumatran tiger (Panthera tigris sumatrae)
One species which has been handled by an EEP is the Sumatran tiger, of which only a few hundred individuals remain in the wild, with only about 7% of their habitat remaining. They formerly lived throughout Sumatra, but nowadays most of them can only be found in the mountain regions of the Bukit Barisan volcanic area. This species is currently threatened by habitat destruction and by poaching, for trade of its parts in traditional oriental medicine or as trophies. The Tiger EEP has made contributions to the conservation of Amur and Sumatran tigers in the wild, via fundraising for wild tiger conservation projects, raising awareness and providing educational opportunities, and assisting with relevant research and training. For example, in November 2011, the Sumatran tiger Kirana delivered three cubs at Chester Zoo under the EEP, which attempts to coordinate breeding between zoos and maintain genetic diversity.
Gorilla (Gorilla gorilla gorilla)
The Gorilla EEP is one of the most intensively managed and oldest breeding programmes in European zoos. The Gorilla EEP was started in 1987 and was run by the Frankfurt Zoological Garden, who continue to maintain the Gorilla Studbook. In the past decade, some major improvements have been achieved in the management of the EEP for the western lowland gorilla Gorilla gorilla gorilla. Neonatal mortality and hand‐rearing rates have decreased; transfers in most cases proved to be successful: almost all gorillas were integrated into their new groups and most animals introduced to a breeding group had their first offspring within two years. The results show that the current management approach is successful, and that the population is sustainable and has good genetic health.
Challenges
The largest problem encountered in the functioning of the EEP is undoubtedly the actual execution of breeding management recommendations: it is often difficult to develop policies applicable to an entire group of zoos (varying from 10 to well over 50 depending on the species programme) when these are spread throughout several countries with different languages and laws, and with dissimilar political and economic backgrounds. Incongruities in laws alone can sometimes make the exchange of specimens for breeding purposes by two closely situated zoos a formidable task if a border happens to lie between them. Yet successes have been achieved: the growth of the EEP has been considerable since its initiation in 1985.
Countries participating in the EEP
As of 2022, (insert number here) EAZA and non-EAZA/WAZA and non-WAZA countries participate in the EEP program, including:
Worldwide counterparts
The EEP is one of a worldwide assembly of such regional breeding programs in zoos for threatened species. The North American counterpart is the Species Survival Plan, and Australian, Japanese, Indian, and Chinese zoos also have similar programs. Combined, there are now many hundred zoos worldwide involved in regional breeding programs.
See also
European Association of Zoos and Aquaria
Ex situ conservation
References
External links
EEP; Breeding Programmes; European Association of Zoos and Aquaria (EAZA)
Animal breeding organizations
Endangered biota of Europe
Nature conservation organisations based in Europe
Endangered species
Environment of Europe | EAZA Ex-situ Programme | [
"Biology"
] | 1,265 | [
"Biota by conservation status",
"Endangered species"
] |
13,432,475 | https://en.wikipedia.org/wiki/Intonaco | Intonaco is an Italian term for the final, very thin layer of plaster on which a fresco is painted. The plaster is painted while still wet, in order to allow the pigment to penetrate into the intonaco itself. An earlier layer, called arriccio, is laid slightly coarsely to provide a key for the intonaco, and must be allowed to dry, usually for some days, before the final very thin layer is applied and painted on. In Italian the term intonaco is also used much more generally for normal plaster or mortar wall-coatings in buildings.
Intonaco is traditionally a mixture of sand (with granular dimensions less than two millimeters) and a binding substance.
Types of intonaco
Different types of intonaco are classified based on the binding material used:
Intonaco based on lime, where the only binding substance is hydrated lime
Intonaco lime/cement, where the binding element is a mixture of hydrated lime with Portland cement, with a majority of lime
Intonaco cement/lime, where the binding element is a mixture of hydrated lime, and Portland cement, with a majority of Portland cement
Intonaco with a plaster base, where the binding element is exclusively plaster
The sand utilized in the intonaco can be limestone or silicate, taken from a natural source such as a river or from sand that is pulverized.
Types of stabilizers
Intonaco can be stabilized using:
Lime
Plaster
Calcium-sulfate based plaster
Terracotta-based cement
References
Painting techniques
Plastering | Intonaco | [
"Chemistry",
"Engineering"
] | 317 | [
"Building engineering",
"Coatings",
"Plastering"
] |
13,432,608 | https://en.wikipedia.org/wiki/Geographical%20centre | In geography, the centroid of the two-dimensional shape of a region of the Earth's surface (projected radially to sea level or onto a geoid surface) is known as its geographic centre or geographical centre or (less commonly) gravitational centre. Informally, determining the centroid is often described as finding the point upon which the shape (cut from a uniform plane) would balance. This method is also sometimes described as the "gravitational method".
One example of a refined approach using an azimuthal equidistant projection, also potentially incorporating an iterative process, was described by Peter A. Rogerson in 2015. The abstract says "the new method minimizes the sum of squared great circle distances from all points in the region to the center". However, as that property is also true of a centroid (of area), this aspect is effectively just different terminology for determining the centroid.
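One simple, commonly used approximation (not Rogerson's method itself) is to convert sample points covering the region to 3-D unit vectors, average them, and project the mean back onto the sphere; this minimizes the sum of squared chord distances and can serve as the starting point for the iterative great-circle refinement described above. A minimal Python sketch, with made-up sample coordinates:

    import math

    def vector_mean_center(points_deg):
        """First approximation to a geographic centre: average the 3-D
        unit vectors of (lat, lon) samples covering the region and
        project the mean back onto the sphere."""
        x = y = z = 0.0
        for lat, lon in points_deg:
            la, lo = math.radians(lat), math.radians(lon)
            x += math.cos(la) * math.cos(lo)
            y += math.cos(la) * math.sin(lo)
            z += math.sin(la)
        n = len(points_deg)
        x, y, z = x / n, y / n, z / n
        lat = math.degrees(math.atan2(z, math.hypot(x, y)))
        lon = math.degrees(math.atan2(y, x))
        return lat, lon

    # In practice a dense grid of points over the region would be used
    print(vector_mean_center([(45, 5), (46, 7), (44, 6), (45.5, 6.5)]))
    # ≈ (45.1, 6.1)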
In 2019, New Zealand's GNS Science also used an iterative approach (and a variety of different projections) when determining a centre position for New Zealand's Extended Continental Shelf.
However, other methods have also been proposed or used to determine the centres of various countries and regions. These include:
centroid of volume (incorporating elevations into calculations), instead of the more usual centroid of area as described above.
centre point of a bounding box completely enclosing the area. While relatively easy to determine, a centre point calculated using this method will generally also vary (relative to the shape of the landmass or region) depending on the orientation of the bounding box to the area under consideration. In this sense it is not a robust method.
finding the longitude that divides the region into two equal area parts to the east and west, and then similarly the latitude that divides the region into two equal area parts to the north and south. Like the bounding box approach described above this method would not generally locate precisely the same point if the same shaped region was oriented differently.
As noted in a United States Geological Survey document, "There is no generally accepted definition of geographic center, and no completely satisfactory method for determining it."
In general, there is room for debate around various details such as whether or not to include islands and similarly, large bodies of water, how best to handle the curvature of the Earth (a more significant factor with larger regions) and closely related to that issue, which map projection to use.
Notable geographical centres
Geographical centre of Earth
Axis mundi
Omphalos of Delphi
Geographic centres in Africa
Geographic Centre of Uganda (Amolatar Monument)
Geographic centres in Asia
Geographical midpoints of Asia, in China or Russia
Geographical centre of India
Zero Mile Stone (Nagpur)
Geographic center of Iran
Geographic centre of Sri Lanka
Geographical centre of the Korean Peninsula
Geographical centre of the Philippines
Geographical centre of the Russian Federation
Geographical centre of the Soviet Union
Geographic center of Taiwan
Geographic centres in Europe
Geographical midpoint of Europe
Geographical centre of Austria
Geographic center of Belarus
Geographical centre of Belgium (Nil-Saint-Vincent-Saint-Martin)
Geographical centre of Estonia (Paenasti)
Geographical centre of Germany (Niederdorla)
Central Germany (geography)
Geographical centre of Hungary (Pusztavacs)
Geographical centre of Ireland
Geographical centre of Lithuania (Ruoščiai)
Geographical centre of Norway
Geographical centre of Poland
Geographical center of Romania (Făgăraș)
Geographical centre of the Russian Federation (Lake Vivi)
Geographical centre of Serbia (Drača)
Geographical centre of Slovenia
Geographical centre of the Soviet Union
Geographical center of Spain (Cerro de los Ángeles)
Geographical center of Sweden
Geographical centre of Switzerland
Centre points of the United Kingdom
Geographical centre of Great Britain (Brennand Farm)
Geographic centre of England
Geographical centre of Scotland
Geographic centre of Wales (Cwmystwyth)
Geographic centres in North America
Geographic center of North America
Geographic centre of Canada
Geographic center of the United States
List of geographic centers of the United States
Geographic centres in Oceania
Centre points of Australia
Geographic centre of New Zealand
Geographic centres in South America
Geographical Center of South America
Geographical Center of Colombia
See also
Extremes on Earth
Pole of inaccessibility
Center of population
References
External links | Geographical centre | [
"Physics",
"Mathematics"
] | 842 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
13,433,185 | https://en.wikipedia.org/wiki/Burst%20dimming | Burst dimming is a method of controlling the dimming of cold cathode fluorescent lamps (CCFLs) and LEDs by using pulse-width modulation (PWM) at approximately 100–300 Hz, which is intended to be above the noticeable flicker limit of the human eye.
This technique is sometimes used with TFT displays to control backlighting. An alternative dimming method is to control lamp current.
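In burst dimming the perceived brightness is set by the PWM duty cycle. A minimal Python sketch of the timing arithmetic (the 200 Hz default is simply an example from the range above):

    def burst_dimming_times(brightness, frequency_hz=200.0):
        """Return (on_s, off_s) for one PWM cycle; perceived brightness
        under burst dimming is the duty cycle, on-time / period."""
        if not 0.0 <= brightness <= 1.0:
            raise ValueError("brightness must be in [0, 1]")
        period = 1.0 / frequency_hz
        return brightness * period, (1.0 - brightness) * period

    print(burst_dimming_times(0.3))  # (0.0015, 0.0035): 1.5 ms on, 3.5 ms off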
References
Display technology
Lighting
Liquid crystal displays | Burst dimming | [
"Engineering"
] | 91 | [
"Electronic engineering",
"Display technology"
] |
13,433,266 | https://en.wikipedia.org/wiki/Gadolinium%28III%29%20nitrate | Gadolinium(III) nitrate is an inorganic compound of gadolinium. This salt is used as a water-soluble neutron poison in nuclear reactors. Gadolinium nitrate, like all nitrate salts, is an oxidizing agent.
The most common form of this substance is the hexahydrate Gd(NO3)3·6H2O, with molecular weight 451.36 g/mol and CAS number 19598-90-4.
Use
Gadolinium nitrate was used at the Savannah River Site heavy water nuclear reactors and had to be separated from the heavy water for storage or reuse.
The Canadian CANDU reactor, a pressurized heavy water reactor, also uses gadolinium nitrate as a water-soluble neutron poison in heavy water.
Gadolinium nitrate is also used as a raw material in the production of other gadolinium compounds, for production of specialty glasses and ceramics and as a phosphor.
References
Gadolinium compounds
Nitrates
Neutron poisons | Gadolinium(III) nitrate | [
"Chemistry"
] | 211 | [
"Oxidizing agents",
"Nitrates",
"Salts"
] |
13,433,465 | https://en.wikipedia.org/wiki/Handango | Handango was an online store selling mobile apps for personal digital assistants and smartphones headquartered in Irving, Texas.
History
Handango InHand was founded in 1999 by Randy Eisenman. It was an app store for finding, installing, and buying software for mobile devices. It was made available in 2003 for Symbian UIQ users, in 2004 for Windows Mobile and Palm OS, in 2005 for BlackBerry, and in 2006 for Symbian S60.
Application downloads and purchases were completed directly on the device. Descriptions, ratings and screenshots were available for all applications.
In 2010, PocketGear announced its acquisition of Handango. In 2011, PocketGear rebranded the company as Appia. Appia shifted focus to on-device OEM branded store apps, shifting its model to become a white-label app marketplace platform. Consequently, both Handango's and PocketGear's websites were shut down in 2013.
See also
List of digital distribution platforms for mobile devices
Amazon Appstore
App Store (iOS/iPadOS)
BlackBerry World
References
External links
Official website
Personal digital assistant software
Pocket PC software
Symbian software
Windows Mobile Standard software
Mobile software distribution platforms | Handango | [
"Technology"
] | 235 | [
"Mobile content",
"Mobile software distribution platforms"
] |
13,433,476 | https://en.wikipedia.org/wiki/Edholm%27s%20law | Edholm's law, proposed by and named after Phil Edholm, refers to the observation that the three categories of telecommunication, namely wireless (mobile), nomadic (wireless without mobility) and wired networks (fixed), are in lockstep and gradually converging. Edholm's law also holds that data rates for these telecommunications categories increase on similar exponential curves, with the slower rates trailing the faster ones by a predictable time lag. Edholm's law predicts that the bandwidth and data rates double every 18 months, which has proven to be true since the 1970s. The trend is evident in the cases of Internet, cellular (mobile), wireless LAN and wireless personal area networks.
Concept
Edholm's law was proposed by Phil Edholm of Nortel Networks. He observed that telecommunication bandwidth (including Internet access bandwidth) was doubling every 18 months, since the late 1970s through to the early 2000s. This is similar to Moore's law, which predicts an exponential rate of growth for transistor counts. He also found that there was a gradual convergence between wired (e.g. Ethernet), nomadic (e.g. modem and Wi-Fi) and wireless networks (e.g. cellular networks). The name "Edholm's law" was coined by his colleague, John H. Yoakum, who presented it at a 2004 Internet telephony press conference.
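The doubling rule is easy to state as a formula: after t years, a rate grows by a factor of 2^(12t/18). A small Python sketch (the starting rate is chosen arbitrarily for illustration):

    def edholm_projection(rate_bps, years, doubling_months=18):
        """Project a data rate forward under Edholm's observation that
        bandwidth doubles roughly every 18 months."""
        return rate_bps * 2 ** (12 * years / doubling_months)

    # e.g. a 56 kbit/s link after 15 years of 18-month doublings
    print(edholm_projection(56e3, 15) / 1e6, "Mbit/s")  # ≈ 57.3 Mbit/s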
Slower communications channels like cellphones and radio modems were predicted to eclipse the capacity of early Ethernet, due to developments in the standards known as UMTS and MIMO, which boosted bandwidth by maximizing antenna usage. Extrapolating forward indicates a convergence between the rates of nomadic and wireless technologies around 2030. In addition, wireless technology could end wireline communication if the cost of the latter's infrastructure remains high.
Underlying factors
In 2009, Renuka P. Jindal observed the bandwidths of online communication networks rising from bits per second to terabits per second, doubling every 18 months, as predicted by Edholm's law. Jindal identified the following three major underlying factors that have enabled the exponential growth of communication bandwidth.
The MOSFET was invented at Bell Labs between 1955 and 1960, after Frosch and Derick discovered and used surface passivation by silicon dioxide to create the first planar transistors, the first in which drain and source were adjacent at the same surface. Advances in MOSFET technology (MOS technology) have been the most important contributing factor in the rapid rise of bandwidth in telecommunications networks. Continuous MOSFET scaling, along with various advances in MOS technology, has enabled both Moore's law (transistor counts in integrated circuit chips doubling every two years) and Edholm's law (communication bandwidth doubling every 18 months).
Laser lightwave systems: The laser was demonstrated by Charles H. Townes and Arthur Leonard Schawlow at Bell Labs in 1960. Laser technology was later adopted in the design of integrated electronics using MOS technology, leading to the development of lightwave systems around 1980. This has led to exponential growth of bandwidth since the early 1980s.
Information theory: Information theory, as enunciated by Claude Shannon at Bell Labs in 1948, provided a theoretical foundation to understand the trade-offs between signal-to-noise ratio, bandwidth, and error-free transmission in the presence of noise, in telecommunications technology. In the early 1980s, Renuka Jindal at Bell Labs used information theory to study the noise behaviour of MOS devices, improving their noise performance and resolving issues that limited their receiver sensitivity and data rates. This led to a significant improvement in the noise performance of MOS technology, and contributed to the wide adoption of MOS technology in lightwave and then wireless terminal applications.
The bandwidths of wireless networks have been increasing at a faster pace compared to wired networks. This is due to advances in MOSFET wireless technology enabling the development and growth of digital wireless networks. The wide adoption of RF CMOS (radio frequency CMOS), power MOSFET and LDMOS (lateral diffused MOS) devices led to the development and proliferation of digital wireless networks by the 1990s, with further advances in MOSFET technology leading to rapidly increasing bandwidth since the 2000s. Most of the essential elements of wireless networks are built from MOSFETs, including the mobile transceivers, base station modules, routers, RF power amplifiers, telecommunication circuits, RF circuits, and radio transceivers, in networks such as 2G, 3G, 4G, and 5G.
In recent years, another enabling factor in the growth of wireless communication networks has been interference alignment, which was discovered by Syed Ali Jafar at the University of California, Irvine. He established it as a general principle, along with Viveck R. Cadambe, in 2008. They introduced "a mechanism to align an arbitrarily large number of interferers, leading to the surprising conclusion that wireless networks are not essentially interference limited." This led to the adoption of interference alignment in the design of wireless networks. According to New York University senior researcher Dr. Paul Horn, this "revolutionized our understanding of the capacity limits of wireless networks" and "demonstrated the astounding result that each user in a wireless network can access half of the spectrum without interference from other users, regardless of how many users are sharing the spectrum."
See also
History of the Internet
History of telecommunication
Internet access
Internet traffic
Moore's law
Telecommunications
Nielsen's law
References
Bibliography
Adages
Computing culture
Computer architecture statements
Computer industry
MOSFETs | Edholm's law | [
"Technology"
] | 1,134 | [
"Computer industry",
"Computing culture",
"Computing and society"
] |
8,748,291 | https://en.wikipedia.org/wiki/Expansion%20ratio | The expansion ratio of a liquefied and cryogenic substance is the volume of a given amount of that substance in liquid form compared to the volume of the same amount of substance in gaseous form, at room temperature and normal atmospheric pressure.
If a sufficient amount of liquid is vaporized within a closed container, it produces pressures that can rupture the pressure vessel. Hence the use of pressure relief valves and vent valves are important.
The expansion ratio of liquefied and cryogenic substances from the boiling point to ambient conditions is (an ideal-gas estimate is sketched after this list):
nitrogen – 1 to 696
liquid helium – 1 to 745
argon – 1 to 842
liquid hydrogen – 1 to 850
liquid oxygen – 1 to 860
neon – 1 to 1445 (the highest expansion ratio)
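These ratios can be estimated from the liquid density and the ideal gas law, as sketched below in Python; the liquid-nitrogen density used is an approximate literature value, and real-gas effects are ignored.

    R = 8.314  # gas constant, J/(mol·K)

    def expansion_ratio(liquid_density, molar_mass, temp_k=293.15, pressure=101325.0):
        """Estimate the liquid-to-gas expansion ratio: moles per unit
        volume of liquid, times the ideal-gas molar volume at ambient
        conditions. liquid_density in kg/m^3, molar_mass in kg/mol."""
        molar_volume = R * temp_k / pressure        # m^3/mol at ambient
        moles_per_m3_liquid = liquid_density / molar_mass
        return moles_per_m3_liquid * molar_volume

    # Liquid nitrogen: ~806.6 kg/m^3 at its boiling point, 0.028 kg/mol
    print(round(expansion_ratio(806.6, 0.028014)))  # ≈ 693, close to 1:696 above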
See also
Liquid-to-gas ratio
Boiling liquid expanding vapor explosion
Thermal expansion
References
External links
cryogenic-gas-hazards
Cryogenics | Expansion ratio | [
"Physics",
"Chemistry"
] | 181 | [
"Physical chemistry stubs",
"Applied and interdisciplinary physics",
"Cryogenics"
] |
8,749,147 | https://en.wikipedia.org/wiki/Galerina%20vittiformis | Galerina vittiformis, also called the hairy leg bell, is a species of agaric fungus in the family Hymenogastraceae, and the type species of the genus Galerina. It is widely distributed in temperate regions, where it typically grows in moist locations, often among mosses. The fungus has been shown to bioaccumulate various heavy metals from contaminated soil.
Morphology
Galerina vittiformis has a honey-coloured, striped, hygrophanous cap that is 0 wide. Its shape is bluntly conical, becoming broadly convex and even flat with age, often with a prominent umbo. The gills of Galerina vittiformis are adnate and tawny to cream coloured. Its spore print is reddish brown.
The flesh of Galerina vittiformis is thin and fragile. Its stem is equal and pale yellow to chestnut brown, and is initially slightly downy. Its dimensions are 3-6cm x 0.07-0.2cm, and it has no veil.
Microscopically, its spores measure 10–12.3 x 5–6.5 microns and are egg-shaped. Its plage is sharply defined, and the spores have an apical callus. Each basidium has 2 spores, and measures 20–24 x 7–8 microns. They are colorless in KOH. The pleurocystidia and cheilocystidia measure 56–74 x 10–16 microns, and are abundant to scattered. They are thin, and fusoid-ventricose with an acute or rounded tip. They are also colorless in KOH.
References
Fungi described in 1838
Hymenogastraceae
Taxa named by Elias Magnus Fries
Fungus species | Galerina vittiformis | [
"Biology"
] | 356 | [
"Fungi",
"Fungus species"
] |
8,749,153 | https://en.wikipedia.org/wiki/Galerina%20marginata | Galerina marginata, known colloquially as funeral bell, deadly skullcap, autumn skullcap or deadly galerina, is a species of extremely poisonous mushroom-forming fungus in the family Hymenogastraceae of the order Agaricales. It contains the same deadly amatoxins found in the death cap (Amanita phalloides). Ingestion in toxic amounts causes severe liver damage with vomiting, diarrhea, hypothermia, and eventual death if not treated rapidly. About ten poisonings have been attributed to the species now grouped as G. marginata over the last century.
G. marginata is widespread in the Northern Hemisphere, including Europe, North America, and Asia, and has also been found in Australia. It is a wood-rotting fungus that grows predominantly on decaying conifer wood. The fruit bodies of the mushroom have brown to yellow-brown caps that fade in color when drying. The gills are brownish and give a rusty spore print. A well-defined membranous ring is typically seen on the stems of young specimens but often disappears with age. In older fruit bodies, the caps are flatter and the gills and stems browner. The species is a classic "little brown mushroom" – a catchall category that includes all small to medium-sized, hard-to-identify brownish mushrooms, and may be easily confused with several edible species.
Before 2001, the species G. autumnalis, G. oregonensis, G. unicolor, and G. venenata were thought to be distinct from G. marginata due to differences in habitat and the viscidity of their caps, but phylogenetic analysis showed that they are all the same species.
Taxonomy and naming
What is now recognized as a single morphologically variable taxon named Galerina marginata was once split into five distinct species. Norwegian mycologist Gro Gulden and colleagues concluded that all five represented the same species after comparing the DNA sequences of the internal transcribed spacer region of ribosomal DNA for various North American and European specimens in Galerina section Naucoriopsis. The results showed no genetic differences between G. marginata and G. autumnalis, G. oregonensis, G. unicolor, and G. venenata, thus reducing all these names to synonymy. The oldest of these names are Agaricus marginatus, described by August Batsch in 1789, and Agaricus unicolor, described by Martin Vahl in 1792. Agaricus autumnalis was described by Charles Horton Peck in 1873, and later moved to Galerina by A. H. Smith and Rolf Singer in their 1962 worldwide monograph on that genus. In the same publication they also introduced the G. autumnalis varieties robusta and angusticystis. Another of the synonymous species, G. oregonensis, was first described in that monograph. Galerina venenata was first identified as a species by Smith in 1953. Since Agaricus marginatus is the oldest validly published name, it has priority according to the rules of botanical nomenclature.
Another species analysed in Gulden's 2001 study, Galerina pseudomycenopsis, also could not be distinguished from G. marginata based on ribosomal DNA sequences and restriction fragment length polymorphism analyses. Because of differences in ecology, fruit body color and spore size combined with inadequate sampling, the authors preferred to maintain G. pseudomycenopsis as a distinct species. A 2005 study again failed to separate the two species using molecular methods, but reported that the incompatibility demonstrated in mating experiments suggests that the species are distinct.
In the fourth edition (1986) of Singer's comprehensive classification of the Agaricales, G. marginata is the type species of Galerina section Naucoriopsis, a subdivision first defined by French mycologist Robert Kühner in 1935. It includes small brown-spored mushrooms characterized by cap edges initially curved inwards, fruit bodies resembling Pholiota or Naucoria and thin-walled, obtuse or acute-ended pleurocystidia that are not rounded at the top. Within this section, G. autumnalis and G. oregonensis are in stirps Autumnalis, while G. unicolor, G. marginata, and G. venenata are in stirps Marginata. Autumnalis species are characterized by having a viscid to lubricous cap surface while Marginata species lack a gelatinous cap—the surface is moist, "fatty-shining", or matte when wet. However, as Gulden explains, this characteristic is highly variable: "Viscidity is a notoriously difficult character to assess because it varies with the age of the fruitbody and the weather conditions during its development. Varying degrees of viscidity tend to be described differently and applied inconsistently by different persons applying terms such as lubricous, fatty, fatty-shiny, sticky, viscid, glutinous, or (somewhat) slimy."
The specific epithet marginata is derived from the Latin word for "margin" or "edge", while autumnalis means "of the autumn". Common names of the species include the "marginate Pholiota" (resulting from its synonymy with Pholiota marginata), "funeral bell", "deadly skullcap", and "deadly Galerina". G. autumnalis was known as the "fall Galerina" or the "autumnal Galerina", while G. venenata was the "deadly lawn Galerina".
Description
The cap ranges from in diameter. It starts convex, sometimes broadly conical, and has edges (margins) that are curved in against the gills. As the cap grows and expands, it becomes broadly convex and then flattened, sometimes developing a central elevation, or umbo, which may project prominently from the cap surface.
Based on the collective descriptions of the five taxa now considered to be G. marginata, the texture of the surface shows significant variation. Smith and Singer give the following descriptions of surface texture: from "viscid" (G. autumnalis), to "shining and viscid to lubricous when moist" (G. oregonensis), to "shining, lubricous to subviscid (particles of dirt adhere to surface) or merely moist, with a fatty appearance although not distinctly viscid", to "moist but not viscid" (G. marginata). The cap surface remains smooth and changes colors with humidity (hygrophanous), pale to dark ochraceous tawny over the disc and yellow-ochraceous on the margin (at least when young), but fading to dull tan or darker when dry. When moist, the cap is somewhat transparent so that the outlines of the gills may be seen as striations. The flesh is pale brownish ochraceous to nearly white, thin and pliant, with an odor and taste varying from very slightly to strongly like flour (farinaceous).
The gills are typically narrow and crowded together, with a broadly adnate to nearly decurrent attachment to the stem and convex edges. They are a pallid brown when young, becoming tawny at maturity. Some short gills, called lamellulae, do not extend entirely from the cap edge to the stem, and are intercalated among the longer gills. The stem ranges from long, 3–9 mm thick at the apex, and stays equal in width throughout or is slightly enlarged downward. Initially solid, it becomes hollow from the bottom up as it matures. The membranous ring is located on the upper half of the stem near the cap, but may be sloughed off and missing in older specimens. Its color is initially whitish or light brown, but usually appears a darker rusty-brown in mature specimens that have dropped spores on it. Above the level of the ring, the stem surface has a very fine whitish powder and is paler than the cap; below the ring it is brown down to the reddish-brown to bistre base. The lower portion of the stem has a thin coating of pallid fibrils which eventually disappear and do not leave any scales. The spore print is rusty-brown.
Microscopic characteristics
The spores measure 8–10 by 5–6 μm, are slightly inequilateral in profile view, and egg-shaped in face view. Like all Galerina species, the spores have a plage, which has been described as resembling "a slightly wrinkled plastic shrink-wrap covering over the distal end of the spore". The spore surface is warty and full of wrinkles, with a smooth depression where the spore was once attached via the sterigma to the basidium (the spore-bearing cell). When in potassium hydroxide (KOH) solution, the spores appear tawny or darker rusty-brown, with an apical callus. The basidia are four-spored (rarely with a very few two-spored ones), roughly cylindrical when producing spores, but with a slightly tapered base, and measure 21–29 by 5–8.4 μm.
Cystidia are cells of the fertile hymenium that do not produce spores. These sterile cells, which are structurally distinct from the basidia, are further classified according to their position. In G. marginata, the pleurocystidia (cystidia from the gill sides) are 46–60 by 9–12 μm, thin-walled, and hyaline in KOH, fusoid to ventricose in shape with wavy necks and blunt to subacute apices (3–6 μm diameter near apex). The cheilocystidia (cystidia on the gill edges) are similar in shape but often smaller than the pleurocystidia, abundant, with no club-shaped or abruptly tapering (mucronate) cells present. Clamp connections are present in the hyphae.
Similar species
The deadly Galerina marginata may be mistaken for a few edible mushroom species such as Armillaria mellea and Kuehneromyces mutabilis. The latter produces fruit bodies roughly similar in appearance and also grows on wood, but may be distinguished from G. marginata by its stems bearing scales up to the level of the ring, and by growing in large clusters (which is not usual for G. marginata). However, the possibility of confusion is such that this good edible species is "not recommended to those lacking considerable experience in the identification of higher fungi." Furthermore, microscopic examination shows smooth spores in Pholiota. One source notes "Often, G. marginata bears an astonishing resemblance to this fungus, and it requires careful and acute powers of observation to distinguish the poisonous one from the edible one." K. mutabilis may also be distinguished by the presence of scales on the stem below the ring, the larger cap, which may reach a diameter of , and the spicy or aromatic odor of the flesh. The related K. vernalis is a rare species and even more similar in appearance to G. marginata. Examination of microscopic characteristics, which reveals smooth spores with a germ pore, is typically required to reliably distinguish between the two.
Another potential edible lookalike is the "velvet shank", Flammulina velutipes. This species has gills that are white to pale yellow, a white spore print, and spores that are elliptical, smooth, and measure 6.5–9 by 2.5–4 μm. A rough resemblance has also been noted with the edible Hypholoma capnoides, the 'magic' mushroom Psilocybe subaeruginosa, as well as Conocybe filaris, another poisonous amatoxin-containing species.
Habitat and distribution
Galerina marginata is a saprobic fungus, obtaining nutrients by breaking down organic matter. It is known to have most of the major classes of secreted enzymes that dissolve plant cell wall polysaccharides, and has been used as a model saprobe in recent studies of ectomycorrhizal fungi. Because of its variety of enzymes capable of breaking down wood and other lignocellulosic materials, its genome is being sequenced by the Department of Energy Joint Genome Institute (JGI). The fungus is typically reported to grow on or near the wood of conifers, although it has been observed growing on hardwoods as well. Fruit bodies may grow solitarily, but more typically in groups or small clusters, and appear in the summer to autumn. They may sometimes grow on buried wood and thus appear to be growing on soil.
Galerina marginata is widely distributed throughout the Northern Hemisphere, found in North America, Europe, Japan, Iran, continental Asia, and the Caucasus. In North America, it has been collected as far north as the boreal forest of Canada and subarctic and arctic habitats in Labrador, and south to Jalisco, Mexico. It is also found in Australia, and in Antarctica.
Toxicity
The toxins found in Galerina marginata are known as amatoxins. Amatoxins belong to a family of bicyclic octapeptide derivatives composed of an amino acid ring bridged by a sulfur atom and characterized by differences in their side groups; these compounds are responsible for more than 90% of fatal mushroom poisonings in humans. The amatoxins inhibit the enzyme RNA polymerase II, which copies the genetic code of DNA into messenger RNA molecules. The toxin naturally accumulates in liver cells, and the ensuing disruption of metabolism accounts for the severe liver dysfunction caused by amatoxins. Amatoxins also lead to kidney failure because, as the kidneys attempt to filter out the poison, it damages the convoluted tubules and reenters the blood to recirculate and cause more damage. Initial symptoms after ingestion include severe abdominal pain, vomiting, and diarrhea, which may last for six to nine hours. Beyond these symptoms, the toxins severely affect the liver, resulting in gastrointestinal bleeding, coma, kidney failure, or even death, usually within seven days of consumption.
Galerina marginata was shown in various studies to contain the amatoxins α-amanitin and γ-amanitin, first as G. venenata, then as G. marginata and G. autumnalis. The ability of the fungus to produce these toxins was confirmed by growing the mycelium as a liquid culture (only trace amounts of β-amanitin were found). G. marginata is thought to be the only species of the amatoxin-producing genera that will produce the toxins while growing in culture. Both amanitins were quantified in G. autumnalis (1.5 mg/g dry weight) and G. marginata (1.1 mg/g dry weight). Later experiments confirmed the occurrence of γ-amanitin and β-amanitin in German specimens of G. autumnalis and G. marginata and revealed the presence of all three amanitins in the fruit bodies of G. unicolor. Although some mushroom field guides claim that the species (as G. autumnalis) also contains phallotoxins (which, in any case, cannot be absorbed by humans), scientific evidence does not support this contention. A 2004 study determined that the amatoxin content of G. marginata varied from 78.17 to 243.61 μg/g of fresh weight. In this study, the amanitin amounts from certain Galerina specimens were higher than those from some Amanita phalloides, a European fungus generally considered the richest in amanitins. The authors suggest that "other parameters such as extrinsic factors (environmental conditions) and intrinsic factors (genetic properties) could contribute to the significant variance in amatoxin contents from different specimens." The lethal dose of amatoxins has been estimated to be about 0.1 mg/kg human body weight, or even lower. Based on this value, the ingestion of 10 G. marginata fruit bodies containing about 250 μg of amanitins per gram of fresh tissue could poison a child weighing approximately . However, a 20-year retrospective study of more than 2100 cases of amatoxin poisonings from North America and Europe showed that few cases were due to ingestion of Galerina species. This low frequency may be attributed to the mushroom's nondescript appearance as a "little brown mushroom", leading to it being overlooked by collectors, and to the fact that 21% of amatoxin poisonings were caused by unidentified species.
The toxicity of certain Galerina species has been known for a century. In 1912, Charles Horton Peck reported a human poisoning case due to G. autumnalis. In 1954, a poisoning was caused by G. venenata. Between 1978 and 1995, ten cases caused by amatoxin-containing Galerinas were reported in the literature. Three European cases, two from Finland and one from France, were attributed to G. marginata and G. unicolor, respectively. Seven North American exposures included two fatalities from Washington due to G. venenata, with five cases responding positively to treatment; four poisonings were caused by G. autumnalis from Michigan and Kansas, in addition to a poisoning caused by an unidentified Galerina species from Ohio. Several poisonings have been attributed to collectors consuming the mushrooms after mistaking them for the hallucinogenic Psilocybe stuntzii.
See also
List of deadly fungi
Footnotes
Cited books
External links
Hymenogastraceae
Deadly fungi
Fungi of Asia
Fungi of Australia
Fungi of North America
Fungi described in 1789
Taxa named by August Batsch
Fungi of Europe
Fungus species | Galerina marginata | ["Biology"] | 3,672 | ["Fungi", "Fungus species"] |
8,749,261 | https://en.wikipedia.org/wiki/Hepoxilin | Hepoxilins (Hx) are a set of epoxyalcohol metabolites of polyunsaturated fatty acids (PUFA), i.e. they possess both an epoxide and an alcohol (i.e. hydroxyl) residue. HxA3, HxB3, and their non-enzymatically formed isomers are nonclassic eicosanoids derived from the PUFA arachidonic acid. A second group of less well studied hepoxilins, HxA4, HxB4, and their non-enzymatically formed isomers, are nonclassical eicosanoids derived from the PUFA eicosapentaenoic acid. Recently, 14,15-HxA3 and 14,15-HxB3 have been defined as arachidonic acid derivatives that are produced by a different metabolic pathway than HxA3, HxB3, HxA4, or HxB4 and differ from the aforementioned hepoxilins in the positions of their hydroxyl and epoxide residues. Finally, hepoxilin-like products of two other PUFAs, docosahexaenoic acid and linoleic acid, have been described. All of these epoxyalcohol metabolites are at least somewhat unstable and are readily converted, enzymatically or non-enzymatically, to their corresponding trihydroxy counterparts, the trioxilins (TrX). HxA3 and HxB3, in particular, are rapidly metabolized to TrXA3, TrXB3, and TrXC3. Hepoxilins have various biological activities in animal models and/or cultured mammalian (including human) tissues and cells. The TrX metabolites of HxA3 and HxB3 have less or no activity in most of the systems studied, but in some systems retain the activity of their precursor hepoxilins. Based on these studies, it has been proposed that the hepoxilins and trioxilins function in human physiology and pathology by, for example, promoting inflammation responses and dilating arteries to regulate regional blood flow and blood pressure.
History
HxA3 and HxB3 were first identified, named, and shown to have biological activity in stimulating insulin secretion in cultured rat pancreatic islets of Langerhans by CR Pace-Asciak and JM Martin, working in Canada in 1984. Shortly thereafter, Pace-Asciak identified and named HxA4 and HxB4 and showed that they likewise have insulin secretagogue activity.
Nomenclature
HxA3, HxB3, and their isomers are distinguished from most other eicosanoids (i.e. signaling molecules made by oxidation of 20-carbon fatty acids) in that they contain both epoxide and hydroxyl residues; they are structurally differentiated in particular from two other classes of arachidonic acid-derived eicosanoids, the leukotrienes and lipoxins, in that they lack conjugated double bonds. HxA4 and HxB4 are distinguished from HxA3 and HxB3 by possessing four rather than three double bonds. The 14,15-HxA3 and 14,15-HxB3 non-classical eicosanoids are distinguished from the aforementioned hepoxilins in that they are formed by a different metabolic pathway and differ in the positioning of their epoxide and hydroxyl residues. Two other classes of epoxyalcohol fatty acids, those derived from the 22-carbon polyunsaturated fatty acid, docosahexaenoic acid, and the 18-carbon fatty acid, linoleic acid, are distinguished from the aforementioned hepoxilins by their carbon chain length; they are termed hepoxilin-like rather than hepoxilins. A hepoxilin-like derivative of linoleic acid is formed on linoleic acid that is esterified to a sphingosine in a complex lipid termed esterified omega-hydroxyacyl-sphingosine (EOS).
Note on nomenclature ambiguities
The full structural identities of the hepoxilins and hepoxilin-like compounds in most studies are unclear in two important respects. First, the R versus S chirality of their hydroxyl residue is undefined in the initial studies and in most studies thereafter, and is therefore given with, for example, HxB3 as 10R/S-hydroxy or just 10-hydroxy. Second, the R,S versus S,R chirality of the epoxide residue in these earlier studies likewise goes undefined and is given with, for example, HxB3 as 11,12-epoxide. While some later studies have defined the chirality of these residues for the products they isolated, it is often not clear whether the earlier studies dealt with products that had exactly the same or a different chirality at these residues.
Biochemistry
Hepoxilins, such as HxA3 and HxB3, are metabolic intermediates derived from the polyunsaturated fatty acid (PUFA), arachidonic acid. They possess both an epoxide and a hydroxyl residue. As metabolic intermediates, hepoxilins play several roles in human physiology and pathology. They have various biological activities in animal models and/or cultured mammalian (including human) tissues and cells. For example, they have been implicated in promoting the neutrophil-based inflammatory response to various bacteria in the intestines and lungs of rodents.
Production
Human HxA3 and HxB3 are formed in a two-step reaction. First, molecular oxygen (O2) is added to carbon 12 of arachidonic acid (i.e. 5Z,8Z,11Z,14Z-eicosatetraenoic acid) and concurrently the 11Z double bond in this arachidonate moves to the 10E position to form the intermediate product, 12S-hydroperoxy-5Z,8Z,10E,14Z-eicosatetraenoic acid (i.e. 12S-hydroperoxyeicosatetraenoic acid or 12S-HpETE). Second, 12S-HpETE is converted to the hepoxilin products, HxA3 (i.e. 8R/S-hydroxy-11,12-oxido-5Z,9E,14Z-eicosatrienoic acid) and HxB3 (i.e. 10R/S-hydroxy-11,12-oxido-5Z,8Z,14Z-eicosatrienoic acid). This two-step metabolic reaction is illustrated below:
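The illustration referenced above is not reproduced in this text; the following scheme is a reconstruction that summarizes the two steps, using only the compound and enzyme names given here and in the next paragraph:

```latex
\text{arachidonic acid} \xrightarrow{+\,\mathrm{O_2},\ \text{ALOX12}} \text{12S-HpETE} \xrightarrow{\text{ALOX12 or ALOXE3}} \text{HxA3} + \text{HxB3}
```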
The second step in this reaction, the conversion of 12(S)-HpETE to HxA3 and HxB3, may be catalyzed by ALOX12 as an intrinsic property of the enzyme. Based on gene knockout studies, however, the epidermal lipoxygenase ALOXE3, or more correctly its mouse ortholog Aloxe3, appears responsible for converting 12(S)-HpETE to HxB3 in mouse skin and spinal tissue. It is suggested that ALOXE3 contributes in part or whole to the production of HxB3, and perhaps other hepoxilins, in tissues where it is expressed, such as the skin. Furthermore, hydroperoxide-containing unsaturated fatty acids can rearrange non-enzymatically to form a variety of epoxyalcohol isomers. The 12(S)-HpETE formed in tissues, it is suggested, may similarly rearrange non-enzymatically to form HxA3 and HxB3. Unlike the products made by ALOX12 and ALOXE3, which are stereospecific in forming only HxA3 and HxB3, this non-enzymatic production may form a variety of hepoxilin isomers and may occur as an artifact of tissue processing. Finally, cellular peroxidases readily and rapidly reduce 12(S)-HpETE to its hydroxyl analog, 12S-hydroxy-5Z,8Z,10E,14Z-eicosatetraenoic acid (12S-HETE; see 12-hydroxyeicosatetraenoic acid); this reaction competes with the hepoxilin-forming reaction and, in cells expressing very high peroxidase activity, may be responsible for blocking the formation of the hepoxilins.
ALOX15 is responsible for metabolizing arachidonic acid to 14,15-HxA3 and 14,15-HxB3, as indicated in the following two-step reaction, which first forms 15(S)-hydroperoxy-5Z,8Z,11Z,13E-eicosatetraenoic acid (15S-HpETE) and then two specific isomers, 11S/R-hydroxy-14S,15S-epoxy-5Z,8Z,12E-eicosatrienoic acid (i.e. 14,15-HxA3) and 13S/R-hydroxy-14S,15S-epoxy-5Z,8Z,11Z-eicosatrienoic acid (i.e. 14,15-HxB3):
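As with the previous pathway, the missing scheme can be reconstructed from the names given in the text:

```latex
\text{arachidonic acid} \xrightarrow{\text{ALOX15}} \text{15S-HpETE} \xrightarrow{\text{ALOX15}} \text{14,15-HxA3} + \text{14,15-HxB3}
```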
ALOX15 appears capable of conducting both steps in this reaction although further studies may show that ALOXE3, non-enzymatic rearrangements, and the reduction of 15S-HpETE to 15(S)-hydroxy-5Z,8Z,11Z,13E-eicosatetraenoic acid (i.e. 15S-HETE; see 15-hydroxyicosatetraenoic acid) may be involved in the production of 14,15-HxA3 and 14,15-HxB3 as they are in that of HxA3 and HxB3.
The hepoxilin-like metabolites of docosahexaenoic acid, 7R/S-hydroxy-10,11-epoxy-4Z,7E,13Z,16Z,19Z-docosapentaenoic acid (i.e. 7-hydroxy-bis-α-dihomo-HxA5) and 10-hydroxy-13,14-epoxy-4Z,7EZ,11E,16Z,19Z-docosapentaenoic acid (i.e. 10-hydroxy-bis-α-dihomo-HxA5), were formed (or inferred to be formed, based on the formation of their trihydroxy metabolites; see trioxilins, below) when docosahexaenoic acid was added to the pineal gland or hippocampus isolated from rats; the pathway(s) making these products has not been described.
A hepoxilin-like metabolite of linoleic acid forms in the skin of humans and rodents. This hepoxilin is esterified to sphinganine in a lipid complex termed EOS, i.e. esterified omega-hydroxyacyl-sphingosine, that also contains a very long chain fatty acid. In this pathway, ALOX12B metabolizes the esterified linoleic acid to its 9R-hydroperoxy derivative, and then ALOXE3 metabolizes this intermediate to its 13R-hydroxy-9R,10R-epoxy product. The pathway functions to deliver very long chain fatty acids to the cornified lipid envelope of the skin surface.
Further metabolism
HxA3 is extremely unstable and HxB3 is moderately unstable, rapidly decomposing to their tri-hydroxy products, for example, during isolation procedures that use even mildly acidic methods; they are also rapidly metabolized enzymatically in cells to these same tri-hydroxy products, termed trioxilins (TrXs) or trihydroxyeicosatrienoic acids (THETAs). HxA3 is converted to 8,11,12-trihydroxy-5Z,9E,14Z-eicosatrienoic acid (trioxilin A3 or TrXA3) while HxB3 is converted to 10,11,12-trihydroxy-5Z,8Z,14Z-eicosatrienoic acid (trioxilin B3 or TrXB3). A third trihydroxy acid, 8,9,12-trihydroxy-5Z,10E,14Z-eicosatrienoic acid (trioxilin C3 or TrXC3), has been detected in rabbit and mouse aorta tissue incubated with arachidonic acid. The metabolism of HxA3 to TrXA3 and of HxB3 to TrXB3 is accomplished by soluble epoxide hydrolase in mouse liver; since it is widely distributed in various tissues of various mammalian species, including humans, soluble epoxide hydrolase may be the principal enzyme responsible for metabolizing these and perhaps other hepoxilin compounds. It seems possible, however, that other similarly acting epoxide hydrolases, such as microsomal epoxide hydrolase or epoxide hydrolase 2, may prove to have hepoxilin hydrolase activity. While the trihydroxy products of hepoxilin synthesis are generally considered to be inactive, and the sEH pathway is therefore considered to function in limiting the actions of the hepoxilins, some studies found that TrXA3, TrXB3, and TrXC3 were more powerful than HxA3 in relaxing pre-contracted mouse arteries and that TrXC3 was a relatively potent relaxer of pre-contracted rabbit aorta.
HxA3 was converted through a Michael addition catalyzed by glutathione transferase to its glutathione conjugate, HxA3-C, i.e., 11-glutathionyl-HxA3, in a cell-free system or in homogenates of rat brain hippocampus tissue; HxA3-C proved to be a potent stimulator of membrane hyperpolarization in rat hippocampal CA1 neurons. This formation of hepoxilin A3-C appears analogous to the formation of leukotriene C4 by the conjugation of glutathione to leukotriene A4. Glutathione conjugates of 14,15-HxA3 and 14,15-HxB3 have also been detected in the human Hodgkin disease Reed–Sternberg cell line, L1236.
HxB3 and TrXB3 are found esterified into the sn-2 position of phospholipids in human psoriasis lesions, and samples of human psoriatic skin acylate HxB3 and TrXB3 into these phospholipids in vitro.
Physiological effects
Virtually all of the biological studies on hepoxilins have been conducted in animals or in vitro on animal and human tissues. However, these studies often give different, species-specific results, which complicates their relevance to humans. The useful translation of these studies to human physiology, pathology, and clinical medicine and therapies requires much further study.
Inflammation
HxA3 and HxB3 possess pro-inflammatory actions in, for example, stimulating human neutrophil chemotaxis and increasing the permeability of skin capillaries. Studies in humans have found that the amount of HxA3 is >16-fold higher in psoriatic lesions than in normal epidermis. It is present in psoriatic scales at ~10 micromolar, a concentration which is able to exert biologic effects; HxB3 was not detected in these tissues, although its presence was strongly indicated by the presence of its metabolite, TrXB3, at relatively high levels in psoriatic scales but not in normal epidermal tissue. These results suggest that the pro-inflammatory effects of HxA3 and HxB3 may contribute to the inflammatory response that accompanies psoriasis and perhaps other inflammatory skin conditions. HxA3 has also been implicated in promoting the neutrophil-based inflammatory response to various bacteria in the intestines and lungs of rodents; this allows that this hepoxilin may also promote the inflammatory response of humans in tissues besides the skin, particularly those with a mucosal surface. In addition, HxA3 and a synthetic analog of HxB3, PBT-3, induce human neutrophils to produce neutrophil extracellular traps, i.e. DNA-rich extracellular fibril matrixes able to kill extracellular pathogens while minimizing tissue damage; hence these hepoxilins may contribute to innate immunity by being responsible for the direct killing of pathogens.
Circulation
In addition to 12S-HETE and 12R-HETE, HxA3, TrXA3, and TrXC3, but neither HxB3 nor TrXB3, relax mouse mesentery arteries pre-contracted by thromboxane A2 (TXA2). Mechanistically, these metabolites form in the vascular endothelium, move to the underlying smooth muscle, and reverse the smooth muscle contraction caused by TXA2 by functioning as a receptor antagonist, i.e. they competitively inhibit the binding of TXA2 to its thromboxane receptor, α isoform. In contrast, 15-lipoxygenase-derived epoxyalcohol and trihydroxy metabolites of arachidonic acid, viz. 15-hydroxy-11,12-epoxyeicosatrienoic acid, 13-hydroxy-14,15-epoxy-eicosatrienoic acid (a 14,15-HxA4 isomer), and 11,12,15-trihydroxyeicosatrienoic acid, dilate rabbit aorta by an endothelium-derived hyperpolarizing factor (EDHF) mechanism, i.e. they form in the vessel's endothelium, move to the underlying smooth muscle, and trigger a hyperpolarization-induced relaxation response by binding to and thereby opening apamin-sensitive small conductance (SK) calcium-activated potassium channels. The cited metabolites may use one or the other of these two mechanisms in different vascular beds and in different animal species to contribute to regulating regional blood flow and blood pressure. While the role of these metabolites in the human vasculature has not been studied, 12S-HETE, 12R-HETE, HxA3, TrXA3, and TrXC3 do inhibit the binding of TXA2 to the human thromboxane receptor.
Pain perception
HxA3 and HxB3 appear responsible for the hyperalgesia and tactile allodynia (pain caused by a normally non-painful stimulus) responses of mice to skin inflammation. In this model, the hepoxilins are released in the spinal cord and directly activate TRPV1 and TRPA1 receptors to augment the perception of pain. TRPV1 (transient receptor potential cation channel subfamily V member 1, also termed the capsaicin receptor or the vanilloid receptor) and TRPA1 (transient receptor potential cation channel, member A1) are plasma membrane ion channels; these channels are known to be involved in the perception of pain caused by exogenous and endogenous physical and chemical stimuli in a wide range of animal species, including humans.
Oxidative stress
Cultured rat RINm5F pancreatic islet cells undergoing oxidative stress secrete HxB3; HxB3 (and HxA3) in turn upregulates peroxidase enzymes which act to decrease this stress. It is proposed that this HxB3-triggered induction of peroxidases constitutes a general compensatory defense response used by a variety of cells to protect their vitality and functionality.
Insulin secretion
The insulin-secreting actions of HxA3 and HxB3 on isolated rat pancreatic islet cells involve their ability to increase or potentiate the insulin-secreting activity of glucose, require very high concentrations (e.g. 2 micromolar) of the hepoxilins, and have not been extended to intact animals or humans.
Hepoxilins are also produced in the brain.
References
Metabolic intermediates
Human physiology
Animal physiology
Fatty acids
Eicosanoids
Immunology
Cell signaling
Epoxides | Hepoxilin | ["Chemistry", "Biology"] | 4,395 | ["Animals", "Animal physiology", "Immunology", "Metabolic intermediates", "Biomolecules", "Metabolism"] |
8,749,373 | https://en.wikipedia.org/wiki/Catchment%20area | A catchment area, in human geography, is the area from which a location, such as a city, service or institution, attracts a population that uses its services and economic opportunities. Catchment areas may be defined based on where people are naturally drawn from (for example, a labour catchment area) or may be established by governments or organizations, such as education authorities or healthcare providers, for the provision of services.
Governments and community service organizations often define catchment areas for planning purposes and for public safety, such as ensuring universal access to services like fire departments, police departments, ambulance bases and hospitals. In business, a catchment area describes the area from which a retail location draws its customers. Airport catchment areas can inform efforts to estimate route profitability. A health catchment area is of importance in public health and healthcare planning, as it helps in resource allocation, service delivery, and accessibility assessment.
Types of catchment areas
A catchment area can be defined relative to a location, and based upon a number of factors, including distance, travel time, geographic boundaries or population within the catchment.
Catchment areas generally fall under two categories: those that occur organically, i.e. "de facto" catchment areas, where people are naturally drawn to a place such as a large shopping centre, and those established formally for the provision of services. A catchment area in the sense of a place people are drawn to could be a city, service or institution.
Catchment area boundaries can be modeled using geographic information systems (GIS). There can be large variability in the services provided within different catchment areas in the same region, depending upon how and when those catchments were established. They are usually contiguous but can overlap when they describe competing services. The boundaries of catchment areas can also vary by travel time: for example, a 1-hour cut-off is indicative of daily commuting, while a 3-hour cut-off reflects essential but less frequent services.
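As a rough sketch of the travel-time approach just described (all data, place names, and the straight-line speed assumption below are illustrative, not from any GIS package; real analyses use road networks or travel-friction surfaces):

```python
# Illustrative only: assign settlements to a facility's catchment when the
# estimated travel time falls within a cutoff (1 hour ~ daily commuting,
# 3 hours ~ essential but less frequent services).
import math

AVERAGE_SPEED_KMH = 60.0  # assumed average travel speed

def travel_time_hours(a, b):
    """Crude straight-line travel time between two (x_km, y_km) points."""
    return math.dist(a, b) / AVERAGE_SPEED_KMH

def catchment(facility, settlements, cutoff_hours=1.0):
    """Return the settlements whose travel time to `facility` is within the cutoff."""
    return [name for name, point in settlements.items()
            if travel_time_hours(facility, point) <= cutoff_hours]

towns = {"Town A": (10.0, 5.0), "Town B": (45.0, 40.0), "Town C": (80.0, 5.0)}
hospital = (12.0, 8.0)
print(catchment(hospital, towns))                    # 1-hour catchment
print(catchment(hospital, towns, cutoff_hours=3.0))  # 3-hour catchment
```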
Defining
Identification of "de facto" catchment areas
GIS technology has allowed for the modeling of catchment areas, and in particular those relating to urban areas. Based on travel time between rural areas and cities of different sizes, the urban–rural catchment areas (URCAs) is a global GIS dataset that allows for comparison across countries, such as the distribution of population along the rural–urban continuum. Functional economic areas (FEAs), also called larger urban zone or functional urban areas, are catchment areas of commuters or commuting zones. A limitation of the URCA and FEA is that the models link locations to a single urban center of reference, even though there may be multiple centers of reference for varying activities.
Establishment of catchment areas for specific services
Catchment areas may be established for the provision of services. For example, a school catchment area is the geographic area from which students are eligible to attend a local school. When a facility's capacity can only service a specific volume, the catchment may be used to limit a population's ability to access services outside that area. In the case of a school catchment area, children may be unable to enroll in a school outside their catchment to prevent the school's services being exceeded.
GIS can also inform for the establishment of health care or hospital catchment areas. Such catchment areas can also define the epidemiological disease burdens or forecast hospital needs amid a disease outbreak. They are used to evaluate population health outcomes, especially for diseases like cancer and chronic conditions. Understanding the catchment area helps health systems optimize service coverage, measure healthcare utilization, and identify underserved regions.
Health catchment areas are often employed in research to study the relationship between geographical factors and healthcare outcomes. For example, they are used in cancer research to understand the distribution of cases and ensure that healthcare resources are equitably distributed. They are also used in epidemiological studies to assess the reach and impact of healthcare interventions. One challenge in defining catchment areas is that they may not accurately reflect patient behavior or health-seeking patterns, particularly in areas where patients have access to multiple health facilities.
Defining city–regions based on overlapping catchment areas
Overlapping catchment areas can be used to determine city–regions, reflecting the interconnectedness of urban centers. The Nature Cities article “Worldwide Delineation of Multi-Tier City–Regions” maps the catchment areas of urban centers across four tiers—town, small, intermediate, and large city—based on travel time using a global travel friction grid, acknowledging that individuals may rely on multiple centers for various needs, with larger centers offering a wider range of activities. The dataset, classifying over 30,000 urban centers into the four tiers, is publicly available.
Examples
Airports can be built and maintained in locations which minimize the driving distance for the surrounding population to reach them.
A neighborhood or district of a city often has several small convenience shops, each with a catchment area of several streets. Supermarkets, on the other hand, have a much lower density, with catchment areas of several neighborhoods (or several villages in rural areas). This principle, similar to the central place theory, makes catchment areas an important area of study for geographers, economists, and urban planners.
In order to compensate for income inequalities, distances, variations in secondary educational level, and other similar factors, a nation may structure its higher education catchment areas to ensure a good mixture of students from different backgrounds.
Hong Kong divides its primary schools into School Nets under its Primary One Admission System, functioning as catchment areas for allocation of school places.
To inform prospective employers, transport providers, planners and local authorities, data detailing the travel-to-work patterns of seven towns in the Western Region of Ireland were used to define each town's labour catchment.
See also
City region
Hinterland
Rural-urban commuting area
School district
Urban planning
Catchment area (health)
References
Human geography
Economic geography | Catchment area | ["Environmental_science"] | 1,170 | ["Environmental social science", "Human geography"] |
8,749,476 | https://en.wikipedia.org/wiki/Conocybe%20rugosa | Conocybe rugosa is a common species of mushroom that is widely distributed and especially common in the Pacific Northwest of the United States. It grows in woodchips, flowerbeds and compost. It has been found in Europe, Asia and North America. It contains the same mycotoxins as the death cap mushroom. Conocybe rugosa was originally described in the genus Pholiotina, and its morphology and a 2013 molecular phylogenetics study supported its continued classification there.
Description
Conocybe rugosa has a conical cap that expands to flat, usually with an umbo. It is less than 3 cm across, has a smooth brown top, and the margin is often striate. The gills are rusty brown, close, and adnexed. The stalk is 2 mm thick and 1 to 6 cm long, smooth, and brown, with a prominent and movable ring. The spores are rusty brown, and it may be difficult to identify the species without a microscope.
Toxicity
This species is deadly poisonous, the fruiting bodies containing alpha-amanitin, a cyclic peptide that is highly toxic to the liver and is responsible for many deaths by poisoning from mushrooms in the genera Amanita and Lepiota. They are sometimes mistaken for species of the genus Psilocybe due to their similar-looking caps.
See also
List of deadly fungi
References
Bolbitiaceae
Deadly fungi
Fungi described in 1898
Fungi of North America
Fungi of Europe
Fungi of Asia
Taxa named by Charles Horton Peck
Fungus species | Conocybe rugosa | ["Biology"] | 309 | ["Fungi", "Fungus species"] |
8,750,824 | https://en.wikipedia.org/wiki/Coordination%20isomerism | Coordination isomerism is a form of structural isomerism in which the composition of the coordination complex ion varies. In a coordination isomer the total ratio of ligand to metal remains the same, but the ligands attached to a specific metal ion change. Examples of a complete series of coordination isomers require at least two metal ions and sometimes more.
For example, a compound containing the ions [Co(NH3)6]3+ and [Cr(CN)6]3− is a coordination isomer of one containing [Cr(NH3)6]3+ and [Co(CN)6]3−.
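Written as neutral salts (a standard textbook pairing consistent with the ions above), the pair makes the unchanged overall ligand-to-metal ratio explicit:

```latex
[\mathrm{Co(NH_3)_6}][\mathrm{Cr(CN)_6}] \qquad \text{and} \qquad [\mathrm{Cr(NH_3)_6}][\mathrm{Co(CN)_6}]
```

Both salts have the same overall composition; only the distribution of the NH3 and CN− ligands between the two metal centers differs.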
See also
Coordination complex#Isomerism – This type of isomerism arises from the interchange of ligands between cationic and anionic entities of different metal ions present in a complex.
References
Zumdahl, Steven. Chemistry. Fifth Edition, 2000.
Miessler, Tarr. Inorganic Chemistry. Fourth Edition, 2011.
Coordination chemistry
Isomerism | Coordination isomerism | ["Chemistry"] | 204 | ["Coordination chemistry", "Isomerism", "Stereochemistry", "Stereochemistry stubs"] |
8,750,940 | https://en.wikipedia.org/wiki/Queen%20Fabiola%20Foundation%20for%20Mental%20Health | The Queen Fabiola Foundation for Mental Health is a Belgian non-profit organization, named after Queen Fabiola. It operates within the framework of the King Baudouin Foundation.
The foundation was established on 10 October 2004, World Mental Health Day. As its main objective, it organizes activities in the field of mental health and stimulates the exchange of ideas and good practice between the various organisations and associations active in this area.
Objectives
The stated objectives of the foundation are:
stress the importance of mental health in Belgian society
obtain the involvement of the users and their families in the content and organisation of mental healthcare
support the work of healthcare providers who are active in the various forms of mental healthcare
encourage the sectors and actors concerned to participate actively in the optimisation of mental health
support the reflection on the various issues of mental health
See also
Queen Elisabeth Medical Foundation
Sources
Queen Fabiola Foundation for Mental Health
External links
King Baudouin Foundation
Biomedical research foundations
Foundations based in Belgium
2004 establishments in Belgium
Organizations established in 2004
Medical and health organisations based in Belgium
King Baudouin Foundation | Queen Fabiola Foundation for Mental Health | ["Engineering", "Biology"] | 214 | ["Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Medical technology stubs", "Biotechnology organizations", "Biomedical research foundations", "Medical technology"] |
8,750,985 | https://en.wikipedia.org/wiki/Insulantarctica | Insulantarctica is a biogeographic province of the Antarctic Realm according to the classification developed by Miklos Udvardy in 1975. It comprises scattered islands of the Southern Ocean, which show clear affinity to each other. These islands belong to different countries. Some of them constitute UNESCO's protected areas.
New Zealand Subantarctic Islands protected area (New Zealand):
Auckland Islands National Nature Reserve Ia
Campbell Islands National Nature Reserve Ia
Antipodes Islands National Nature Reserve Ia
Snares Islands National Nature Reserve Ia
Bounty Islands National Nature Reserve Ia
Auckland Islands Marine Mammal Sanctuary - Category unassigned
Territorial seas at Campbell, Antipodes, Snares and Bounty Islands - Category unassigned
Heard Island and McDonald Islands (HIMI) protected area (Australia)
Macquarie Island (Australia), on World Heritage List since 1997
Kerguelen Islands protected area (France)
Tristan da Cunha Islands (United Kingdom), on World Heritage List since 1995
Prince Edward Islands protected area (South Africa)
Gough Island Wildlife Reserve (UK)
References
Udvardy, M. D. F. (1975). A classification of the biogeographical provinces of the world. IUCN Occasional Paper no. 18. Morges, Switzerland: IUCN.
Udvardy, Miklos D. F. (1975) "World Biogeographical Provinces" (Map). The CoEvolution Quarterly, Sausalito, California.
Clark, M. R. and Dingwall, P. R. (1985). "Conservation of islands in the Southern Ocean. A review of the protected areas of Insulantarctica." International Union for Conservation of Nature and Natural Resources. Cambridge University Press.
External links
Wcmc.org: New Zealand Subantarctic Islands
Wcmc.org: HIMI
Wcmc.org: Macquarie Island
Jncc.gov.uk: Tristan da Cunha Islands
Biogeography
Environment of Antarctica | Insulantarctica | ["Biology"] | 399 | ["Biogeography"] |
8,751,011 | https://en.wikipedia.org/wiki/Cross-figure | A cross-figure (also variously called cross number puzzle or figure logic) is a puzzle similar to a crossword in structure, but with entries that consist of numbers rather than words, where individual digits are entered in the blank cells. Clues may be mathematical ("the seventh prime number"), use general knowledge ("date of the Battle of Hastings") or refer to other clues ("9 down minus 3 across").
Clues
The numbers can be clued in various ways:
The clue can make it possible to find the number required directly, by using general knowledge (e.g. "Date of the Battle of Hastings") or arithmetic (e.g. "27 times 79") or other mathematical facts (e.g. "Seventh prime number")
The clue may require arithmetic to be applied to another answer or answers (e.g. "25 across times 3" or "9 down minus 3 across")
The clue may indicate possible answers but make it impossible to give the correct one without using crosslights (e.g. "A prime number")
One answer may be related to another in a non-determinate way (e.g. "A multiple of 24 down" or "5 across with its digits rearranged")
Some entries may either not be clued at all, or refer to another clue (e.g. 7 down may be clued as "See 13 down" if 13 down reads "7 down plus 5")
Entries may be grouped together for clueing purposes, e.g. "1 across, 12 across, and 17 across together contain all the digits except 0"
Some cross-figures use an algebraic type of clue, with various letters taking unknown values (e.g. "A - 2B", where neither A nor B is known in advance)
Another special type of puzzle uses a real-world situation such as a family outing and base most clues on this (e.g. "Time taken to travel from Ayville to Beetown")
Cross-figures that use mostly the first type of clue may be used for educational purposes, but most enthusiasts would agree that this clue type should be used rarely, if at all. Without this type a cross-figure may superficially seem to be impossible to solve, since no answer can apparently be filled in until another has first been found, which without the first type of clue appears impossible. However, if a different approach is adopted where, instead of trying to find complete answers (as would be done for a crossword) one gradually narrows down the possibilities for individual cells (or, in some cases, whole answers) then the problem becomes tractable. For example, if 12 across and 7 down both have three digits and the clue for 12 across is "7 down times 2", one can work out that (i) the last digit of 12 across must be even, (ii) the first digit of 7 down must be 1, 2, 3 or 4, and (iii) the first digit of 12 across must be between 2 and 9 inclusive. (It is an implicit rule of cross-figures that numbers cannot start with 0; however, some puzzles explicitly allow this) By continuing to apply this sort of argument, a solution can eventually be found. Another implicit rule of cross-figures is that no two answers should be the same (in cross-figures allowing numbers to start with 0, 0123 and 123 may be considered different.)
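As an illustration of this narrowing strategy, the following minimal Python sketch (the clue numbers come from the example above; the function names are illustrative, not from any published solver) enumerates the candidate pairs for the clue "12 across = 7 down times 2" and reports which digits remain possible in each cell:

```python
# Narrowing digit possibilities for the worked example in the text:
# 12 across (three digits) is clued as "7 down times 2" (7 down: three digits).
# Rather than solving outright, enumerate candidates and record which digits
# remain possible in each cell.

def candidates():
    """Yield (seven_down, twelve_across) pairs consistent with the clue."""
    for seven_down in range(100, 1000):      # answers may not start with 0
        twelve_across = seven_down * 2
        if twelve_across <= 999:             # 12 across must stay three digits
            yield seven_down, twelve_across

def digit_options(entry, position):
    """Digits still possible at `position` (0 = first) of 7dn or 12ac."""
    index = {"7dn": 0, "12ac": 1}[entry]
    return sorted({str(pair[index])[position] for pair in candidates()})

print(digit_options("7dn", 0))   # ['1', '2', '3', '4']
print(digit_options("12ac", 0))  # ['2', '3', ..., '9']
print(digit_options("12ac", 2))  # ['0', '2', '4', '6', '8'] (even digits only)
```

Repeating this kind of elimination as crossing entries are constrained is what eventually makes the puzzle tractable.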
Creation
A curious feature of cross-figures is that it makes perfect sense for the setter of a puzzle to try to solve it themselves. Indeed, the setter should ideally do this (without direct reference to the answer), as it is essentially the only way to find out whether the puzzle has a single unique solution. Alternatively, there are computer programs available that can be used for this purpose; however, they may not make it clear how difficult the puzzle is.
Popularity
Given that some basic mathematical knowledge is needed to solve cross-figures, they are much less popular than crosswords. As a result, very few books of them have ever been published. Dell Magazines publishes a magazine called Math & Logic Problems four times a year that includes these puzzles, which they name "Figure Logics"; the eighteen puzzles contained within each issue generally increase in difficulty, from easy to "challenger". A magazine called Figure it Out, which was dedicated to number puzzles, included some, but it was very short-lived. This also explains why cross-figures have fewer established conventions than crosswords (especially cryptic crosswords). One exception is the use of the semicolon (;) to attach two strings of numbers together; for example, 1234;5678 becomes 12345678. Some cross-figures voluntarily ignore this option and other "non-mathematical" approaches (e.g. palindromic numbers and repunits) where the same result can be achieved through algebraic means.
External links
"Cross-figure Puzzles by Yochanan Dvir"
"On Crossnumber Puzzles and The Lucas-Bonaccio Farm 1998"
The Little Pigley Farm crossnumber puzzle and its history by Joel Pomerantz
Cross-figure/Crossword Hybrids by Jordan Inman
Logic puzzles
Recreational mathematics
Crosswords | Cross-figure | ["Mathematics"] | 1,076 | ["Recreational mathematics"] |
8,751,138 | https://en.wikipedia.org/wiki/Candy%20cap | Candy cap or curry milkcap is the English-language common name for two closely related edible species of Lactarius: Lactarius camphoratus and Lactarius rubidus. Additionally, L. rufulus is termed the southern candy cap. Many similar species are known.
Candy caps are valued for their highly aromatic qualities and are used culinarily as a flavoring rather than as a constituent of a full meal.
Taxonomy
The candy caps have been placed in various infrageneric groups of Lactarius depending on the author. Bon defined the candy caps and allies as making up the subsection Camphoratini of the section Olentes. Subsection Camphoratini is defined by its members' similarity in color, their odor (with the exception of L. rostratus), and the presence of macrocystidia on their hymenium. (The other subsection of Olentes, Serifluini, is also aromatic, but has very different aromas from the Camphoratini and is entirely lacking in cystidia.)
Bon and later European authors treated all species that were aromatic and had at least a partially epithelial pileipellis as section Olentes, whereas Hesler and Smith and later North American authors treat all species with such a pileipellis (both aromatic and non-aromatic) as the section Thojogali. However, a thorough molecular phylogenetic investigation of Lactarius has yet to be published, and older classification systems of Lactarius are generally not regarded as natural.
Description
Candy caps are small to medium-size mushrooms, with a pileus ranging from in diameter (though L. rubidus can be slightly larger), and with coloration ranging through various burnt orange to burnt orange-red to orange-brown shades. The pileus shape ranges from broadly convex in young specimens to plane to slightly depressed in older ones; lamellae are attached to subdecurrent. The stipe is long.
The entire fruiting body can be either firm or fragile and brittle. Like all members of Lactarius, the fruiting body exudes a latex when broken, which in these species is whitish and watery in appearance, and is often compared to whey or nonfat milk. The latex may have little flavor or may be slightly sweet, but should never taste bitter or acrid. These species are particularly distinguishable by their scent, which has been variously compared to maple syrup, camphor, curry, fenugreek, burnt sugar, Malt-O-Meal, or Maggi-Würze. This scent may be quite faint in fresh specimens, but typically becomes quite strong when the fruiting body is dried.
Microscopically, they share features typical of Lactarius, including round to slightly ovular spores with distinct amyloid ornamentation and sphaerocysts that are abundant in the pileus and stipe trama, but infrequent in the lamellar trama.
Chemistry
The chemical responsible for the distinct odor of the candy cap was isolated in 2012 by chemical ecologist and natural product chemist William Wood of Humboldt State University, from collections of Lactarius rubidus. The odoriferous compound found in the fresh tissue and latex of the mushroom was found to be quabalactone III, an aromatic lactone. When the tissue and latex is dried, quabalactone III is hydrolyzed into sotolon, an even more powerfully aromatic compound, and one of the main compounds responsible for the aroma of maple syrup, as well as that of curry.
The question of what compound was responsible for the odor of the candy cap had been under investigation by Wood and various students for a period of 27 years, beginning when a mycology student in a class he was teaching asked what compound was responsible for the mushroom's odor. Isolation of the compound remained elusive until solid-phase microextraction was used to extract the volatile compounds, which were then analyzed using gas chromatography–mass spectrometry.
Earlier investigation of the aromatic compounds of L. helvus by Rapior et al. had also yielded sotolon (among a large number of other aromatic compounds), which was identified as giving this species its distinct fenugreek odor. Other important volatile compounds identified included decanoic acid and 2-methylbutyric acid.
Analysis of Lactarius camphoratus has shown that it contains 12-hydroxycaryophyllene-4,5-oxide, a caryophyllene compound. However, this was not identified as an aromatic component of this mushroom.
Identification
It is possible to mistake other distasteful or toxic species of mushrooms for candy caps, or to mistakenly include such species in a larger collection of candy caps. Those inexperienced with mushroom identification may mistake any number of "little brown mushrooms" ("LBMs") for candy caps, including the deadly galerina (Galerina marginata and allies), which can occur in the same habitat. Candy caps can be distinguished from non-Lactarius species by their brittle stipe, while most other "LBMs" have a more flexible stipe. It is therefore recommended that candy caps be gathered by hand, breaking the fragile stipe in one's fingers. By this method, LBMs with a cartilaginous stipe are easily distinguished.
Candy caps may also be confused with any of a large number of small, similarly colored species of Lactarius that may be distasteful to downright toxic depending on the species and the number consumed.
Candy caps may be distinguished from other Lactarius by the following characteristics:
Odor: Candy caps have a distinctive odor (described above) that should not be present in other species of Lactarius. Note, however, that other species of Lactarius may have different, but also distinctive, odors. Fresh candy caps (especially Lactarius rubidus) may also not have a noticeable odor, limiting the utility of this characteristic. A sweet odor is much more evident after briefly singeing the flesh of a candy cap with a match or lighter, which can be useful for identification.
Taste: The flesh and latex of candy caps should always be mild-tasting to somewhat sweet, lacking any hint of bitterness or acridity. Note, however, that there are some species of Lactarius, such as L. luculentus, where the bitterness is subtle and also may not be noticeable for a minute or so after tasting.
Latex: The latex of candy caps appears thin and whey-like, like milk that has been mixed with water. This latex does not change color nor does it discolor the flesh of the mushroom. Other species of Lactarius have a distinctly white or colored latex, which in some species discolors the flesh of the mushroom.
Pileus: Candy caps never have a zonate pattern of coloration on the surface of the pileus, nor is the pileus ever even slightly viscid.
Similar species
A number of species of Lactarius are distinctly aromatic, though only some of these species are thought to be closely related to the candy cap group.
The subsection Camphoratini includes Lactarius rostratus, a species found in northern Europe, though quite rare. Unlike other members of subsection Camphoratini, L. rostratus has an unpleasant (even nauseating) smell, described as resembling ivy. Lactarius cremor is a name sometimes used for mushrooms in this group, however, Heilmann-Clausen, et al. consider this name to be nomen dubium, referring variously to Lactarius rostratus, L. serifluus, or L. fulvissimus depending on the author's concept of L. cremor. Lactarius mukteswaricus and L. verbekenae, two species described from the Kumaon area of the Indian Himalaya in 2004, are reported to be very closely related to L. camphoratus, including in odor.
Lactarius rufulus is reported by one source as being a "candy cap" species and having a similar odor to the other candy caps, though earlier monographs do not report such an aroma and describe the flavor as subacrid.
Lactarius helvus and L. aquifluus, found in Europe and North America, respectively, are also strongly aromatic and similar to candy caps, the former having the odor of fenugreek. Lactarius helvus is known to be mildly toxic, causing gastrointestinal upset. The edibility of L. aquifluus is unknown, but as it is a close relative of L. helvus, it is suspected of being toxic. Lactarius species with yellow latex (or white latex that turns yellow) may be dangerous.
Lactarius glyciosmus and L. cocosiolens both have a distinct coconut odor. L. glyciosmus, however, has a subacrid flavor, though it is reported as having been gathered commercially in Scotland.
Distribution and habitat
Near the West Coast of North America, candy caps can be found from December through March.
Ecology
Like other species of Lactarius, candy caps are generally thought to be ectotrophic, with L. camphoratus having been identified in ectomycorrhizal root tips. However, unusually for a mycorrhizal species, L. rubidus is also commonly observed growing directly on decaying conifer wood. All candy cap species seem to be associated with a range of tree species.
Uses
Candy caps are not typically consumed as a vegetable the way most other edible mushrooms are consumed. Because of the strongly aromatic quality of these mushrooms, they are instead used primarily as a flavoring, much the way vanilla, saffron, or truffles are used. They impart a flavor and aroma to foods that has been compared to maple syrup or curry, but with a much stronger aroma than either of these seasonings. Candy caps are unique among edible mushrooms in that they are often used in sweet and dessert foods, such as cookies and ice cream. They are also sometimes used to flavor savory dishes that are traditionally prepared with sweet accompaniments, such as pork, and are also sometimes used in place of curry seasoning.
They are usually used in dried form, as the characteristic aroma intensifies greatly upon drying. To use them as a flavoring, the dried mushrooms are either powdered or they are infused into one of the liquid ingredients used in the dish, for example, being steeped in hot milk, much the same way whole vanilla beans are.
As a result of these culinary properties, candy caps are highly sought after by many chefs. Lactarius rubidus is commercially gathered and sold in California while L. camphoratus is gathered and sold in the United Kingdom and Yunnan Province, China.
Marchand reports that some individuals use L. camphoratus as part of a pipe tobacco mix.
See also
List of Lactarius species
References
Arora D. (1986). Mushrooms Demystified (2nd ed). Berkeley, CA: Ten Speed Press.
External links
From North American species of Lactarius by L. R. Hesler and Alexander H. Smith, 1979:
Lactarius fragilis var. rubidus: page 505 page 506
Lactarius camphoratus: page 506 page 507 page 508
From MushroomExpert.Com by Michael Kuo:
"Lactarius rubidus", February 2004.
"Lactarius camphoratus", March 2005.
"California Fungi: Lactarius rubidus" by Michael Wood & Fred Stevens, MykoWeb.com, 2001
"Fungus of the Month for October 2005: Lactarius rubidus, candy caps" by Tom Volk.
"Why we're wild about the curry mushroom" by Martyn McLaughlin, The Herald, September 28, 2006.
"Bay Area Mushrooms: Lactarius rubidus and Lactarius rufulus: The Candy Cap" by Debbie Viess, BayAreaMushrooms.org, 2007
Lactarius
Edible fungi
Spices
Fungus common names | Candy cap | ["Biology"] | 2,520 | ["Fungus common names", "Fungi", "Common names of organisms"] |
8,751,221 | https://en.wikipedia.org/wiki/Queen%20Elisabeth%20Medical%20Foundation | The Queen Elisabeth Medical Foundation (QEMF) is a Belgian non-profit organization, founded in 1926 by Elisabeth of Bavaria, wife of Albert I. She founded the organization based on her experience with the wounded from the front line during the First World War. The foundation aims to encourage laboratory research and contacts between researchers and clinical practitioners, with a particular focus on neurosciences. The QEMF supports seventeen university teams throughout Belgium.
See also
King Baudouin Foundation
National Fund for Scientific Research
Queen Elisabeth Music Competition
Queen Fabiola Foundation for Mental Health
References
Queen Elisabeth Medical Foundation
External links
Queen Elisabeth Medical Foundation
Biomedical research foundations
Foundations based in Belgium
Medical and health organisations based in Belgium
1926 establishments in Belgium
Organizations established in 1926 | Queen Elisabeth Medical Foundation | [
"Engineering",
"Biology"
] | 146 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Biotechnology organizations",
"Biomedical research foundations",
"Medical technology"
] |
8,752,455 | https://en.wikipedia.org/wiki/Rat%20Candy | Rat candy is a slang nickname for rodenticide; its exact origins are not conclusively known. One possible origin is the way a rat is attracted to rat poison like a child to candy; another is the use of actual candy, particularly chocolate, as bait when luring a rat into a trap that will lead to its capture or demise.
According to United States Environmental Protection Agency statistics, approximately 13,000 American children were treated for ingesting rat poison in 2004, most mistaking the rodenticide for candy.
Warfarin, an early rat poison, was derived from a compound found in spoiled sweet clover. Tales of poisoned candy also abound in urban legends.
References
Rodenticides | Rat Candy | [
"Biology"
] | 147 | [
"Biocides",
"Rodenticides"
] |
8,752,642 | https://en.wikipedia.org/wiki/Nuclear%20structure | Understanding the structure of the atomic nucleus is one of the central challenges in nuclear physics.
Models
The cluster model
The cluster model describes the nucleus as a molecule-like collection of proton-neutron groups (e.g., alpha particles) with one or more valence neutrons occupying molecular orbitals.
The liquid drop model
The liquid drop model is one of the first models of nuclear structure, proposed by Carl Friedrich von Weizsäcker in 1935. It describes the nucleus as a semiclassical fluid made up of neutrons and protons, with an internal repulsive electrostatic force proportional to the number of protons. The quantum mechanical nature of these particles appears via the Pauli exclusion principle, which states that no two nucleons of the same kind can be at the same state. Thus the fluid is actually what is known as a Fermi liquid.
In this model, the binding energy of a nucleus with $Z$ protons and $N$ neutrons is given by
$$E_B = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(N-Z)^2}{A} + \delta(A,Z),$$
where $A = Z + N$ is the total number of nucleons (mass number). The terms proportional to $A$ and $A^{2/3}$ represent the volume and surface energy of the liquid drop, the term proportional to $Z(Z-1)/A^{1/3}$ represents the electrostatic energy, the term proportional to $(N-Z)^2/A$ represents the Pauli exclusion principle, and the last term $\delta(A,Z)$ is the pairing term, which lowers the energy for even numbers of protons or neutrons.
The coefficients $a_V$, $a_S$, $a_C$, $a_A$ and the strength of the pairing term may be estimated theoretically, or fit to data.
This simple model reproduces the main features of the binding energy of nuclei.
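As a rough numerical illustration, the sketch below evaluates the formula with one representative set of fitted coefficients; the specific values (in MeV) are common textbook fits and are not taken from this article.

```python
import math

def binding_energy(Z: int, N: int) -> float:
    """Semi-empirical (liquid drop) binding energy in MeV.

    Coefficients are representative textbook fits (assumed values);
    different parameterizations differ slightly.
    """
    A = Z + N
    a_V, a_S, a_C, a_A, a_P = 15.75, 17.8, 0.711, 23.7, 11.18  # MeV
    delta = a_P / math.sqrt(A)          # magnitude of the pairing term
    if Z % 2 == 0 and N % 2 == 0:       # even-even: more bound
        pairing = +delta
    elif Z % 2 == 1 and N % 2 == 1:     # odd-odd: less bound
        pairing = -delta
    else:                               # even-odd: no correction
        pairing = 0.0
    return (a_V * A
            - a_S * A ** (2 / 3)
            - a_C * Z * (Z - 1) / A ** (1 / 3)
            - a_A * (N - Z) ** 2 / A
            + pairing)

# Example: iron-56 (Z=26, N=30) gives roughly 8.8 MeV per nucleon
print(binding_energy(26, 30) / 56)
```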
The description of the nucleus as a drop of Fermi liquid is still widely used in the form of the Finite Range Droplet Model (FRDM), because it reproduces the nuclear binding energy across the whole chart of nuclides with the accuracy necessary for predictions of unknown nuclei.
The shell model
The expression "shell model" is ambiguous in that it refers to two different items. It was previously used to describe the existence of nucleon shells according to an approach closer to what is now called mean field theory.
Nowadays, it refers to a formalism analogous to the configuration interaction formalism used in quantum chemistry.
Introduction to the shell concept
Systematic measurements of the binding energy of atomic nuclei show systematic deviations with respect to those estimated from the liquid drop model. In particular, some nuclei having certain values for the number of protons and/or neutrons are bound more tightly together than predicted by the liquid drop model. These nuclei are called singly/doubly magic. This observation led scientists to assume the existence of a shell structure of nucleons (protons and neutrons) within the nucleus, like that of electrons within atoms.
Indeed, nucleons are quantum objects. Strictly speaking, one should not speak of energies of individual nucleons, because they are all correlated with each other. However, as an approximation one may envision an average nucleus, within which nucleons propagate individually. Owing to their quantum character, they may only occupy discrete energy levels. These levels are by no means uniformly distributed; some intervals of energy are crowded, and some are empty, generating a gap in possible energies. A shell is such a set of levels separated from the other ones by a wide empty gap.
The energy levels are found by solving the Schrödinger equation for a single nucleon moving in the average potential generated by all other nucleons. Each level may be occupied by a nucleon, or empty. Some levels accommodate several different quantum states with the same energy; they are said to be degenerate. This occurs in particular if the average nucleus exhibits a certain symmetry, like a spherical shape.
The concept of shells allows one to understand why some nuclei are bound more tightly than others. This is because two nucleons of the same kind cannot be in the same state (Pauli exclusion principle). Werner Heisenberg extended the Pauli exclusion principle to nucleons via the introduction of the isospin concept: the proton and the neutron are regarded as two states of the same particle, the nucleon, distinguished by their isospin quantum number. This concept enables the explanation of the bound state of deuterium, in which the proton and neutron can couple their spin and isospin in two different manners. So the lowest-energy state of the nucleus is one where nucleons fill all energy levels from the bottom up to some level. Nuclei with an odd number of either protons or neutrons are less bound than nuclei with even numbers. A nucleus with full shells is exceptionally stable, as will be explained.
As with electrons in the electron shell model, protons in the outermost shell are relatively loosely bound to the nucleus if there are only a few protons in that shell, because they are farthest from the center of the nucleus. Therefore, nuclei which have a full outer proton shell will be more tightly bound and have a higher binding energy than other nuclei with a similar total number of protons. This is also true for neutrons.
Furthermore, the energy needed to excite the nucleus (i.e. moving a nucleon to a higher, previously unoccupied level) is exceptionally high in such nuclei. Whenever this unoccupied level is the next after a full shell, the only way to excite the nucleus is to raise one nucleon across the gap, thus spending a large amount of energy. Otherwise, if the highest occupied energy level lies in a partly filled shell, much less energy is required to raise a nucleon to a higher state in the same shell.
Some evolution of the shell structure observed in stable nuclei is expected away from the valley of stability. For example, observations of unstable isotopes have shown shifting and even a reordering of the single particle levels of which the shell structure is composed. This is sometimes observed as the creation of an island of inversion or in the reduction of excitation energy gaps above the traditional magic numbers.
Basic hypotheses
Some basic hypotheses are made in order to give a precise conceptual framework to the shell model:
The atomic nucleus is a quantum n-body system.
The internal motion of nucleons within the nucleus is non-relativistic, and their behavior is governed by the Schrödinger equation.
Nucleons are considered to be pointlike, without any internal structure.
Brief description of the formalism
The general process used in shell model calculations is the following. First a Hamiltonian for the nucleus is defined. Usually, for computational practicality, only one- and two-body terms are taken into account in this definition. The interaction is an effective one: it contains free parameters which have to be fitted to experimental data.
The next step consists in defining a basis of single-particle states, i.e. a set of wavefunctions describing all possible nucleon states. Most of the time, this basis is obtained via a Hartree–Fock computation. With this set of one-particle states, Slater determinants are built, that is, wavefunctions for Z proton variables or N neutron variables, which are antisymmetrized products of single-particle wavefunctions (antisymmetrized meaning that under exchange of variables for any pair of nucleons, the wavefunction only changes sign).
In principle, the number of quantum states available for a single nucleon at a finite energy is finite, say $n$. The number of nucleons in the nucleus must be smaller than the number of available states, otherwise the nucleus cannot hold all of its nucleons. There are thus several ways to choose $Z$ (or $N$) states among the $n$ possible. In combinatorial mathematics, the number of choices of $Z$ objects among $n$ is the binomial coefficient $\binom{n}{Z}$. If $n$ is much larger than $Z$ (or $N$), this increases roughly like $n^Z$. Practically, this number becomes so large that every computation is impossible for $A = N + Z$ larger than 8.
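The growth of this count is easy to see numerically; the following snippet, with arbitrary illustrative values of $n$ and $Z$, just evaluates the binomial coefficient:

```python
from math import comb

# Number of ways to choose Z occupied states among n available ones,
# for a few illustrative (n, Z) pairs -- growth is roughly n**Z / Z!.
for n, Z in [(20, 4), (40, 8), (80, 16)]:
    print(f"n={n:3d}, Z={Z:2d}: {comb(n, Z):,} Slater determinants")
```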
To obviate this difficulty, the space of possible single-particle states is divided into a core and a valence space, by analogy with chemistry (see core electron and valence electron). The core is a set of single-particle states which are assumed to be inactive, in the sense that they are the well-bound lowest-energy states, and that there is no need to reexamine their situation. They do not appear in the Slater determinants, contrary to the states in the valence space, which is the space of all single-particle states not in the core, but possibly to be considered when building the Z-body (or N-body) wavefunction. The set of all possible Slater determinants in the valence space defines a basis for the Z-body (or N-body) states.
The last step consists in computing the matrix of the Hamiltonian within this basis and diagonalizing it. In spite of the reduction of the dimension of the basis owing to the fixing of the core, the matrices to be diagonalized easily reach dimensions of the order of $10^9$, and demand specific diagonalization techniques.
Shell model calculations generally give an excellent fit with experimental data. They depend strongly, however, on two main factors:
The way to divide the single-particle space into core and valence.
The effective nucleon–nucleon interaction.
Mean field theories
The independent-particle model (IPM)
The interaction between nucleons, which is a consequence of strong interactions and binds the nucleons within the nucleus, exhibits the peculiar behaviour of having a finite range: it vanishes when the distance between two nucleons becomes too large; it is attractive at medium range, and repulsive at very small range. This last property correlates with the Pauli exclusion principle according to which two fermions (nucleons are fermions) cannot be in the same quantum state. This results in a very large mean free path predicted for a nucleon within the nucleus.
The main idea of the Independent Particle approach is that a nucleon moves inside a certain potential well (which keeps it bound to the nucleus) independently from the other nucleons. This amounts to replacing an N-body problem (N particles interacting) by N single-body problems. This essential simplification of the problem is the cornerstone of mean field theories. These are also widely used in atomic physics, where electrons move in a mean field due to the central nucleus and the electron cloud itself.
The independent particle model and mean field theories (we shall see that there exist several variants) have a great success in describing the properties of the nucleus starting from an effective interaction or an effective potential, thus are a basic part of atomic nucleus theory. One should also notice that they are modular enough, in that it is quite easy to extend the model to introduce effects such as nuclear pairing, or collective motions of the nucleon like rotation, or vibration, adding the corresponding energy terms in the formalism. This implies that in many representations, the mean field is only a starting point for a more complete description which introduces correlations reproducing properties like collective excitations and nucleon transfer.
Nuclear potential and effective interaction
A large part of the practical difficulties met in mean field theories is the definition (or calculation) of the potential of the mean field itself. One can very roughly distinguish between two approaches:
The phenomenological approach is a parameterization of the nuclear potential by an appropriate mathematical function. Historically, this procedure was applied with the greatest success by Sven Gösta Nilsson, who used as a potential a (deformed) harmonic oscillator potential. The most recent parameterizations are based on more realistic functions, which account more accurately for scattering experiments, for example. In particular the form known as the Woods–Saxon potential can be mentioned.
The self-consistent or Hartree–Fock approach aims to deduce mathematically the nuclear potential from an effective nucleon–nucleon interaction. This technique implies a solution of the Schrödinger equation in an iterative fashion, starting from an ansatz wavefunction and improving it variationally, since the potential there depends upon the wavefunctions to be determined. The latter are written as Slater determinants.
In the case of the Hartree–Fock approaches, the trouble is not to find the mathematical function which describes best the nuclear potential, but that which describes best the nucleon–nucleon interaction. Indeed, in contrast with atomic physics where the interaction is known (it is the Coulomb interaction), the nucleon–nucleon interaction within the nucleus is not known analytically.
There are two main reasons for this fact. First, the strong interaction acts essentially among the quarks forming the nucleons. The nucleon–nucleon interaction in vacuum is a mere consequence of the quark–quark interaction. While the latter is well understood in the framework of the Standard Model at high energies, it is much more complicated at low energies due to color confinement and asymptotic freedom. Thus there is yet no fundamental theory allowing one to deduce the nucleon–nucleon interaction from the quark–quark interaction. Furthermore, even if this problem were solved, there would remain a large difference between the ideal (and conceptually simpler) case of two nucleons interacting in vacuum and that of these nucleons interacting in nuclear matter. To go further, it was necessary to invent the concept of effective interaction. The latter is basically a mathematical function with several arbitrary parameters, which are adjusted to agree with experimental data.
Most modern interactions are zero-range, so they act only when the two nucleons are in contact, as introduced by Tony Skyrme. In a seminal paper by Dominique Vautherin and David M. Brink it was demonstrated that a Skyrme force that is density dependent can reproduce basic properties of atomic nuclei. Another commonly used interaction is the finite-range Gogny force.
The self-consistent approaches of the Hartree–Fock type
In the Hartree–Fock approach of the n-body problem, the starting point is a Hamiltonian containing n kinetic energy terms, and potential terms. As mentioned before, one of the mean field theory hypotheses is that only the two-body interaction is to be taken into account. The potential term of the Hamiltonian represents all possible two-body interactions in the set of n fermions. This is the first hypothesis.
The second step consists in assuming that the wavefunction of the system can be written as a Slater determinant of one-particle spin-orbitals. This statement is the mathematical translation of the independent-particle model. This is the second hypothesis.
It now remains to determine the components of this Slater determinant, that is, the individual wavefunctions of the nucleons. To this end, it is assumed that the total wavefunction (the Slater determinant) is such that the energy is minimal. This is the third hypothesis.
Technically, it means that one must compute the mean value of the (known) two-body Hamiltonian on the (unknown) Slater determinant, and impose that its mathematical variation vanishes. This leads to a set of equations where the unknowns are the individual wavefunctions: the Hartree–Fock equations. Solving these equations gives the wavefunctions and individual energy levels of nucleons, and so the total energy of the nucleus and its wavefunction.
This short account of the Hartree–Fock method explains why it is also called the variational approach. At the beginning of the calculation, the total energy is a "function of the individual wavefunctions" (a so-called functional), and everything is then done in order to optimize the choice of these wavefunctions so that the functional has a minimum – hopefully absolute, and not only local. To be more precise, it should be mentioned that the energy is a functional of the density, defined as the sum of the individual squared wavefunctions. This variational idea is also used in atomic physics and condensed matter physics in the form of density functional theory (DFT).
The process of solving the Hartree–Fock equations can only be iterative, since these are in fact a Schrödinger equation in which the potential depends on the density, that is, precisely on the wavefunctions to be determined. Practically, the algorithm is started with a set of grossly reasonable individual wavefunctions (in general the eigenfunctions of a harmonic oscillator). These allow one to compute the density, and therefrom the Hartree–Fock potential. Once this is done, the Schrödinger equation is solved anew, and so on. The calculation stops – convergence is reached – when the difference among wavefunctions, or energy levels, for two successive iterations is less than a fixed value. Then the mean field potential is completely determined, and the Hartree–Fock equations become standard Schrödinger equations. The corresponding Hamiltonian is then called the Hartree–Fock Hamiltonian.
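The iterative scheme can be summarized in a schematic sketch. Here `h_of_density` is a placeholder for whatever procedure builds the mean-field Hamiltonian from the current density, and the convergence test uses the sum of occupied energy levels; both are illustrative choices, not prescribed by the method itself.

```python
import numpy as np

def scf_loop(h_of_density, psi0, tol=1e-8, max_iter=200):
    """Generic self-consistent field iteration (schematic sketch).

    h_of_density -- placeholder callable building the mean-field
                    Hamiltonian matrix from the current density.
    psi0         -- initial guess for the occupied orbitals (columns),
                    e.g. harmonic-oscillator eigenfunctions.
    """
    n_occ = psi0.shape[1]
    psi = psi0
    e_occ_old = np.inf
    for _ in range(max_iter):
        rho = psi @ psi.conj().T        # density from occupied orbitals
        h = h_of_density(rho)           # mean-field Hamiltonian
        e, vecs = np.linalg.eigh(h)     # solve the one-body problem anew
        psi = vecs[:, :n_occ]           # refill the lowest levels
        if abs(e[:n_occ].sum() - e_occ_old) < tol:
            break                       # convergence reached
        e_occ_old = e[:n_occ].sum()
    return e, psi
```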
The relativistic mean field approaches
First developed in the 1970s with the works of John Dirk Walecka on quantum hadrodynamics, the relativistic models of the nucleus were refined towards the end of the 1980s by P. Ring and coworkers. The starting point of these approaches is relativistic quantum field theory. In this context, the nucleon interactions occur via the exchange of virtual particles called mesons. The idea is, in a first step, to build a Lagrangian containing these interaction terms. Second, by an application of the least action principle, one gets a set of equations of motion. The real particles (here the nucleons) obey the Dirac equation, whilst the virtual ones (here the mesons) obey the Klein–Gordon equations.
In view of the non-perturbative nature of strong interaction, and also since the exact potential form of this interaction between groups of nucleons is relatively badly known, the use of such an approach in the case of atomic nuclei requires drastic approximations. The main simplification consists in replacing in the equations all field terms (which are operators in the mathematical sense) by their mean value (which are functions). In this way, one gets a system of coupled integro-differential equations, which can be solved numerically, if not analytically.
The interacting boson model
The interacting boson model (IBM) is a model in nuclear physics in which nucleons are represented as pairs, each of them acting as a boson particle, with integral spin of 0, 2 or 4. This makes calculations feasible for larger nuclei.
There are several branches of this model: in one of them (IBM-1) one groups all types of nucleons in pairs, while in others (for instance, IBM-2) protons and neutrons are paired separately.
Spontaneous breaking of symmetry in nuclear physics
One of the focal points of all physics is symmetry. The nucleon–nucleon interaction and all effective interactions used in practice have certain symmetries. They are invariant under translation (changing the frame of reference so that directions are not altered), under rotation (turning the frame of reference around some axis), and under parity (changing the sense of the axes), in the sense that the interaction does not change under any of these operations. Nevertheless, in the Hartree–Fock approach, solutions which are not invariant under such a symmetry can appear. One speaks then of spontaneous symmetry breaking.
Qualitatively, these spontaneous symmetry breakings can be explained in the following way: in the mean field theory, the nucleus is described as a set of independent particles. Most additional correlations among nucleons which do not enter the mean field are neglected. They can appear, however, through a breaking of the symmetry of the mean field Hamiltonian, which is only approximate. If the density used to start the iterations of the Hartree–Fock process breaks certain symmetries, the final Hartree–Fock Hamiltonian may break these symmetries, if it is advantageous from the point of view of the total energy to keep them broken.
It may also converge towards a symmetric solution. In any case, if the final solution breaks the symmetry, for example, the rotational symmetry, so that the nucleus appears not to be spherical, but elliptic, all configurations deduced from this deformed nucleus by a rotation are just as good solutions for the Hartree–Fock problem. The ground state of the nucleus is then degenerate.
A similar phenomenon happens with the nuclear pairing, which violates the conservation of the number of baryons (see below).
Extensions of the mean field theories
Nuclear pairing phenomenon
The most common extension to mean field theory is nuclear pairing. Nuclei with an even number of nucleons are systematically more bound than those with an odd one. This implies that each nucleon binds with another one to form a pair; consequently, the system cannot be described as independent particles subjected to a common mean field. When the nucleus has an even number of protons and neutrons, each one of them finds a partner. To excite such a system, one must supply at least the energy needed to break a pair. Conversely, in the case of an odd number of protons or neutrons, there exists an unpaired nucleon, which needs less energy to be excited.
This phenomenon is closely analogous to that of Type 1 superconductivity in solid state physics. The first theoretical description of nuclear pairing was proposed at the end of the 1950s by Aage Bohr, Ben Mottelson, and David Pines (work that contributed to Bohr and Mottelson receiving the Nobel Prize in Physics in 1975). It was close to the BCS theory of Bardeen, Cooper and Schrieffer, which accounts for metal superconductivity. Theoretically, the pairing phenomenon as described by the BCS theory combines with the mean field theory: nucleons are both subject to the mean field potential and to the pairing interaction.
The Hartree–Fock–Bogolyubov (HFB) method is a more sophisticated approach, enabling one to consider the pairing and mean field interactions consistently on equal footing. HFB is now the de facto standard in the mean field treatment of nuclear systems.
Symmetry restoration
A peculiarity of mean field methods is the calculation of nuclear properties via explicit symmetry breaking. The calculation of the mean field with self-consistent methods (e.g., Hartree–Fock) breaks rotational symmetry, and the calculation of the pairing property breaks particle-number conservation.
Several techniques for symmetry restoration by projecting on good quantum numbers have been developed.
Particle vibration coupling
Mean field methods (possibly including symmetry restoration) are a good approximation for the ground state of the system, even though they postulate a system of independent particles. Higher-order corrections account for the fact that the particles interact through correlations. These correlations can be introduced by coupling the independent-particle degrees of freedom to the low-energy collective excitations of systems with even numbers of protons and neutrons.
In this way, excited states can be reproduced by means of the random phase approximation (RPA), possibly also consistently calculating corrections to the ground state (e.g., by means of nuclear field theory).
See also
Nuclear magnetic moment
CHARISSA, a nuclear structure research collaboration
Further reading
General audience
James M. Cork ; Radioactivité & physique nucléaire, Dunod (1949).
Introductory texts
Luc Valentin ; Le monde subatomique - Des quarks aux centrales nucléaires, Hermann (1986).
Luc Valentin ; Noyaux et particules - Modèles et symétries, Hermann (1997).
David Halliday ; Introductory Nuclear Physics, Wiley & Sons (1957).
Kenneth Krane ; Introductory Nuclear Physics, Wiley & Sons (1987).
Carlos Bertulani ; Nuclear Physics in a Nutshell, Princeton University Press (2007).
Fundamental texts
Peter E. Hodgson; Nuclear Reactions and Nuclear Structure. Oxford University Press (1971).
Irving Kaplan; Nuclear Physics, the Addison-Wesley Series in Nuclear Science & Engineering, Addison-Wesley (1956). 2nd edition (1962).
A. Bohr & B. Mottelson; Nuclear Structure, 2 vol., Benjamin (1969–1975). Volume 1: Single Particle Motion; Volume 2: Nuclear Deformations. Reissued by World Scientific Publishing Company (1998).
P. Ring & P. Schuck; The Nuclear Many-Body Problem, Springer Verlag (1980).
A. de Shalit & H. Feshbach; Theoretical Nuclear Physics, 2 vol., John Wiley & Sons (1974). Volume 1: Nuclear Structure; Volume 2: Nuclear Reactions.
References
External links
English
Institut de Physique Nucléaire (IPN), France
Facility for Antiproton and Ion Research (FAIR), Germany
Gesellschaft für Schwerionenforschung (GSI), Germany
Joint Institute for Nuclear Research (JINR), Russia
Argonne National Laboratory (ANL), USA
Riken, Japan
National Superconducting Cyclotron Laboratory, Michigan State University, USA
Facility for Rare Isotope Beams, Michigan State University, USA
French
Institut de Physique Nucléaire (IPN), France
Centre de Spectrométrie Nucléaire et de Spectrométrie de Masse (CSNSM), France
Service de Physique Nucléaire CEA/DAM, France
Institut National de Physique Nucléaire et de Physique des Particules (In2p3), France
Grand Accélérateur National d'Ions Lourds (GANIL), France
Commissariat à l'Energie Atomique (CEA), France
Centre Européen de Recherches Nucléaires, Suisse
The LIVEChart of Nuclides - IAEA
Nuclear physics
Quantum mechanics | Nuclear structure | [
"Physics"
] | 5,378 | [
"Theoretical physics",
"Quantum mechanics",
"Nuclear physics"
] |
8,752,878 | https://en.wikipedia.org/wiki/Bipolar%20disorder%20in%20children | Bipolar disorder in children, or pediatric bipolar disorder (PBD), is a rare mental disorder in children and adolescents. The diagnosis of bipolar disorder in children has been heavily debated for many reasons including the potential harmful effects of adult bipolar medication use for children. PBD is similar to bipolar disorder (BD) in adults, and has been proposed as an explanation for periods of extreme shifts in mood called mood episodes. These shifts alternate between periods of depressed or irritable moods and periods of abnormally elevated moods called manic or hypomanic episodes. Mixed mood episodes can occur when a child or adolescent with PBD experiences depressive and manic symptoms simultaneously. Mood episodes of children and adolescents with PBD are different from general shifts in mood experienced by children and adolescents because mood episodes last for long periods of time (i.e. days, weeks, or years) and cause severe disruptions to an individual's life. There are three known forms of PBD: Bipolar I, Bipolar II, and Bipolar Not Otherwise Specified (NOS). The average age of onset of PBD remains unclear, but reported age of onset ranges from 5 years of age to 19 years of age. PBD is typically more severe and has a poorer prognosis than bipolar disorder with onset in late-adolescence or adulthood.
Since 1980, the Diagnostic and Statistical Manual of Mental Disorders (DSM) has specified that the criteria for bipolar disorder in adults can also be applied to children, with some adjustments based on developmental differences. Genetics and environment are considered risk factors for the development of bipolar disorder, with the exact cause unknown at this time. Therefore, diagnosis of bipolar disorder requires evaluation by a professional, and diagnosis of PBD typically requires more in-depth observation due to children's inability to properly report symptoms.
Causes
While there is limited understanding regarding the development of bipolar disorder, research shows that there are many environmental and biological risk factors. Family history is a strong predictor of childhood development of bipolar disorder, with genetics contributing to risk by up to 50%. With this in mind, it is important to understand that family history does not lead to absolute diagnosis of PBD in the child. Only 6% of children with parents diagnosed with bipolar disorder also have bipolar disorder. Still, children of parents with bipolar disorder should be monitored for possible development of bipolar disorder especially if they exhibit sleep disturbances and symptoms of anxiety disorders early on. Other factors that can contribute to pediatric bipolar disorder include substance use disorder and childhood adversity such as abuse or school trauma.
Diagnosis
Diagnosis is made based on a clinical interview by a licensed mental health professional. There are no blood tests or imaging to diagnose bipolar disorder. Pediatric bipolar disorder can be difficult to diagnose, especially in children under 11–12 years as they may be unable to properly self-assess and communicate any possible symptoms. Therefore, it is helpful to obtain information from multiple sources, such as family members and teachers, and use questionnaires and checklists for a more accurate diagnosis. Commonly used assessment tools include the K-SADS (Kiddie Schedule for Affective Disorders and Schizophrenia), the Diagnostic Interview Schedule for Children (DISC), and the Child Mania Rating Scale (CMRS). It is important to assess the child's baseline mood and behavior and determine if the symptoms present episodically. Often, parents are encouraged to keep mood logs to assist with this. Family history is also important to obtain as bipolar disorder is heritable. Medication, substance use, or other medical problems should be ruled out to appropriately diagnose bipolar disorder.
Early diagnosis is important for children to start treatment soon and leads to better outcomes. Often, anxiety disorders and sleep disturbances precede the mood symptoms of PBD. If a child presents with symptoms of anxiety and changes in sleep pattern with major changes in energy and deterioration of function, especially in school, this may warrant evaluation for PBD.
It can be difficult to distinguish pediatric bipolar disorder due to overlapping symptoms with other conditions such as ADHD, OCD, autism spectrum disorder, depression, anxiety, or conduct disorders. For example, irritability, distractibility, and poor judgment are symptoms commonly seen in pediatric bipolar disorder and ADHD. Elated mood and decreased need for sleep can be specifically diagnostic of PBD.
Signs and symptoms
The American Psychiatric Association's DSM and the World Health Organization's ICD use the same criteria to diagnose bipolar disorder in adults and children, with some adjustments to account for differences in age and developmental stage, particularly with depressive episodes. For example, the DSM-5 specifies that children may exhibit persistently irritable moods instead of a depressed mood. Additionally, children will more than likely fail to meet their expected body weight instead of presenting with weight loss.
In diagnosing manic episodes, it is important to compare the changes in mood and behavior to the child's normal mood and behaviors at baseline instead of to other children or adults. For example, grandiosity (i.e., unrealistic overestimation of one's intelligence, talent, or abilities) is normal at varying degrees during childhood and adolescence. Therefore, grandiosity is only considered symptomatic of mania in children when the beliefs are held despite being presented with concrete evidence otherwise or when they lead to a child attempting activities that are clearly dangerous, and most importantly, when the grandiose beliefs are an obvious change from that particular child's normal self-view in between episodes.
It is important to distinguish whether irritability is related to bipolar disorder or to another condition, as it is common in other childhood disorders. If irritability is persistent, it is important to differentiate it from the chronic irritability seen in disruptive mood dysregulation disorder (DMDD).
In particular, PBD and ADHD have many overlapping symptoms at the surface, such as the hyperactivity characteristic of the manic episodes that occur in PBD. As a result, many children and adolescents with PBD are instead diagnosed with ADHD. Misdiagnosis of PBD can lead to complications in youth and adolescents as different disorders require different types of medications that may make symptoms of PBD more severe.
Manic episodes include
Elevated mood (or increased silliness in children)
Rapid speech that is difficult to interrupt
Decreased need for sleep
Racing thoughts
Increased interests/participation in activities (especially those considered more reckless)
Inflated sense of ability
Depressive episodes include
Frequent and unprovoked sadness
Physical pain (stomach aches, headaches)
Sleeping more
Difficulty concentrating
Worthlessness/hopelessness
Changes in eating habits
Subtypes
According to the DSM-5 there are 3 major categories of bipolar disorder: Bipolar I, Bipolar II, and Bipolar Not Otherwise Specified (NOS). Just as in adults, bipolar I is the most severe form of PBD in children and adolescents, and can impair sleep, general function, and lead to hospitalization. Bipolar NOS is the mildest form of PBD in children and adolescents. The criteria for distinguishing is the same as that of bipolar disorder (BD) in adults.
Controversy
The diagnosis of childhood bipolar disorder has been heavily debated. It is recognized that the typical symptoms of bipolar disorder are dysfunctional and have negative consequences for minors with the condition. The main discussion is centered on whether what is called bipolar disorder in children refers to the same disorder as in adults, and on the related question of whether the criteria for adult diagnosis are useful and accurate when applied to children – more specifically, regarding the symptomatology of mania and how it differs between children and adults.
There are significant differences in how commonly PBD is diagnosed across clinics and in different countries. In the United States, there were concerns about overdiagnosis and misdiagnosis of PBD. More understanding and research led to a decrease in PBD diagnoses from the mid-2000s to 2010. This is likely due to the various challenges that come with identifying bipolar disorder in youth, as PBD has many overlapping symptoms with other childhood conditions.
Management
A combination of medication and psychosocial intervention is recommended for most pediatric populations with PBD and has been proven to lead to improved prognosis. In order to choose the best medication and therapy, it is important to consider the child's age, their psychosocial environment, presentation and severity of symptoms, and their family history.
Medication
Mood stabilizers, which help manage manic episodes, and atypical antipsychotics, which help manage both manic and depressive episodes, have been demonstrated to be the safest and most effective in pediatric populations for the treatment of PBD. Mood stabilizers used for the treatment of PBD include: lithium, valproic acid, divalproex sodium, carbamazepine, and lamotrigine. Lithium is FDA approved for those 12 years and older and appears to be particularly effective in children with a family history of mood disorders, especially if the family members have been successfully treated with lithium. Atypical antipsychotics that have been approved for use by the FDA for treatment of PBD include risperidone, cariprazine, lurasidone, olanzapine-fluoxetine combination, and quetiapine. Risperidone has been approved for use in children 10 and older. Medications have also been proven effective when used in combination, whether that is multiple mood stabilizers or a mood stabilizer with an atypical antipsychotic.
Medications for the treatment of PBD can produce significant side effects, so it is recommended that families of patients be informed of the different possible issues that can arise. Although atypical antipsychotics are more effective in treating PBD than mood stabilizers, they can lead to more side effects. Atypical antipsychotics may produce weight gain as well as other metabolic problems, including diabetes mellitus type 2 and hyperlipidemia. Extrapyramidal secondary effects may occur with the use of these medications, including tardive dyskinesia, a difficult-to-treat movement disorder. Liver and kidney damage may occur as a result of the use of mood stabilizers. Lithium overdose can also occur in individuals with low sodium levels. Pediatric populations often struggle with medication adherence for PBD, which can be improved with motivational interviewing techniques.
Psychotherapy
Psychological treatment for PBD can take on several different forms. One form of psychotherapy is psychoeducation, in which children with bipolar disorder and their families are informed, in ways according to their age and family role, about the different aspects of bipolar disorder and its management, including causes, signs and symptoms, and treatments. Similarly, family-focused therapy (FFT) is therapy for both individuals with PBD and their caregivers, in which families take part in communication improvement training and problem-solving skills training. Group therapy aims to improve social skills and manage group conflicts, with role-playing as a critical tool. Another type of therapy used in individuals with PBD is chronotherapy, which helps children and adolescents form a healthy sleep pattern, as sleep is often disrupted by PBD symptoms. Finally, cognitive-behavioral therapy (CBT) aims to give participants a better understanding of and control over their emotions and behaviors.
Psychotherapy can be tailored to each individual and address needs that medication alone cannot to help improve lifestyle and functionality. Additionally, psychotherapy improves medication adherence.
Alternative treatments are currently being developed for pediatric populations with PBD in which medication and psychotherapy has proven to be ineffective. Currently, interventions involving dialectical behavioral therapy (DBT) are being explored due to the focus on mindfulness and distress tolerance skill building. According to the APA, studies have shown that DBT may lead to decreased suicidal ideation compared to typical psychosocial treatments. Nutritional interventions are also currently undergoing further research along with other lifestyle modifications including exercise and proper sleep habits.
Prognosis
Bipolar disorder is a chronic condition that requires lifelong care and treatment. Without proper treatment, PBD oftentimes has a poor prognosis in children and adolescents. Chronic adherence to medication is often needed, with relapses of individuals reaching rates over 90% in those not following medication indications and almost 40% in those complying with medication regimens in some studies. Other risk factors for poor outcomes of PBD and increased severity of symptoms are comorbid pathologies and early onset of disease.
Children with PBD, especially early onset, are more likely to commit suicide than other children, as well as to misuse alcohol and/or other drugs. Studies have shown that among adolescents with PBD, 44% report a lifetime suicide attempt, twice the rate of teens diagnosed with major depressive disorder. Children and adolescents with PBD are also at an increased risk for behavior that can result in incarceration.
Hypomanic episodes in adolescents have been shown to not always progress into adult bipolar disorder. However, research surrounding PBD emphasizes the importance of early diagnosis of PBD for improved prognosis.
Comorbid conditions
The most common comorbidities seen with PBD are ADHD (80%) and oppositional defiant disorder (47%). Anywhere between 13.2% and 29% of patients with bipolar disorder are diagnosed with conduct disorder, substance use disorders, anxiety disorders, or borderline personality disorder.
Comorbid ADHD can be diagnosed if symptoms such as hyperactivity and distractibility are present persistently. If they are purely related to mood episodes, they are likely a symptom of PBD. Therefore, it is important to carefully evaluate the onset of symptoms and their course over time. While difficulty with sleep can be present in both, patients with bipolar disorder will typically have a decreased need for sleep during manic episodes, while children with ADHD will have sleep problems with increased fatigue. Grandiosity is also a distinguishing factor, as mania typically presents with increased self-esteem, whereas children with ADHD may actually have lower self-esteem.
Epidemiology
Globally, the prevalence of PBD in children and adolescents under the age of 18 is estimated at 3.9% as of 2019. However, 5 surveys (from Brazil, England, Turkey, and the United States) have reported pre-adolescence rates of PBD as zero or close to zero.
History
Descriptions of children with symptoms similar to contemporary concepts of mania date back to the 18th century. In 1898, a detailed psychiatric case history was published about a 13-year-old that met Jean-Pierre Falret and Jules Baillarger's criteria for folie circulaire, which is congruent to the modern conception of bipolar I disorder.
In Emil Kraepelin's descriptions of bipolar disorder in the 1920s, which he called "manic depressive insanity", he noted the rare possibility that it could occur in children. In addition to Kraepelin, Adolf Meyer, Karl Abraham, and Melanie Klein were some of the first to document bipolar disorder symptoms in children in the first half of the 20th century. It was not mentioned much in English literature until the 1970s when interest in researching the subject increased. It became more accepted as a diagnosis in children in the 1980s after the DSM-III (1980) specified that the same criteria for diagnosing bipolar disorder in adults could also be applied to children.
Recognition came twenty years later, with epidemiological studies showing that approximately 20% of adults with bipolar disorder already had symptoms in childhood or adolescence. Nevertheless, onset before age 10 was thought to be rare, below 0.5% of the cases. During the second half of the century, misdiagnosis as schizophrenia was not rare in the non-adult population due to the common co-occurrence of psychosis and mania, an issue that diminished with increased adherence to the DSM criteria in the last part of the 20th century.
References
External links
International Society for Bipolar Disorders Task Force report on current knowledge in pediatric bipolar disorder and future directions
Bipolar disorder
Depression (mood)
Mental disorders
Mental disorders diagnosed in childhood
Mood disorders
Psychiatry controversies | Bipolar disorder in children | [
"Biology"
] | 3,278 | [
"Mental disorders",
"Behavior",
"Human behavior"
] |
8,753,589 | https://en.wikipedia.org/wiki/Steam%20digester | The steam digester or bone digester (also known as Papin’s digester) is a high-pressure cooker invented by French physicist Denis Papin in 1679. It is a device for extracting fats from bones in a high-pressure steam environment, which also renders them brittle enough to be easily ground into bone meal. It is the forerunner of the autoclave and the domestic pressure cooker.
The steam-release valve, which was invented for Papin's digester following various explosions of the earlier models, inspired the development of the piston-and-cylinder steam engine.
History
The artificial vacuum was first produced in 1643 by Italian scientist Evangelista Torricelli and further developed by German scientist Otto von Guericke with his Magdeburg hemispheres. Guericke's demonstration was documented by Gaspar Schott in a book that was read by Robert Boyle. Boyle and his assistant Robert Hooke improved Guericke's air pump design and built their own. From this, through various experiments, they formulated what is called Boyle's law, which states that the volume of a body of an ideal gas is inversely proportional to its pressure. Later, Jacques Charles formulated Charles's law, which states that the volume of a gas at a constant pressure is proportional to its temperature. Boyle's and Charles's laws were combined into the ideal gas law.
Based on these concepts, in 1679 Boyle's associate Denis Papin built a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived the idea of a piston and cylinder engine. He did not, however, follow through with his design. In 1697, independently of Papin's designs, engineer Thomas Savery built the world's first steam engine. By 1712 an improved design based on Papin's ideas was developed by Thomas Newcomen.
Boyle speaks of Papin as having gone to England in the hope of finding a place in which he could satisfactorily pursue his favorite studies. Boyle himself had already been long engaged in the study of pneumatics, and had been especially interested in the investigations which had been original with Guericke. He admitted young Papin into his laboratory, and the two philosophers worked together at these attractive problems.
He probably invented his "Digester" while in England, and it was first described in a brochure written in English, under the title, "The New Digester." It was subsequently published in Paris.
This was a vessel with a safety valve, which can be tightly closed by a screw and a lid. Food is cooked along with water in the vessel; when the vessel is heated, its internal temperature can be raised by as much as the pressure inside will safely permit. The maximum pressure is limited by a weight placed on the safety valve lever. If the pressure exceeds this limit, the safety valve is forced open and steam escapes until the pressure drops enough for the weight to close the valve again.
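The relief pressure set by such a weight follows from dividing the weight's force by the valve area. A small illustration with assumed numbers; the mass, valve diameter, and direct-acting geometry are all hypothetical, and Papin's actual digester used a lever, which multiplies the force:

```python
import math

# Illustrative relief-pressure estimate for a dead weight resting
# directly on the valve (assumed numbers, not historical data).
g = 9.81                        # gravitational acceleration, m/s^2
weight_kg = 5.0                 # mass resting on the valve (assumed)
valve_diameter_m = 0.01         # diameter of the valve opening (assumed)

area_m2 = math.pi * (valve_diameter_m / 2.0) ** 2
relief_pressure_pa = weight_kg * g / area_m2   # gauge pressure that lifts the weight
print(f"Valve lifts at roughly {relief_pressure_pa / 1e5:.1f} bar above atmospheric")
```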
It is probable that this essential attachment to the steam boiler had previously been used for other purposes, but Papin is given the credit for having first made use of it to control the pressure of steam. In 1787, Antoine Lavoisier, in his Elements of Chemistry, refers to "Papin's digester" as an example of an environment where high pressure prevents evaporation, explaining that the pressure built up by the evaporated fluid prevents further evaporation.
See also
Steam engine
History of thermodynamics
References
External links
Papin's Digester - Good Quality Image
Robert Boyle - has drawing of Papin's digester
French inventions
Steam power
Thermodynamics | Steam digester | [
"Physics",
"Chemistry",
"Mathematics"
] | 795 | [
"Physical quantities",
"Steam power",
"Power (physics)",
"Thermodynamics",
"Dynamical systems"
] |
8,753,939 | https://en.wikipedia.org/wiki/GF%20method | The GF method, sometimes referred to as FG method, is a classical mechanical method introduced by Edgar Bright Wilson to obtain certain internal coordinates for a vibrating semi-rigid molecule, the so-called normal coordinates Qk. Normal coordinates decouple the classical vibrational motions of the molecule and thus give an easy route to obtaining vibrational amplitudes of the atoms as a function of time. In Wilson's GF method it is assumed that the molecular kinetic energy consists only of harmonic vibrations of the atoms, i.e., overall rotational and translational energy is ignored. Normal coordinates appear also in a quantum mechanical description of the vibrational motions of the molecule and the Coriolis coupling between rotations and vibrations.
It follows from application of the Eckart conditions that the matrix G−1 gives the kinetic energy in terms of arbitrary linear internal coordinates, while F represents the (harmonic) potential energy in terms of these coordinates. The GF method gives the linear transformation from general internal coordinates to the special set of normal coordinates.
The GF method
A non-linear molecule consisting of N atoms has 3N − 6 internal degrees of freedom, because positioning a molecule in three-dimensional space requires three degrees of freedom, and the description of its orientation in space requires another three degrees of freedom. These degrees of freedom must be subtracted from the 3N degrees of freedom of a system of N particles.
The interaction among atoms in a molecule is described by a potential energy surface (PES), which is a function of 3N − 6 coordinates. The internal degrees of freedom s1, ..., s3N−6 describing the PES in an optimal way are often non-linear; they are for instance valence coordinates, such as bending and torsion angles and bond stretches. It is possible to write the quantum mechanical kinetic energy operator for such curvilinear coordinates, but it is hard to formulate a general theory applicable to any molecule. This is why Wilson linearized the internal coordinates by assuming small displacements. The linearized version of the internal coordinate st is denoted by St.
The PES V can be Taylor expanded around its minimum in terms of the St. The third term (the Hessian of V) evaluated in the minimum is a force derivative matrix F. In the harmonic approximation the Taylor series is ended after this term. The second term, containing first derivatives, is zero because it is evaluated in the minimum of V. The first term can be included in the zero of energy.
Thus,
$$ 2V \approx \sum_{s,t=1}^{3N-6} F_{st}\, S_s\, S_t . $$
The classical vibrational kinetic energy has the form:
$$ 2T = \sum_{s,t=1}^{3N-6} g_{st}(\mathbf{s})\, \dot{s}_s\, \dot{s}_t , $$
where $g_{st}$ is an element of the metric tensor of the internal (curvilinear) coordinates. The dots indicate time derivatives. Mixed terms generally present in curvilinear coordinates are not present here, because only linear coordinate transformations are used. Evaluation of the metric tensor $\mathbf{g}$ in the minimum $\mathbf{s}^0$ of $V$ gives the positive definite and symmetric matrix $\mathbf{G} = \mathbf{g}(\mathbf{s}^0)^{-1}$.
One can solve the two matrix problems
$$ \mathbf{L}^{\mathsf{T}} \mathbf{F}\, \mathbf{L} = \boldsymbol{\Phi} \quad\text{and}\quad \mathbf{L}^{\mathsf{T}} \mathbf{G}^{-1} \mathbf{L} = \mathbf{E} $$
simultaneously, since they are equivalent to the generalized eigenvalue problem
$$ \mathbf{G}\, \mathbf{F}\, \mathbf{L} = \mathbf{L}\, \boldsymbol{\Phi} , $$
where $\boldsymbol{\Phi} = \operatorname{diag}(f_1, \ldots, f_{3N-6})$ with $f_i = 4\pi^2 \nu_i^2$ ($\nu_i$ is the frequency of normal mode $i$), and $\mathbf{E}$ is the unit matrix. The matrix $\mathbf{L}^{-1}$ contains the normal coordinates $Q_k$ in its rows:
$$ Q_k = \sum_{t=1}^{3N-6} (\mathbf{L}^{-1})_{kt}\, S_t , \qquad k = 1, \ldots, 3N-6 . $$
Because of the form of the generalized eigenvalue problem, the method is called the GF method,
often with the name of its originator attached to it: Wilson's GF method. By matrix transposition on both sides of the equation and using the fact that both $\mathbf{G}$ and $\mathbf{F}$ are symmetric matrices, as are diagonal matrices, one can recast this equation into a very similar one for $\mathbf{F}\mathbf{G}$. This is why the method is also referred to as Wilson's FG method.
We introduce the vectors
$$ \mathbf{s} = \operatorname{col}(S_1, \ldots, S_{3N-6}) \quad\text{and}\quad \mathbf{Q} = \operatorname{col}(Q_1, \ldots, Q_{3N-6}) , $$
which satisfy the relation
$$ \mathbf{s} = \mathbf{L}\, \mathbf{Q} . $$
Upon use of the results of the generalized eigenvalue equation, the energy $E = T + V$ (in the harmonic approximation) of the molecule becomes:
$$ 2E = \dot{\mathbf{s}}^{\mathsf{T}} \mathbf{G}^{-1} \dot{\mathbf{s}} + \mathbf{s}^{\mathsf{T}} \mathbf{F}\, \mathbf{s} = \dot{\mathbf{Q}}^{\mathsf{T}} \dot{\mathbf{Q}} + \mathbf{Q}^{\mathsf{T}} \boldsymbol{\Phi}\, \mathbf{Q} = \sum_{t=1}^{3N-6} \left( \dot{Q}_t^2 + f_t\, Q_t^2 \right) . $$
The Lagrangian $L = T - V$ is
$$ L = \tfrac{1}{2} \sum_{t=1}^{3N-6} \left( \dot{Q}_t^2 - f_t\, Q_t^2 \right) . $$
The corresponding Lagrange equations are identical to the Newton equations
$$ \ddot{Q}_t + f_t\, Q_t = 0 $$
for a set of uncoupled harmonic oscillators. These ordinary second-order differential equations are easily solved, yielding $Q_t$ as a function of time; see the article on harmonic oscillators.
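Numerically, the generalized eigenvalue problem above can be solved with standard linear algebra routines. A minimal sketch, assuming $\mathbf{G}$ and $\mathbf{F}$ are already available in a common internal-coordinate basis; the 2×2 matrices are placeholders, not data for a real molecule:

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder G (positive definite) and F (symmetric) matrices in a
# two-dimensional internal-coordinate basis; not a real molecule.
G = np.array([[1.0, 0.1],
              [0.1, 2.0]])
F = np.array([[0.5, 0.05],
              [0.05, 0.8]])

# Solve F L = G^{-1} L Phi, which is equivalent to G F L = L Phi.
# eigh normalizes the eigenvectors so that L^T G^{-1} L = E.
f, L = eigh(F, np.linalg.inv(G))

nu = np.sqrt(f) / (2.0 * np.pi)   # from f_i = 4 pi^2 nu_i^2
Linv = np.linalg.inv(L)           # rows give the normal coordinates: Q = L^{-1} s
print(f, nu)
```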
Normal coordinates in terms of Cartesian displacement coordinates
Often the normal coordinates are expressed as linear combinations of Cartesian displacement coordinates.
Let $\mathbf{R}_A$ be the position vector of nucleus $A$ and $\mathbf{R}_A^0$ the corresponding equilibrium position. Then
$$ \mathbf{d}_A \equiv \mathbf{R}_A - \mathbf{R}_A^0 $$
is by definition the Cartesian displacement coordinate of nucleus $A$.
Wilson's linearizing of the internal curvilinear coordinates $s_t$ expresses the coordinate $S_t$ in terms of the displacement coordinates
$$ S_t = \sum_{A=1}^{N} \mathbf{s}_{At} \cdot \mathbf{d}_A , \qquad t = 1, \ldots, 3N-6 , $$
where $\mathbf{s}_{At}$ is known as a Wilson s-vector. If we put the $\mathbf{s}_{At}$ into a $(3N-6) \times 3N$ matrix $\mathbf{B}$, this equation becomes in matrix language
$$ \mathbf{s} = \mathbf{B}\, \mathbf{d} . $$
The actual form of the matrix elements of $\mathbf{B}$ can be fairly complicated. Especially for a torsion angle, which involves 4 atoms, it requires tedious vector algebra to derive the corresponding values of the $\mathbf{s}_{At}$. For more details on this method, known as the Wilson s-vector method, see the book by Wilson et al., or molecular vibration. Now,
$$ \mathbf{s} = \mathbf{L}\, \mathbf{Q} = \mathbf{B}\, \mathbf{d} , $$
which can be inverted and put in summation language:
$$ Q_k = \sum_{i=1}^{3N} D_{ki}\, d_i , \qquad k = 1, \ldots, 3N-6 . $$
Here $\mathbf{D} = \mathbf{L}^{-1}\mathbf{B}$ is a $(3N-6) \times 3N$ matrix, which is given by (i) the linearization of the internal coordinates $\mathbf{s}$ (an algebraic process) and (ii) solution of Wilson's GF equations (a numeric process).
Matrices involved in the analysis
There are several related coordinate systems commonly used in the GF matrix analysis. These quantities are related by a variety of matrices. For clarity, we provide the coordinate systems and their interrelations here.
The relevant coordinates are:
Cartesian coordinates for each atom
Internal coordinates for each atom
Mass-weighted Cartesian coordinates
Normal coordinates
These different coordinate systems are related to one another by:
$\mathbf{s} = \mathbf{B}\,\mathbf{d}$, i.e. the matrix $\mathbf{B}$ transforms the Cartesian displacement coordinates to (linearized) internal coordinates.
$\mathbf{q} = \mathbf{M}^{1/2}\,\mathbf{d}$, i.e. the diagonal mass matrix $\mathbf{M}$ transforms Cartesian coordinates to mass-weighted Cartesian coordinates.
A further matrix transforms the normal coordinates to mass-weighted internal coordinates.
$\mathbf{s} = \mathbf{L}\,\mathbf{Q}$, i.e. the matrix $\mathbf{L}$ transforms the normal coordinates to internal coordinates.
Note the useful relationship $\mathbf{G} = \mathbf{L}\,\mathbf{L}^{\mathsf{T}}$, which follows from $\mathbf{L}^{\mathsf{T}}\mathbf{G}^{-1}\mathbf{L} = \mathbf{E}$.
These matrices allow one to construct the $\mathbf{G}$ matrix quite simply as
$$ \mathbf{G} = \mathbf{B}\,\mathbf{M}^{-1}\,\mathbf{B}^{\mathsf{T}} , $$
where $\mathbf{M}$ is the $3N \times 3N$ diagonal matrix of atomic masses.
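A small sketch of this construction and of the resulting $\mathbf{D}$ matrix, for a hypothetical three-atom case (so $3N = 9$ and $3N - 6 = 3$); the $\mathbf{B}$ matrix, force constants, and masses are placeholders, not data for a real molecule:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical three-atom molecule: 3N = 9 Cartesian components and
# 3N - 6 = 3 internal coordinates. B, F, and the masses are placeholders.
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 9))                  # (3N-6) x 3N Wilson B matrix
F = np.eye(3)                                # placeholder force-constant matrix
masses = np.repeat([1.0, 16.0, 1.0], 3)      # one entry per Cartesian component

M_inv = np.diag(1.0 / masses)
G = B @ M_inv @ B.T                          # G = B M^{-1} B^T

f, L = eigh(F, np.linalg.inv(G))             # G F L = L Phi
D = np.linalg.inv(L) @ B                     # Q = D d, a (3N-6) x 3N matrix
print(G.shape, D.shape)                      # (3, 3) and (3, 9)
```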
Relation to Eckart conditions
From the invariance of the internal coordinates $S_t$ under overall rotation and translation of the molecule, the same follows for the linearized coordinates $\mathbf{s}_{At}$. It can be shown that this implies that the following 6 conditions are satisfied by the internal coordinates:
$$ \sum_{A=1}^{N} \mathbf{s}_{At} = \mathbf{0} \quad\text{and}\quad \sum_{A=1}^{N} \mathbf{R}_A^0 \times \mathbf{s}_{At} = \mathbf{0} , \qquad t = 1, \ldots, 3N-6 . $$
These conditions follow from the Eckart conditions that hold for the displacement vectors:
$$ \sum_{A=1}^{N} m_A\, \mathbf{d}_A = \mathbf{0} \quad\text{and}\quad \sum_{A=1}^{N} m_A\, \mathbf{R}_A^0 \times \mathbf{d}_A = \mathbf{0} . $$
References
Further references
Spectroscopy
Molecular physics
Quantum chemistry | GF method | [
"Physics",
"Chemistry"
] | 1,367 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Quantum chemistry",
"Instrumental analysis",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
8,754,508 | https://en.wikipedia.org/wiki/Tefilat%20HaDerech | Tefilat HaDerech () or the Traveler's Prayer or Wayfarer's Prayer in English, is a prayer for a safe journey recited by Jews when they travel by air, sea, or even on long car trips. It is recited at the onset of every journey, preferably standing, but this is not necessary. It is often inscribed onto hamsas, which sometimes contain the Sh'ma or Birkat HaBayit prayer instead.
Text
"May it be Your will, Lord, our God and the God of our ancestors, that You lead us toward peace, guide our footsteps toward peace, that we are supported in peace, and make us reach our desired destination for life, gladness, and peace. May You rescue us from the hand of every foe and ambush, from robbers and wild beasts on the trip, and from all manner of punishments that assemble to come to earth. May You send blessing in our handiwork, and grant us grace, kindness, and mercy in Your eyes and in the eyes of all who see us. May You hear the sound of our humble request because You are God Who hears prayer requests. Blessed are You, Lord, Who hears prayer."
Origins
A variant of the prayer can be found in the Babylonian Talmud (Berachot 29b-30a). The Bavli version is written for the protection of a single individual, but the sage Abaye counseled merging one's individual need with that of the community. The modern text is accordingly written in the plural; some, however, hold that "and give me favour, generosity and mercy in Your eyes and the eyes of all who see me" should be said in the singular.
"One who travels must recite tefilat haderech. What is tefilat haderech? "May it be Your will, my Gd, that You lead me towards peace, direct my steps toward peace, support me toward peace, and rescue me from the hand of any enemy or ambush on the way, and send blessing upon my handiwork, and give me favour, generosity and mercy in Your eyes and the eyes of all who see me. You are blessed, Gd, who listens to prayer." Abaye said: One should always merge himself with the community."
Laws
From Kitzur Shulchan Aruch 68:1
Tefilat HaDerech - the traveler's prayer - cannot be said before one has left the city limits, defined as 70⅔ amot (~35 meters / ~0.02 miles) past the last house.
Preferably it should be said one "Miel" (~1 km / ~0.6 miles) from the city limit.
When overnighting on a multi-day trip, one says Tefilat HaDerech before leaving for the day.
Media
YouTube Video - IDF Soldiers recite prayer for a safe journey (Tefilat HaDerech) in their tank.
In popular culture
An excerpted version of the prayer is recited in Series 1, Episode 1 of the series Away.
Character Marissa Gold recites the prayer at the end of Season 6, Episode 4, of the series The Good Fight.
References
Jewish prayer and ritual texts
Travel
Hebrew words and phrases in Jewish prayers and blessings | Tefilat HaDerech | [
"Physics"
] | 674 | [
"Physical systems",
"Transport",
"Travel"
] |
8,754,562 | https://en.wikipedia.org/wiki/Wide%20Area%20GPS%20Enhancement | Wide Area GPS Enhancement (WAGE) is a method to increase the horizontal accuracy of the GPS encrypted P(Y) Code by adding additional range correction data to the satellite broadcast navigation message.
Per a 1997 article, the navigation message for each satellite is updated once daily or as needed. This daily update of each satellite navigation message contains the range corrections for all the satellites in the constellation. Thus, more timely range correction information would be available for each satellite, resulting in increased horizontal accuracy. Potential improvements to the system include simplifying the upload procedure, uploading the data more often, and adding more monitor stations for better range correction.
WAGE is available only to Precise Positioning Service (PPS) or P(Y) Code receivers. It requires at least 12.5 minutes to obtain the most recent WAGE data. After that, the process of using the correction data is automatic and transparent to the operator. Any time the receiver is on, it continually collects WAGE data (whether the WAGE mode is on or off). The receiver always uses the most recent WAGE data available to calculate position, and it will not use data that is more than 6 hours old.
A 1996 evaluation using a PLGR (a 5-channel L2 GPS receiver) found no clear advantage to using WAGE in its then-current configuration. Its overall average error of 9.1 meters was worse than when WAGE was not used.
However, the specifications for the Defense Advanced GPS Receiver, which has replaced the PLGR, list its WAGE accuracy as better than 4.82 m (95%, horizontal). PPS accuracy has since improved beyond the WAGE specification, and the accuracy improvement from WAGE is now negligible. Modern receivers and chip-scale atomic clocks also outperform WAGE. Some theorize that the restrictions imposed by WAGE may cost C/A, P(Y), and WAGE users more precision than WAGE provides to its own users.
The capability of WAGE has been superseded by Talon NAMATH.
There is a push for WAGE users to upgrade to Talon NAMATH or to move to using P(Y) alone. This could lift WAGE restrictions and allow accuracy improvements for all users.
References
Sources
Talon NAMATH, Link 16, ZOAD, SBIR, and Other Code Words
Wide Area GPS Enhancement (WAGE) Evaluation
Orbit Determination and Satellite Navigation Crosslink Summer 2002.
Global Positioning System
Satellite-based augmentation systems | Wide Area GPS Enhancement | [
"Technology",
"Engineering"
] | 487 | [
"Global Positioning System",
"Aerospace engineering",
"Wireless locating",
"Aircraft instruments"
] |
8,755,061 | https://en.wikipedia.org/wiki/Landmark%20point | In morphometrics, landmark point or shortly landmark is a point in a shape object in which correspondences between and within the populations of the object are preserved. In other disciplines, landmarks may be known as vertices, anchor points, control points, sites, profile points, 'sampling' points, nodes, markers, fiducial markers, etc. Landmarks can be defined either manually by experts or automatically by a computer program. There are three basic types of landmarks: anatomical landmarks, mathematical landmarks or pseudo-landmarks.
An anatomical landmark is a biologically meaningful point in an organism. Experts usually define anatomical landmarks to ensure that they correspond within the same species. Examples of anatomical landmarks in the shape of a skull are the eye corner, the tip of the nose, and the jaw. Anatomical landmarks determine homologous parts of an organism, which share a common ancestry.
Mathematical landmarks are points in a shape that are located according to some mathematical or geometrical property, for instance, a point of high curvature or an extreme point. They are usually determined automatically by a computer program and are used in automatic pattern recognition.
Pseudo-landmarks are constructed points located between anatomical or mathematical landmarks. A typical example is an equally spaced set of points between two anatomical landmarks to get more sample points from a shape. Pseudo-landmarks are useful during shape matching, when the matching process requires a large number of points.
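As a minimal sketch of that construction (the landmark coordinates below are made-up values for illustration), equally spaced pseudo-landmarks can be generated by linear interpolation between two anatomical landmarks:

```python
import numpy as np

def pseudo_landmarks(p, q, n):
    """Return n equally spaced points strictly between landmarks p and q."""
    t = np.linspace(0.0, 1.0, n + 2)[1:-1]        # drop the endpoints themselves
    return np.outer(1.0 - t, p) + np.outer(t, q)  # linear interpolation

eye_corner = np.array([12.0, 47.0])   # assumed anatomical landmark (x, y)
nose_tip   = np.array([30.0, 15.0])   # assumed anatomical landmark (x, y)
print(pseudo_landmarks(eye_corner, nose_tip, 3))  # 3 pseudo-landmarks
```

In practice the interpolation is often done along the object's outline rather than a straight line, but the straight-line version conveys the idea.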
See also
Statistical shape analysis
References
Computer vision | Landmark point | [
"Engineering"
] | 284 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
8,756,160 | https://en.wikipedia.org/wiki/Westphalen%E2%80%93Lettr%C3%A9%20rearrangement | The Westphalen–Lettré rearrangement is a classic organic reaction in organic chemistry describing a rearrangement reaction of cholestane-3β,5α,6β-triol diacetate with acetic anhydride and sulfuric acid. In this reaction one equivalent of water is lost, a double bond is formed at C10–C11 and importantly the methyl group at the C10 position migrates to the C5 position.
The reaction is first-order in steroid in the presence of an excess of sulfuric acid and the first reaction step in the reaction mechanism is likely the formation of a sulfate ester followed by that of a carbocation at C5 after which the actual re-arrangement takes place.
References
Rearrangement reactions
Name reactions | Westphalen–Lettré rearrangement | [
"Chemistry"
] | 162 | [
"Name reactions",
"Rearrangement reactions",
"Organic reactions"
] |
8,756,738 | https://en.wikipedia.org/wiki/Resource%20leveling | In project management, resource leveling is defined by A Guide to the Project Management Body of Knowledge (PMBOK Guide) as "A technique in which start and finish dates are adjusted based on resource limitation with the goal of balancing demand for resources with the available supply." Resource leveling problem could be formulated as an optimization problem. The problem could be solved by different optimization algorithms such as exact algorithms or meta-heuristic methods.
When performing project planning activities, the manager will attempt to schedule certain tasks simultaneously. When more resources such as machines or people are needed than are available, or perhaps a specific person is needed in both tasks, the tasks will have to be rescheduled concurrently or even sequentially to manage the constraint. Project planning resource leveling is the process of resolving these conflicts. It can also be used to balance the workload of primary resources over the course of the project[s], usually at the expense of one of the traditional triple constraints (time, cost, scope).
When using specially designed project software, leveling typically means resolving conflicts or over allocations in the project plan by allowing the software to calculate delays and update tasks automatically. Project management software leveling requires delaying tasks until resources are available. In more complex environments, resources could be allocated across multiple, concurrent projects thus requiring the process of resource leveling to be performed at company level.
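The delay-until-available behavior described above can be sketched in a few lines; the capacity, task data, and priority order here are assumptions for illustration, not how any particular package implements it:

```python
CAPACITY = 2  # units of the resource available per day (assumed)

tasks = [               # (name, duration in days, resource demand)
    ("A", 3, 2),
    ("B", 2, 1),
    ("C", 4, 2),
]

usage = {}              # day -> units already committed

def fits(start, duration, demand):
    """Check the task fits under capacity in every day it would occupy."""
    return all(usage.get(d, 0) + demand <= CAPACITY
               for d in range(start, start + duration))

schedule = {}
for name, duration, demand in tasks:    # tasks taken in priority order
    start = 0
    while not fits(start, duration, demand):
        start += 1                      # delay until resources free up
    for d in range(start, start + duration):
        usage[d] = usage.get(d, 0) + demand
    schedule[name] = (start, start + duration)

print(schedule)  # {'A': (0, 3), 'B': (3, 5), 'C': (5, 9)}
```

Note that task C finishes on day 9 rather than day 4: leveling trades schedule length for a feasible resource profile, which is why it can push out the project finish date when the delayed tasks lie on the critical path.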
In either definition, leveling could result in a later project finish date if the tasks affected are in the critical path.
Resource leveling is also useful in the world of maintenance management. Many organizations have maintenance backlogs. These backlogs consist of work orders. In a "planned state" these work orders have estimates such as 2 electricians for 8 hours. These work orders have other attributes such as report date, priority, asset operational requirements, and safety concerns. These same organizations have a need to create weekly schedules. Resource-leveling can take the "work demand" and balance it against the resource pool availability for the given week. The goal is to create this weekly schedule in advance of performing the work. Without resource-leveling the organization (planner, scheduler, supervisor) is most likely performing subjective selection. For the most part, when it comes to maintenance scheduling, there is less, if any, task interdependence, and therefore less need to calculate critical path and total float.
See also
Resource allocation
References
External links
Project Management for Construction, by Chris Hendrickson
Resource-Constrained Project Scheduling: Past Work and New Directions, by Bibo Yang, Joseph Geunes, William J. O'Brien
Petri Nets for Project Management and Resource Levelling, by V. A. Jeetendra, O. V. Krishnaiah Chetty, J. Prashanth Reddy
Schedule (project management) | Resource leveling | [
"Physics"
] | 569 | [
"Spacetime",
"Physical quantities",
"Time",
"Schedule (project management)"
] |