| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
171,323 | https://en.wikipedia.org/wiki/Bandwagon%20effect | The bandwagon effect is a psychological phenomenon where people adopt certain behaviors, styles, or attitudes simply because others are doing so. More specifically, it is a cognitive bias by which public opinion or behaviours can alter due to particular actions and beliefs rallying amongst the public. It is a psychological phenomenon whereby the rate of uptake of beliefs, ideas, fads and trends increases with respect to the proportion of others who have already done so. As more people come to believe in something, others also "hop on the bandwagon" regardless of the underlying evidence.
Following others' actions or beliefs can occur because of conformism or because people derive information from others. Much of the influence of the bandwagon effect comes from the desire to 'fit in' with peers; making selections similar to other people's is seen as a way to gain access to a particular social group. An example of this is fashion trends, wherein the increasing popularity of a certain garment or style encourages more acceptance. When individuals make rational choices based on the information they receive from others, economists have proposed that information cascades can quickly form in which people ignore their personal information signals and follow the behaviour of others. Cascades explain why behaviour is fragile: people understand that their behaviour is based on a very limited amount of information. As a result, fads form easily but are also easily dislodged. The phenomenon is observed in various fields, such as economics, political science, medicine, and psychology. In social psychology, people's tendency to align their beliefs and behaviors with a group is known as 'herd mentality' or 'groupthink'. The reverse bandwagon effect (also known as the snob effect in certain contexts) is a cognitive bias that causes people to avoid doing something because they believe that other people are doing it.
Origin
The phenomenon where ideas become adopted as a result of their popularity has been apparent for some time. However, the metaphorical use of the term bandwagon in reference to this phenomenon began in 1848. A literal "bandwagon" is a wagon that carries a musical ensemble, or band, during a parade, circus, or other entertainment event.
The phrase "jump on the bandwagon" first appeared in American politics in 1848 during the presidential campaign of Zachary Taylor. Dan Rice, a famous and popular circus clown of the time, invited Taylor to join his circus bandwagon. As Taylor gained more recognition and his campaign became more successful, people began saying that Taylor's political opponents ought to "jump on the bandwagon" themselves if they wanted to be associated with such success.
Later, during the time of William Jennings Bryan's 1900 presidential campaign, bandwagons had become standard in campaigns, and the phrase "jump on the bandwagon" was used as a derogatory term, implying that people were associating themselves with success without considering that with which they associated themselves.
Despite its emergence in the late 19th century, it was only rather recently that the theoretical background of bandwagon effects has been understood. One of the best-known experiments on the topic is the 1950s' Asch conformity experiment, which illustrates the individual variation in the bandwagon effect. Academic study of the bandwagon effect especially gained interest in the 1980s, as scholars studied the effect of public opinion polls on voter opinions.
Causes and factors
Individuals are highly influenced by the pressure and norms exerted by groups. As an idea or belief increases in popularity, people are more likely to adopt it; when seemingly everyone is doing something, there is an incredible pressure to conform. Individuals' impressions of public opinion or preference can originate from several sources.
Some individual reasons behind the bandwagon effect include:
Efficiency — Bandwagoning serves as a mental shortcut, or heuristic, allowing for decisions to be made quickly. It takes time for an individual to evaluate a behaviour or thought and decide upon it.
Normative social influence (belonging) — People have the tendency to conform with others out of a desire to fit in with the crowd and gain approval from others. As conformity ensures some level of social inclusion and acceptance, many people go along with the behaviours and/or ideas of their group in order to avoid being the odd one out. The 'spiral of silence' exemplifies this factor.
Informational social influence — People tend to conform with others out of a desire to be right, under the assumption that others may know something or may understand the situation better. In other words, people will support popular beliefs because they are seen as correct by the larger social group (the 'majority'). Moreover, when it seems as though the majority is doing a certain thing, not doing that thing becomes increasingly difficult. When individuals make rational choices based on the information they receive from others, economists have proposed that information cascades can quickly form in which people decide to ignore their personal information signals and follow the behaviour of others (see the sketch after this list).
Fear of missing out — People who are anxious about 'missing out' on things that others are doing may be susceptible to the bandwagon effect.
Being on the 'winning side' — The desire to support a "winner" (or avoid supporting a "loser") can be what makes some susceptible to the bandwagon effect, such as in the case of voting for a candidate because they're in the lead.
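To make the informational mechanism concrete, the following minimal Python sketch simulates the classic sequential-choice setting economists use to explain cascades: each agent receives a noisy private signal about which of two options is better, observes every earlier choice, and follows the majority of what they have seen, breaking ties with their own signal. The model and its parameters are illustrative only, not taken from any particular study.

```python
import random

random.seed(1)

def run_cascade(n_agents=20, signal_accuracy=0.7):
    """Each agent receives a private signal that is correct with probability
    `signal_accuracy`, observes all previous choices, and picks the option
    favoured by (observed choices + own signal), breaking ties with the signal."""
    true_best = 1
    choices = []
    for _ in range(n_agents):
        signal = true_best if random.random() < signal_accuracy else 1 - true_best
        votes_for_1 = sum(choices) + signal
        votes_for_0 = len(choices) + 1 - votes_for_1
        if votes_for_1 > votes_for_0:
            choices.append(1)
        elif votes_for_0 > votes_for_1:
            choices.append(0)
        else:
            choices.append(signal)
    return choices

print(run_cascade())
# Once early choices tip the observed count far enough, later agents' private
# signals can no longer change the majority they see: a cascade has formed, and
# it rests on very little information, which is why such fads are fragile.
```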
In politics, bandwagon effects can also come as a result of indirect processes that are mediated by political actors. Perceptions of popular support may affect the choices of activists about which parties or candidates to support with donations or voluntary work in campaigns.
Spread
The bandwagon effect works through a self-reinforcing mechanism and can spread quickly and on a large scale through a positive feedback loop, whereby the more people who are affected by it, the more likely other people are to be affected by it too.
A new concept that is originally promoted by only a single advocate or a minimal group of advocates can quickly grow and become widely popular, even when sufficient supporting evidence is lacking. The new concept gains a small following, which grows until it reaches a critical mass, for example when it begins being covered by mainstream media, at which point a large-scale bandwagon effect begins that causes still more people to support the concept, in increasingly large numbers. This can be seen as a result of an availability cascade, a self-reinforcing process through which a certain belief gains increasing prominence in public discourse.
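The dynamics of this feedback loop can be illustrated with a toy simulation. The Python sketch below uses a generic adoption model with made-up parameters, not a model taken from the literature on availability cascades.

```python
import random

random.seed(0)

def spread(population=1000, base_rate=0.005, social_weight=0.5, steps=30):
    """Each step, every non-adopter adopts with probability
    base_rate + social_weight * (current share of adopters), so the more
    people who are affected, the more likely others are to follow."""
    adopters = 1
    history = [adopters]
    for _ in range(steps):
        p = base_rate + social_weight * adopters / population
        adopters += sum(random.random() < p for _ in range(population - adopters))
        history.append(adopters)
    return history

print(spread()[::5])  # slow start, then rapid uptake once a critical mass is reached
```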
Real-world examples
In politics
The bandwagon effect can take place in voting: it occurs on an individual scale when a voter's preference is altered by the rising popularity of a candidate or a policy position. The aim of the change in preference is for the voter to end up picking the "winner's side". Voters are more readily persuaded to do so in elections that are non-private or when the vote is highly publicised.
The bandwagon effect has been applied to situations involving majority opinion, such as political outcomes, where people alter their opinions to match the majority view. Such a shift in opinion can occur because individuals draw inferences from the decisions of others, as in an informational cascade.
Such perceptions of popular support also shape the behaviour of activists, who may strategically funnel donations and voluntary work to contenders perceived as well supported and thus electorally viable, thereby enabling them to run more powerful, and thus more influential, campaigns.
In economics
American economist Gary Becker has argued that the bandwagon effect is powerful enough to flip the demand curve to be upward sloping. A typical demand curve is downward sloping—as prices rise, demand falls. However, according to Becker, an upward-sloping demand curve implies that demand rises even as prices rise.
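A stylized linear sketch (an illustration of the slope-flipping logic, not Becker's actual specification) makes this concrete: suppose demand depends on price and on aggregate demand itself, D = a − b·p + c·D, where c measures the bandwagon term. Solving for the self-consistent level gives D = (a − b·p)/(1 − c), whose slope −b/(1 − c) is negative for c < 1 but positive once c > 1.

```python
def equilibrium_demand(price, a=10.0, b=2.0, c=1.5):
    """Solve D = a - b*price + c*D for the self-consistent demand level.

    The slope of the resulting demand curve is -b / (1 - c): downward for a
    weak bandwagon term (c < 1), upward once the term is strong (c > 1).
    Toy numbers only; this is not Becker's actual model.
    """
    assert c != 1.0, "c = 1 has no finite equilibrium"
    return (a - b * price) / (1.0 - c)

for p in (10, 20, 30):
    print(p, equilibrium_demand(p))  # demand 20.0, 60.0, 100.0: rising with price
```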
Financial markets
The bandwagon effect comes about in two ways in financial markets.
First, through price bubbles: these bubbles often happen in financial markets in which the price for a particularly popular security keeps rising. This occurs when many investors line up to buy a security, bidding up the price, which in turn attracts more investors. The price can rise beyond a certain point, causing the security to become highly overvalued.
Second is liquidity holes: when unexpected news or events occur, market participants will typically stop trading activity until the situation becomes clear. This reduces the number of buyers and sellers in the market, causing liquidity to decrease significantly. The lack of liquidity leaves price discovery distorted and causes massive shifts in asset prices, which can lead to increased panic, which further increases uncertainty, and the cycle continues.
Microeconomics
In microeconomics, bandwagon effects may play out in interactions of demand and preference. The bandwagon effect arises when people's preference for a commodity increases as the number of people buying it increases. Consumers may choose their product based on others' preferences, believing that it is the superior product. This choice can result from directly observing the purchase choices of others, or from observing the scarcity of a product compared to its competition as a result of the choices previous consumers have made. This scenario can also be seen in restaurants, where the number of customers in a restaurant can persuade potential diners to eat there based on the perception that the food must be better than the competition's due to its popularity. This interaction potentially disturbs the normal results of the theory of supply and demand, which assumes that consumers make buying decisions exclusively based on price and their own personal preference.
In medicine
Decisions made by medical professionals can also be influenced by the bandwagon effect. Particularly, the widespread use and support of now-disproven medical procedures throughout history can be attributed to their popularity at the time. Layton F. Rikkers (2002), professor emeritus of surgery at the University of Wisconsin–Madison, calls these prevailing practices medical bandwagons, which he defines as "the overwhelming acceptance of unproved but popular [medical] ideas."
Medical bandwagons have led to inappropriate therapies for numerous patients, and have impeded the development of more appropriate treatment.
One paper from 1979 on the topic of bandwagons of medicine describes how a new medical concept or treatment can gain momentum and become mainstream, as a result of a large-scale bandwagon effect:
The news media finds out about a new treatment and publicizes it.
Various organizations, such as government agencies, research foundations, and private companies also promote the new treatment, typically because they have some vested interest in seeing it succeed.
The public picks up on the now-publicized treatment, and pressures medical practitioners to adopt it, especially when that treatment is perceived as being novel.
Doctors often want to accept the new treatment, because it offers a compelling solution to a difficult issue.
Since doctors have to consume large amounts of medical information in order to stay aware of the latest trends in their field, it is sometimes difficult for them to read new material in a sufficiently critical manner.
In sports
One who supports a particular sports team, despite having shown no interest in that team until it started gaining success, can be considered a "bandwagon fan".
In social networking
As an increasing number of people begin to use a specific social networking site or application, others become more likely to begin using it as well. The bandwagon effect has also been studied in online comment sections: one line of research examined the comparative impact of two bandwagon heuristic cues (quantitative versus qualitative) on changes in news readers' attitudes. A first study demonstrated that qualitative cues had a greater influence on news readers' judgments than quantitative cues; a second study confirmed these results and showed that people's attitudes are shaped by apparent public opinion, offering concrete evidence of the influence of digital bandwagon cues.
In fashion
The bandwagon effect can also affect the way the masses dress and can be responsible for clothing trends. People tend to want to dress in a manner that suits the current trend and will be influenced by those whom they see often – normally celebrities. Such publicised figures will normally act as the catalyst for the style of the current period. Once a small group of consumers attempts to emulate a particular celebrity's dress choice, more people tend to copy the style due to the pressure, or desire, to fit in and be liked by their peers.
See also
References
Bibliography
External links
Definition at Investopedia
Cognitive biases
Conformity
Crowd psychology
Cultural trends
Economics effects
Political metaphors
Propaganda techniques
Revolution terminology | Bandwagon effect | [
"Biology"
] | 2,548 | [
"Behavior",
"Conformity",
"Human behavior"
] |
171,336 | https://en.wikipedia.org/wiki/Optical%20mouse | An optical mouse is a computer mouse which uses a light source, typically a light-emitting diode (LED), and a light detector, such as an array of photodiodes, to detect movement relative to a surface. Variations of the optical mouse have largely replaced the older mechanical mouse design, which uses moving parts to sense motion.
The earliest optical mice detected movement on pre-printed mousepad surfaces. Modern optical mice work on most opaque diffusely reflective surfaces like paper, but most of them do not work properly on specularly reflective surfaces like polished stone or transparent surfaces like glass. Optical mice that use dark field illumination can function reliably even on such surfaces.
Mechanical mice
Though not commonly referred to as optical mice, nearly all mechanical mice tracked movement using LEDs and photodiodes to detect when beams of infrared light did and didn't pass through holes in a pair of incremental rotary encoder wheels (one for left/right, another for forward/back), driven by a rubberized ball. Thus, the primary distinction of “optical mice” is not their use of optics, but their complete lack of moving parts to track mouse movement, instead employing an entirely solid-state system.
Early optical mice
The first two optical mice, demonstrated by two independent inventors in December 1980, had different basic designs:
One of these, invented by Steve Kirsch of MIT and Mouse Systems Corporation, used an infrared LED and a four-quadrant infrared sensor to detect grid lines printed with infrared-absorbing ink on a special metallic surface. Predictive algorithms in the CPU of the mouse calculated the speed and direction over the grid. The other type, invented by Richard F. Lyon of Xerox, used a 16-pixel visible-light image sensor with integrated motion detection on the same n-type (5 µm) MOS integrated circuit chip, and tracked the motion of light dots in a dark field of a printed paper or similar mouse pad. The Kirsch and Lyon mouse types had very different behaviors, as the Kirsch mouse used an x-y coordinate system embedded in the pad, and would not work correctly when the pad was rotated, while the Lyon mouse used the x-y coordinate system of the mouse body, as mechanical mice do.
The optical mouse ultimately sold with the Xerox STAR office computer used an inverted sensor chip packaging approach patented by Lisa M. Williams and Robert S. Cherry of the Xerox Microelectronics Center.
The Mouse Systems (Kirsch) design was commercialised and sold in PC compatible form by the company itself alongside variants rebranded for OEM use with Sun Microsystems workstations and by Data General.
Modern optical mice
Optical sensor
Modern surface-independent optical mice work by using an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. As computing power grew cheaper, it became possible to embed more powerful special-purpose image-processing chips in the mouse itself. This advance enabled the mouse to detect relative motion on a wide variety of surfaces, translating the movement of the mouse into the movement of the cursor and eliminating the need for a special mouse-pad. A surface-independent coherent light optical mouse design was patented by Stephen B. Jackson at Xerox in 1988.
Xerox's inventions were never massively commercially exploited, however, and optical mice would remain elusive in the personal computer market until Microsoft released the IntelliMouse with IntelliEye and IntelliMouse Explorer in 1999. These mice used technology developed by Hewlett-Packard under their Agilent Technologies subsidiary (see below). These mice worked on almost any surface, and represented a welcome improvement over mechanical mice, which would pick up dirt, track capriciously, invite rough handling, and need to be taken apart and cleaned frequently. Other manufacturers soon followed Microsoft's lead, including Apple for their Pro Mouse, using components manufactured by Agilent (once they spun off from HP), and over the next several years mechanical mice became obsolete.
The technology underlying the modern optical computer mouse is known as digital image correlation, a technology pioneered by the defense industry for tracking military targets. A simple binary-image version of digital image correlation was used in the 1980 Lyon optical mouse. Optical mice use image sensors to image naturally occurring texture in materials such as wood, cloth, mouse pads and Formica. These surfaces, when lit at a grazing angle by a light emitting diode, cast distinct shadows that resemble a hilly terrain lit at sunset. Images of these surfaces are captured in continuous succession and compared with each other to determine how far the mouse has moved.
Principle of operation
To understand how optical flow is used in optical mice, imagine two photographs of the same object except slightly offset from each other. Place both photographs on a light table to make them transparent, and slide one across the other until their images line up. The amount that the edges of one photograph overhang the other represents the offset between the images, and in the case of an optical computer mouse the distance it has moved.
Optical mice capture one thousand or more successive images per second. Depending on how fast the mouse is moving, each image will be offset from the previous one by a fraction of a pixel or as many as several pixels. Optical mice mathematically process these images using cross correlation to calculate how much each successive image is offset from the previous one. The output of the optical sensor is usually delta coordinates, though some optical ICs allow raw image data to be read out as well. Mice usually embed some kind of image-acquisition system and a DSP for fast data processing.
An optical mouse might use an image sensor having an 18 × 18 pixel array of monochromatic pixels. Its sensor would normally share the same ASIC as that used for storing and processing the images. One refinement would be accelerating the correlation process by using information from previous motions, and another refinement would be preventing deadbands when moving slowly by adding interpolation or frame-skipping.
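A minimal NumPy sketch of this displacement estimate: two overlapping windows of a simulated surface texture stand in for successive sensor frames, and the peak of their cross correlation gives the motion. Frame size, seed, and shift are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random texture stands in for the illuminated surface; two 18x18 windows
# of it, offset by (dy, dx) pixels, stand in for successive sensor frames.
surface = rng.random((64, 64))
dy, dx = 3, -2
frame1 = surface[20:38, 20:38]
frame2 = surface[20 + dy:38 + dy, 20 + dx:38 + dx]

# Cross-correlate the mean-removed frames via the FFT.
f1 = np.fft.fft2(frame1 - frame1.mean())
f2 = np.fft.fft2(frame2 - frame2.mean())
xcorr = np.fft.ifft2(f1 * np.conj(f2)).real

# The peak location, unwrapped to signed offsets, is the estimated motion;
# for shifts small relative to the frame size, the peak is reliable.
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
shift = tuple(int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, xcorr.shape))
print("estimated (dy, dx):", shift)  # expected: (3, -2)
```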
The development of the modern optical mouse at Hewlett-Packard Co. was supported by a succession of related projects during the 1990s at HP Laboratories. In 1992 William Holland was awarded US Patent 5,089,712 and John Ertel, William Holland, Kent Vincent, Rueiming Jamp, and Richard Baldwin were awarded US Patent 5,149,980 for measuring linear paper advance in a printer by correlating images of paper fibers. Ross R. Allen, David Beard, Mark T. Smith, and Barclay J. Tullis were awarded US Patents 5,578,813 (1996) and 5,644,139 (1997) for 2-dimensional optical navigational (i.e., position measurement) principles based on detecting and correlating microscopic, inherent features of the surface over which the navigation sensor travelled, and using position measurements of each end of a linear (document) image sensor to reconstruct an image of the document. This is the freehand scanning concept used in the HP CapShare 920 handheld scanner. By describing an optical means that explicitly overcame the limitations of wheels, balls, and rollers used in contemporary computer mice, the optical mouse was anticipated. These patents formed the basis for US Patent 5,729,008 (1998) awarded to Travis N. Blalock, Richard A. Baumgartner, Thomas Hornak, Mark T. Smith, and Barclay J. Tullis, where surface feature image sensing, image processing, and image correlation was realized by an integrated circuit to produce a position measurement. Improved precision of 2D optical navigation, needed for application of optical navigation to precise 2D measurement of media (paper) advance in HP DesignJet large format printers, was further refined in US Patent 6,195,475 awarded in 2001 to Raymond G. Beausoleil, Jr., and Ross R. Allen.
Light source
LED mice
Optical mice often used light-emitting diodes (LEDs) for illumination when first popularized. The color of the optical mouse's LEDs can vary, but red is most common, as red diodes are inexpensive and silicon photodetectors are very sensitive to red light. IR LEDs are also widely used. Other colors are sometimes used, such as the blue LED of the V-Mouse VM-101 illustrated at right.
Laser mice
A laser mouse uses an infrared laser diode instead of an LED to illuminate the surface beneath its sensor. As early as 1998, Sun Microsystems provided a laser mouse with their Sun SPARCstation servers and workstations.
However, laser mice did not enter the mainstream consumer market until 2004, following the development by a team at Agilent Laboratories, Palo Alto, led by Doug Baney, of a laser-based mouse built around an 850 nm VCSEL that offered a 20X improvement in tracking performance. Tong Xie, Marshall T. Depue, and Douglas M. Baney were awarded US patents 7,116,427 and 7,321,359 for their work on low-power-consumption, broadly navigable VCSEL-based consumer mice. Paul Machin at Logitech, in partnership with Agilent Technologies, introduced the new technology as the MX 1000 laser mouse. This mouse uses a small infrared laser (VCSEL) instead of an LED, which significantly increased the resolution of the image taken by the mouse. The laser illumination enabled superior surface tracking compared to LED-illuminated optical mice.
In 2008, Avago Technologies introduced laser navigation sensors whose emitter was integrated into the IC using VCSEL technology.
In August 2009, Logitech introduced mice with two lasers to track better on glass and glossy surfaces; it dubbed the technology a "Darkfield" laser sensor.
Power
Manufacturers often engineer their optical mice—especially battery-powered wireless models—to save power when possible. To do this, the mouse dims or blinks the laser or LED when in standby mode (each mouse has a different standby time). A typical implementation (by Logitech) has four power states, where the sensor is pulsed at different rates per second:
11500: full on, for accurate response while moving, illumination appears bright.
1100: fallback active condition while not moving, illumination appears dull.
110: standby
12: sleep state
Movement can be detected in any of these states; some mice turn the sensor fully off in the sleep state, requiring a button click to wake.
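As a rough model of this behaviour, the sketch below picks a pulse rate from the time since the mouse last moved. The pulse rates are the four listed above; the idle-time thresholds are hypothetical placeholders, since each mouse has its own standby timings.

```python
# (minimum seconds idle, sensor pulses per second, state name);
# the thresholds are made up for illustration.
POWER_STATES = [
    (0.0,   11500, "full on"),
    (0.1,    1100, "fallback"),
    (5.0,     110, "standby"),
    (300.0,    12, "sleep"),
]

def pulse_rate(seconds_idle: float) -> tuple[int, str]:
    """Return (pulses per second, state name) for a given idle time."""
    rate, name = POWER_STATES[0][1], POWER_STATES[0][2]
    for threshold, r, n in POWER_STATES:
        if seconds_idle >= threshold:
            rate, name = r, n
    return rate, name

print(pulse_rate(0.0))    # (11500, 'full on')
print(pulse_rate(600.0))  # (12, 'sleep')
```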
Optical mice utilizing infrared elements (LEDs or lasers) offer substantial increases in battery life over visible spectrum illumination. Some mice, such as the Logitech V450 848 nm laser mouse, are capable of functioning on two AA batteries for a full year, due to the low power requirements of the infrared laser.
Mice designed for use where low latency and high responsiveness are important, such as in playing video games, may omit power-saving features and require a wired connection to improve performance. Examples of mice which sacrifice power-saving in favor of performance are the Logitech G5 and the Razer Copperhead.
Optical versus mechanical mice
Unlike mechanical mice, whose tracking mechanisms can become clogged with lint, optical mice have no moving parts (besides buttons and scroll wheels); therefore, they do not require maintenance other than removing debris that might collect under the light emitter. However, they generally cannot track on glossy and transparent surfaces, including some mouse-pads, causing the cursor to drift unpredictably during operation. Mice with less image-processing power also have problems tracking fast movement, whereas some high-quality mice can track faster than 2 m/s.
Some models of laser mouse can track on glossy and transparent surfaces, and have a much higher sensitivity.
Mechanical mice had lower average power requirements than their optical counterparts; however, the power used by mice is relatively small, and is only an important consideration when the power is derived from batteries, with their limited capacity.
Optical models outperform mechanical mice on uneven, slick, soft, sticky, or loose surfaces, and generally in mobile situations lacking mouse pads. Because optical mice render movement based on an image which the LED (or infrared diode) illuminates, use with multicolored mouse pads may result in unreliable performance; however, laser mice do not suffer these problems and will track on such surfaces.
See also
Trackball
References
Computer mice
History of human–computer interaction
Video game control methods
American inventions | Optical mouse | [
"Technology"
] | 2,538 | [
"History of human–computer interaction",
"History of computing"
] |
171,396 | https://en.wikipedia.org/wiki/Zone%20melting | Zone melting (or zone refining, or floating-zone method, or floating-zone technique) is a group of similar methods of purifying crystals, in which a narrow region of a crystal is melted, and this molten zone is moved along the crystal. The molten region melts impure solid at its forward edge and leaves a wake of purer material solidified behind it as it moves through the ingot. The impurities concentrate in the melt, and are moved to one end of the ingot. Zone refining was invented by John Desmond Bernal and further developed by William G. Pfann in Bell Labs as a method to prepare high-purity materials, mainly semiconductors, for manufacturing transistors. Its first commercial use was in germanium, refined to one atom of impurity per ten billion, but the process can be extended to virtually any solute–solvent system having an appreciable concentration difference between solid and liquid phases at equilibrium. This process is also known as the float zone process, particularly in semiconductor materials processing.
Process details
The principle is that the segregation coefficient k (the ratio at equilibrium of an impurity in the solid phase to that in the liquid phase) is usually less than one. Therefore, at the solid/liquid boundary, the impurity atoms will diffuse to the liquid region. Thus, by passing a crystal boule through a thin section of furnace very slowly, such that only a small region of the boule is molten at any time, the impurities will be segregated at the end of the crystal. Because of the lack of impurities in the leftover regions which solidify, the boule can grow as a perfect single crystal if a seed crystal is placed at the base to initiate a chosen direction of crystal growth. When high purity is required, such as in the semiconductor industry, the impure end of the boule is cut off, and the refining is repeated.
In zone refining, solutes are segregated at one end of the ingot in order to purify the remainder, or to concentrate the impurities. In zone leveling, the objective is to distribute solute evenly throughout the purified material, which may be sought in the form of a single crystal. For example, in the preparation of a transistor or diode semiconductor, an ingot of germanium is first purified by zone refining. Then a small amount of antimony is placed in the molten zone, which is passed through the pure germanium. With the proper choice of rate of heating and other variables, the antimony can be spread evenly through the germanium. This technique is also used for the preparation of silicon for use in integrated circuits ("chips").
Heaters
A variety of heaters can be used for zone melting, with their most important characteristic being the ability to form short molten zones that move slowly and uniformly through the ingot. Induction coils, ring-wound resistance heaters, or gas flames are common methods. Another method is to pass an electric current directly through the ingot while it is in a magnetic field, with the resulting magnetomotive force carefully set to be just equal to the weight of the molten zone in order to hold the liquid suspended. Optical heaters using high-powered halogen or xenon lamps are used extensively in research facilities particularly for the production of insulators, but their use in industry is limited by the relatively low power of the lamps, which limits the size of crystals produced by this method. Zone melting can be done as a batch process, or it can be done continuously, with fresh impure material being continually added at one end and purer material being removed from the other, with impure zone melt being removed at whatever rate is dictated by the impurity of the feed stock.
Indirect-heating floating zone methods use an induction-heated tungsten ring to heat the ingot radiatively, and are useful when the ingot is of a high-resistivity semiconductor on which classical induction heating is ineffective.
Mathematical expression of impurity concentration
When the liquid zone moves by a distance $dx$, the number of impurities in the liquid changes: impure solid melts into the zone at its forward edge, while solid freezes out of it at the trailing edge.

$k$: segregation coefficient
$l$: zone length
$C_0$: initial uniform impurity concentration of the solidified rod
$C_L = I_L/l$: concentration of impurities in the liquid melt per unit length
$I_L$: number of impurities in the liquid
$I_L(0) = l\,C_0$: number of impurities in the zone when first formed at the bottom
$C_s$: concentration of impurities in the solid rod

The number of impurities in the liquid changes in accordance with the expression below during the movement of the molten zone:

$$dI_L = C_0\,dx - k\,\frac{I_L}{l}\,dx$$

The first term counts impurities taken up from the melting solid, and the second counts impurities left behind in the freezing solid, for which $C_s = k\,C_L$ at the interface. Integrating with the initial condition $I_L(0) = l\,C_0$ gives the single-pass impurity profile of the refrozen solid:

$$C_s(x) = C_0\left[1 - (1 - k)\,e^{-kx/l}\right]$$
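The resulting profile is easy to evaluate numerically; a minimal Python sketch with arbitrary parameter values:

```python
import numpy as np

def zone_refined_concentration(x, c0=1.0, k=0.1, l=1.0):
    """Single-pass zone-refining profile C_s(x) = C0*(1 - (1 - k)*exp(-k*x/l)).

    Valid for the body of the ingot; in the final zone length, normal
    freezing takes over and the rejected impurities pile up.
    """
    return c0 * (1.0 - (1.0 - k) * np.exp(-k * np.asarray(x, dtype=float) / l))

x = np.linspace(0.0, 10.0, 6)         # distance along the ingot, in zone lengths
print(zone_refined_concentration(x))  # starts near k*C0, rises back toward C0
```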
Applications
Solar cells
In solar cells, float zone processing is particularly useful because the single-crystal silicon grown has desirable properties. The bulk charge carrier lifetime in float-zone silicon is the highest among various manufacturing processes. Float-zone carrier lifetimes are around 1000 microseconds compared to 20–200 microseconds with Czochralski method, and 1–30 microseconds with cast polycrystalline silicon. A longer bulk lifetime increases the efficiency of solar cells significantly.
High-resistivity devices
Float-zone silicon is used in the production of high-power semiconductor devices.
Related processes
Zone remelting
Another related process is zone remelting, in which two solutes are distributed through a pure metal. This is important in the manufacture of semiconductors, where two solutes of opposite conductivity type are used. For example, in germanium, pentavalent elements of group V such as antimony and arsenic produce negative (n-type) conduction and the trivalent elements of group III such as aluminium and boron produce positive (p-type) conduction. By melting a portion of such an ingot and slowly refreezing it, solutes in the molten region become distributed to form the desired n-p and p-n junctions.
See also
Fractional freezing a.k.a. freeze distillation
Monocrystalline silicon
Wafer (electronics)
Further reading
References
Crystals
Industrial processes
Liquid-solid separation
Methods of crystal growth
Semiconductor growth | Zone melting | [
"Chemistry",
"Materials_science"
] | 1,237 | [
"Separation processes by phases",
"Methods of crystal growth",
"Crystallography",
"Crystals",
"Liquid-solid separation"
] |
171,414 | https://en.wikipedia.org/wiki/Engineering%20drawing | An engineering drawing is a type of technical drawing that is used to convey information about an object. A common use is to specify the geometry necessary for the construction of a component and is called a detail drawing. Usually, a number of drawings are necessary to completely specify even a simple component. These drawings are linked together by a "master drawing." This "master drawing" is more commonly known as an assembly drawing. The assembly drawing gives the drawing numbers of the subsequent detailed components, quantities required, construction materials and possibly 3D images that can be used to locate individual items. Although mostly consisting of pictographic representations, abbreviations and symbols are used for brevity and additional textual explanations may also be provided to convey the necessary information.
The process of producing engineering drawings is often referred to as technical drawing or drafting (draughting). Drawings typically contain multiple views of a component, although additional scratch views may be added of details for further explanation. Only the information that is a requirement is typically specified. Key information such as dimensions is usually only specified in one place on a drawing, avoiding redundancy and the possibility of inconsistency. Suitable tolerances are given for critical dimensions to allow the component to be manufactured and function. More detailed production drawings may be produced based on the information given in an engineering drawing. Drawings have an information box or title block containing who drew the drawing, who approved it, units of dimensions, meaning of views, the title of the drawing and the drawing number.
History
As a necessary means for visually conveying ideas, technical drawing has in one form or another been a part of human history since antiquity. Early drawings were used to express architectural and engineering concepts for large cultural structures: temples, monuments, and public infrastructure. The Egyptians and Mesopotamians used basic forms of technical drawing to plan highly detailed irrigation systems, pyramids, and other sophisticated structures. Their methods were comparatively simple, yet demanded a great deal of skill and accuracy, and even in this primitive form they provided construction drawings for structures that would stand the test of time.
Technical drawing evolved further in ancient Greece and Rome. Engineers and architects such as Vitruvius used drawings as a medium for transmitting construction techniques and for illustrating the basic principles of balance and proportion in architecture. Early examples of what would lead to more formal technical drawing practices included the drawings and geometric calculations used to construct aqueducts, bridges, and fortresses. Technical drawings also figured in the 12th-century design of cathedrals and castles, albeit such drawings were more typically produced by artisans and stonemasons, not formally trained engineers.
The Renaissance was a period of great advances for technical drawing. Inventive artists and inventors, most famously Leonardo da Vinci, began to use sophisticated methods of visual representation in their work, along with a methodical adherence to accuracy. Leonardo's notebooks contained drawings of mechanical devices, anatomical studies, and engineering projects that demonstrated his advanced understanding of form, function, and proportion. He was perhaps the first to combine artistry with engineering ability to produce technical drawings at once imaginative and instructive, laying an important foundation for future developments in technical drawing.
As the Industrial Revolution took hold, modern engineering drawing took shape with the emergence of strictly specified conventions such as orthographic projection, exploded views, and standard scales. The movement towards standardization was partly driven by the development of engineering education and uniform drawing techniques in France. During the same period, the French mathematician Gaspard Monge developed descriptive geometry, a means of representing three-dimensional objects in two-dimensional space, and thereby contributed to technical drawing in a major way. His work laid the ground for orthographic projection, which remains one of the core techniques of technical drawing today. Monge's methods were initially kept as a military secret, then disseminated far and wide, and they shaped the future of engineering education as well as engineering practice.
Further contributions to the craft of technical drawing were made by pioneers like Marc Isambard Brunel, whose detailed 1799 drawings of block-making machinery, described in L. T. C. Rolt's biography of his son Isambard Kingdom Brunel, testify to the developing nature of British engineering methods. By applying what we now call mechanical drawing techniques to depict three-dimensional machinery on a two-dimensional plane, more efficient manufacturing processes as well as greater precision were enabled. These innovations were essential as the world began to move toward mechanized production, and complex engineering projects, such as bridges, railways, and ships, required highly detailed and accurate technical representations to succeed.
The increasing need for precision in technical drawings during the 19th century was a direct result of the Industrial Revolution. This era saw the development of large-scale engineering projects such as railways, steam engines, and iron structures, which required a heightened degree of accuracy and standardization. Engineers created new conventions and symbols whose use became standardized throughout industries, so that anyone who could read a technical drawing could know the specifications of a component or structure. This standardization made engineering practice more uniform and made it easier for engineers, manufacturers, and builders to work together.
In the 20th century, technical drawing underwent yet another transformation with the introduction of drafting tools such as the T-square, compasses, and protractors. These tools helped drafters achieve the high degree of precision necessary for increasingly complex projects, such as skyscrapers, airplanes, and automobiles. The establishment of standards such as the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) further formalized technical drawing conventions, ensuring consistency in engineering practices around the world.
Today, technical drawing has largely transitioned from manual drafting to computer-aided design (CAD). CAD software has revolutionized the way technical drawings are created, allowing for faster, more precise, and easily modifiable drawings. Engineers can now visualize designs in three dimensions, simulate performance, and make adjustments before any physical prototype is built. This digital transformation has not only increased efficiency but also broadened the possibilities for innovation, enabling engineers to tackle challenges that were previously unimaginable.
However, despite the advent of digital tools, the fundamental principles of technical drawing remain rooted in its history. Precision, clarity, and the ability to convey complex information visually are still at the core of technical drawing. The conventions established over centuries—from orthographic projection to the use of scale and dimension lines—continue to be essential in modern engineering and architectural practice. The evolution of technical drawing is a testament to human ingenuity, demonstrating how the ability to convey complex ideas visually has been pivotal in the advancement of civilization.
Standardization and disambiguation
Engineering drawings specify the requirements of a component or assembly which can be complicated. Standards provide rules for their specification and interpretation. Standardization also aids internationalization, because people from different countries who speak different languages can read the same engineering drawing, and interpret it the same way.
One major set of engineering drawing standards is ASME Y14.5 and Y14.5M (most recently revised in 2018). These apply widely in the United States, although ISO 8015 (Geometrical product specifications (GPS) — Fundamentals — Concepts, principles and rules) is now also important. In 2018, ASME AED-1 was created to develop advanced practices unique to aerospace and other industries and to supplement the Y14.5 standard.
In 2011, a new revision of ISO 8015 (Geometrical product specifications (GPS) — Fundamentals — Concepts, principles and rules) was published containing the Invocation Principle. This states that, "Once a portion of the ISO geometric product specification (GPS) system is invoked in a mechanical engineering product documentation, the entire ISO GPS system is invoked." It also goes on to state that marking a drawing "Tolerancing ISO 8015" is optional. The implication of this is that any drawing using ISO symbols can only be interpreted to ISO GPS rules. The only way not to invoke the ISO GPS system is to invoke a national or other standard. In Britain, BS 8888 (Technical Product Specification) has undergone important updates in the 2010s.
Media
For centuries, until the 1970s, all engineering drawing was done manually by using pencil and pen on paper or other substrate (e.g., vellum, mylar). Since the advent of computer-aided design (CAD), engineering drawing has been done more and more in the electronic medium with each passing decade. Today most engineering drawing is done with CAD, but pencil and paper have not entirely disappeared.
Some of the tools of manual drafting include pencils, pens and their ink, straightedges, T-squares, French curves, triangles, rulers, protractors, dividers, compasses, scales, erasers, and tacks or push pins. (Slide rules used to number among the supplies, too, but nowadays even manual drafting, when it occurs, benefits from a pocket calculator or its onscreen equivalent.) And of course the tools also include drawing boards (drafting boards) or tables. The English idiom "to go back to the drawing board", which is a figurative phrase meaning to rethink something altogether, was inspired by the literal act of discovering design errors during production and returning to a drawing board to revise the engineering drawing. Drafting machines are devices that aid manual drafting by combining drawing boards, straightedges, pantographs, and other tools into one integrated drawing environment. CAD provides their virtual equivalents.
Producing drawings usually involves creating an original that is then reproduced, generating multiple copies to be distributed to the shop floor, vendors, company archives, and so on. The classic reproduction methods involved blue and white appearances (whether white-on-blue or blue-on-white), which is why engineering drawings were long called, and even today are still often called, "blueprints" or "bluelines", even though those terms are anachronistic from a literal perspective, since most copies of engineering drawings today are made by more modern methods (often inkjet or laser printing) that yield black or multicolour lines on white paper. The more generic term "print" is now in common usage in the US to mean any paper copy of an engineering drawing. In the case of CAD drawings, the original is the CAD file, and the printouts of that file are the "prints".
Systems of dimensioning and tolerancing
Almost all engineering drawings (except perhaps reference-only views or initial sketches) communicate not only geometry (shape and location) but also dimensions and tolerances for those characteristics. Several systems of dimensioning and tolerancing have evolved. The simplest dimensioning system just specifies distances between points (such as an object's length or width, or hole center locations). Since the advent of well-developed interchangeable manufacture, these distances have been accompanied by tolerances of the plus-or-minus or min-and-max-limit types. Coordinate dimensioning involves defining all points, lines, planes, and profiles in terms of Cartesian coordinates, with a common origin. Coordinate dimensioning was the sole best option until the post-World War II era saw the development of geometric dimensioning and tolerancing (GD&T), which departs from the limitations of coordinate dimensioning (e.g., rectangular-only tolerance zones, tolerance stacking) to allow the most logical tolerancing of both geometry and dimensions (that is, both form [shapes/locations] and sizes).
Common features
Drawings convey the following critical information:
Geometry – the shape of the object; represented as views; how the object will look when it is viewed from various angles, such as front, top, side, etc.
Dimensions – the size of the object is captured in accepted units.
Tolerances – the allowable variations for each dimension.
Material – represents what the item is made of.
Finish – specifies the surface quality of the item, functional or cosmetic. For example, a mass-marketed product usually requires a much higher surface quality than, say, a component that goes inside industrial machinery.
Line styles and types
A variety of line styles graphically represent physical objects. Types of lines include the following:
visible – are continuous lines used to depict edges directly visible from a particular angle.
hidden – are short-dashed lines that may be used to represent edges that are not directly visible.
center – are alternately long- and short-dashed lines that may be used to represent the axes of circular features.
cutting plane – are thin, medium-dashed lines, or thick alternately long- and double short-dashed lines, that may be used to define sections for section views.
section – are thin lines in a pattern (pattern determined by the material being "cut" or "sectioned") used to indicate surfaces in section views resulting from "cutting". Section lines are commonly referred to as "cross-hatching".
phantom – (not shown) are alternately long- and double short-dashed thin lines used to represent a feature or component that is not part of the specified part or assembly. E.g. billet ends that may be used for testing, or the machined product that is the focus of a tooling drawing.
Lines can also be classified by a letter classification in which each line is given a letter.
Type A lines show the outline of the feature of an object. They are the thickest lines on a drawing and done with a pencil softer than HB.
Type B lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. A harder pencil should be used, such as a 2H pencil.
Type C lines are used for breaks when the whole object is not shown. These are freehand drawn and only for short breaks. 2H pencil
Type D lines are similar to Type C, except these are zigzagged and only for longer breaks. 2H pencil
Type E lines indicate hidden outlines of internal features of an object. These are dotted lines. 2H pencil
Type F lines are Type E lines, except these are used for drawings in electrotechnology. 2H pencil
Type G lines are used for centre lines. These are dotted lines, but a long line of 10–20 mm, then a 1 mm gap, then a small line of 2 mm. 2H pencil
Type H lines are the same as type G, except that every second long line is thicker. These indicate the cutting plane of an object. 2H pencil
Type K lines indicate the alternate positions of an object and the line taken by that object. These are drawn with a long line of 10–20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2H pencil.
Multiple views and projections
In most cases, a single view is not sufficient to show all necessary features, and several views are used. Types of views include the following:
Multiview projection
A multiview projection is a type of orthographic projection that shows the object as it looks from the front, right, left, top, bottom, or back (e.g. the primary views), and is typically positioned relative to each other according to the rules of either first-angle or third-angle projection. The origin and vector direction of the projectors (also called projection lines) differs, as explained below.
In first-angle projection, the parallel projectors originate as if radiated from behind the viewer and pass through the 3D object to project a 2D image onto the orthogonal plane behind it. The 3D object is projected into 2D "paper" space as if you were looking at a radiograph of the object: the top view is under the front view, the right view is at the left of the front view. First-angle projection is the ISO standard and is primarily used in Europe.
In third-angle projection, the parallel projectors originate as if radiated from the far side of the object and pass through the 3D object to project a 2D image onto the orthogonal plane in front of it. The views of the 3D object are like the panels of a box that envelopes the object, and the panels pivot as they open up flat into the plane of the drawing. Thus the left view is placed on the left and the top view on the top; and the features closest to the front of the 3D object will appear closest to the front view in the drawing. Third-angle projection is primarily used in the United States and Canada, where it is the default projection system according to ASME standard ASME Y14.3M.
Until the late 19th century, first-angle projection was the norm in North America as well as Europe; but circa the 1890s, third-angle projection spread throughout the North American engineering and manufacturing communities to the point of becoming a widely followed convention, and it was an ASA standard by the 1950s. Circa World War I, British practice was frequently mixing the use of both projection methods.
As shown above, the determination of what surface constitutes the front, back, top, and bottom varies depending on the projection method used.
Not all views are necessarily used. Generally only as many views are used as are necessary to convey all needed information clearly and economically. The front, top, and right-side views are commonly considered the core group of views included by default, but any combination of views may be used depending on the needs of the particular design. In addition to the six principal views (front, back, top, bottom, right side, left side), any auxiliary views or sections may be included as serve the purposes of part definition and its communication. View lines or section lines (lines with arrows marked "A-A", "B-B", etc.) define the direction and location of viewing or sectioning. Sometimes a note tells the reader in which zone(s) of the drawing to find the view or section.
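The projection arithmetic behind these views is plain coordinate dropping; only the placement of the views on the sheet differs between the two conventions. A minimal sketch, assuming x to the right, y up, and z out of the sheet toward the front viewer:

```python
import numpy as np

def principal_views(points):
    """Three of the six principal orthographic views of a 3D point set,
    returned as 2D paper coordinates (third-angle sign conventions)."""
    pts = np.asarray(points, dtype=float)
    return {
        "front": pts[:, [0, 1]],                      # drop z
        "top":   pts[:, [0, 2]] * np.array([1, -1]),  # drop y; far side drawn higher
        "right": pts[:, [2, 1]] * np.array([-1, 1]),  # drop x; front of part faces left
    }

corner = np.array([[1.0, 2.0, 3.0]])
for name, xy in principal_views(corner).items():
    print(name, xy[0])
# In third-angle projection the top view is then placed above the front view
# and the right view to its right; first-angle places each on the opposite side.
```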
Auxiliary views
An auxiliary view is an orthographic view that is projected into any plane other than one of the six primary views. These views are typically used when an object contains some sort of inclined plane. Using the auxiliary view allows for that inclined plane (and any other significant features) to be projected in their true size and shape. The true size and shape of any feature in an engineering drawing can only be known when the Line of Sight (LOS) is perpendicular to the plane being referenced.
Such a view shows the part as if it were a three-dimensional object. Auxiliary views of this kind tend to make use of axonometric projection; when existing all by themselves, auxiliary views are sometimes known as pictorials.
Isometric projection
An isometric projection shows the object from angles in which the scales along each axis of the object are equal. Isometric projection corresponds to rotation of the object by ± 45° about the vertical axis, followed by rotation of approximately ± 35.264° [= arcsin(tan(30°))] about the horizontal axis starting from an orthographic projection view. "Isometric" comes from the Greek for "same measure". One of the things that makes isometric drawings so attractive is the ease with which 60° angles can be constructed with only a compass and straightedge.
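These angles can be checked numerically. The sketch below projects the three unit axes (the sign of each rotation is one valid choice among the ± options) and confirms they come out equal in length and 120° apart:

```python
import numpy as np

def isometric_projection(points):
    """Rotate -45 deg about the vertical (y) axis, then arcsin(tan 30 deg)
    (about 35.264 deg) about the horizontal (x) axis, then drop depth (z)."""
    a = np.radians(-45.0)
    b = np.arcsin(np.tan(np.radians(30.0)))
    rot_y = np.array([[ np.cos(a), 0.0, np.sin(a)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
    rot_x = np.array([[1.0, 0.0,        0.0       ],
                      [0.0, np.cos(b), -np.sin(b)],
                      [0.0, np.sin(b),  np.cos(b)]])
    rotated = points @ (rot_x @ rot_y).T   # rotate, treating rows as points
    return rotated[:, :2]                  # orthographic: discard depth

proj = isometric_projection(np.eye(3))                 # the three unit axes
print(np.linalg.norm(proj, axis=1))                    # ~[0.8165 0.8165 0.8165]
print(np.degrees(np.arctan2(proj[:, 1], proj[:, 0])))  # ~[-30. 90. -150.]
```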
Isometric projection is a type of axonometric projection. The other two types of axonometric projection are:
Dimetric projection
Trimetric projection
Oblique projection
An oblique projection is a simple type of graphical projection used for producing pictorial, two-dimensional images of three-dimensional objects:
it projects an image by intersecting parallel rays (projectors) from the three-dimensional source object with the drawing surface (projection plane).
In both oblique projection and orthographic projection, parallel lines of the source object produce parallel lines in the projected image.
Perspective projection
Perspective is an approximate representation on a flat surface, of an image as it is perceived by the eye. The two most characteristic features of perspective are that objects are drawn:
Smaller as their distance from the observer increases
Foreshortened: the size of an object's dimensions along the line of sight are relatively shorter than dimensions across the line of sight.
Section views
Projected views (either auxiliary or multiview) which show a cross section of the source object along the specified cut plane. These views are commonly used to show internal features with more clarity than regular projections or hidden lines can provide, and they also reduce the number of hidden lines needed. In assembly drawings, hardware components (e.g. nuts, screws, washers) are typically not sectioned. A half section shows one half of the view in section and the other half from the outside.
Scale
Plans are usually "scale drawings", meaning that the plans are drawn at a specific ratio relative to the actual size of the place or object. Various scales may be used for different drawings in a set. For example, a floor plan may be drawn at 1:50 (1:48 or 1/4″ = 1′ 0″) whereas a detailed view may be drawn at 1:25 (1:24 or 1/2″ = 1′ 0″). Site plans are often drawn at 1:200 or 1:100.
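The conversion itself is one multiplication, as a hypothetical helper shows (drawn size = real size × scale ratio):

```python
def drawn_size_mm(real_size_mm: float, scale: str) -> float:
    """Size on paper for a real dimension at a ratio given as e.g. '1:50'."""
    num, den = (float(s) for s in scale.split(":"))
    return real_size_mm * num / den

print(drawn_size_mm(6000.0, "1:50"))   # a 6 m wall plots as 120 mm
print(drawn_size_mm(304.8, "1:24"))    # 1 foot plots as 12.7 mm, i.e. 1/2 inch
```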
Scale is a nuanced subject in the use of engineering drawings. On one hand, it is a general principle of engineering drawings that they are projected using standardized, mathematically certain projection methods and rules. Thus, great effort is put into having an engineering drawing accurately depict size, shape, form, aspect ratios between features, and so on. And yet, on the other hand, there is another general principle of engineering drawing that nearly diametrically opposes all this effort and intent—that is, the principle that users are not to scale the drawing to infer a dimension not labeled. This stern admonition is often repeated on drawings, via a boilerplate note in the title block telling the user, "DO NOT SCALE DRAWING."
The explanation for why these two nearly opposite principles can coexist is as follows. The first principle—that drawings will be made so carefully and accurately—serves the prime goal of why engineering drawing even exists, which is successfully communicating part definition and acceptance criteria—including "what the part should look like if you've made it correctly." The service of this goal is what creates a drawing that one even could scale and get an accurate dimension thereby. And thus the great temptation to do so, when a dimension is wanted but was not labeled. The second principle—that even though scaling the drawing will usually work, one should nevertheless never do it—serves several goals, such as enforcing total clarity regarding who has authority to discern design intent, and preventing erroneous scaling of a drawing that was never drawn to scale to begin with (which is typically labeled "drawing not to scale" or "scale: NTS"). When a user is forbidden from scaling the drawing, they must turn instead to the engineer (for the answers that the scaling would seek), and they will never erroneously scale something that is inherently unable to be accurately scaled.
But in some ways, the advent of the CAD and MBD era challenges these assumptions that were formed many decades ago. When part definition is defined mathematically via a solid model, the assertion that one cannot interrogate the model—the direct analog of "scaling the drawing"—becomes ridiculous; because when part definition is defined this way, it is not possible for a drawing or model to be "not to scale". A 2D pencil drawing can be inaccurately foreshortened and skewed (and thus not to scale), yet still be a completely valid part definition as long as the labeled dimensions are the only dimensions used, and no scaling of the drawing by the user occurs. This is because what the drawing and labels convey is in reality a symbol of what is wanted, rather than a true replica of it. (For example, a sketch of a hole that is clearly not round still accurately defines the part as having a true round hole, as long as the label says "10mm DIA", because the "DIA" implicitly but objectively tells the user that the skewed drawn circle is a symbol representing a perfect circle.) But if a mathematical model—essentially, a vector graphic—is declared to be the official definition of the part, then any amount of "scaling the drawing" can make sense; there may still be an error in the model, in the sense that what was intended is not depicted (modeled); but there can be no error of the "not to scale" type—because the mathematical vectors and curves are replicas, not symbols, of the part features.
Even in dealing with 2D drawings, the manufacturing world has changed since the days when people paid attention to the scale ratio claimed on the print, or counted on its accuracy. In the past, prints were plotted on a plotter to exact scale ratios, and the user could know that a line on the drawing 15 mm long corresponded to a 30 mm part dimension because the drawing said "1:2" in the "scale" box of the title block. Today, in the era of ubiquitous desktop printing, where original drawings or scaled prints are often scanned on a scanner and saved as a PDF file, which is then printed at any percent magnification that the user deems handy (such as "fit to paper size"), users have pretty much given up caring what scale ratio is claimed in the "scale" box of the title block, which, under the rule of "do not scale drawing", never really did much for them anyway.
Showing dimensions
The required sizes of features are conveyed through use of dimensions. Distances may be indicated with either of two standardized forms of dimension: linear and ordinate.
With linear dimensions, two parallel lines, called "extension lines," spaced at the distance between two features, are shown at each of the features. A line perpendicular to the extension lines, called a "dimension line," with arrows at its endpoints, is shown between, and terminating at, the extension lines. The distance is indicated numerically at the midpoint of the dimension line, either adjacent to it, or in a gap provided for it.
With ordinate dimensions, one horizontal and one vertical extension line establish an origin for the entire view. The origin is identified with zeroes placed at the ends of these extension lines. Distances along the x- and y-axes to other features are specified using other extension lines, with the distances indicated numerically at their ends.
Sizes of circular features are indicated using either diametral or radial dimensions. Radial dimensions use an "R" followed by the value for the radius; diametral dimensions use a circle with a forward-leaning diagonal line through it, called the diameter symbol, followed by the value for the diameter. A radially aligned line with an arrowhead pointing to the circular feature, called a leader, is used in conjunction with both diametral and radial dimensions.
All types of dimensions are typically composed of two parts: the nominal value, which is the "ideal" size of the feature, and the tolerance, which specifies the amount that the value may vary above and below the nominal.
Geometric dimensioning and tolerancing is a method of specifying the functional geometry of an object.
Sizes of drawings
Sizes of drawings typically comply with either of two different standards, ISO (World Standard) or ANSI/ASME Y14.1 (American).
The metric drawing sizes correspond to international paper sizes. These were further refined in the second half of the twentieth century, when photocopying became cheap. Engineering drawings could be readily doubled (or halved) in size and put on the next larger (or, respectively, smaller) size of paper with no waste of space. And the metric technical pens were chosen in sizes so that one could add detail or drafting changes with a pen width changing by approximately a factor of the square root of 2. A full set of pens would have the following nib sizes: 0.13, 0.18, 0.25, 0.35, 0.5, 0.7, 1.0, 1.5, and 2.0 mm. However, the International Organization for Standardization (ISO) called for four pen widths and set a colour code for each: 0.25 (white), 0.35 (yellow), 0.5 (brown), 0.7 (blue); these nibs produced lines that related to various text character heights and the ISO paper sizes.
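To see the stated progression, each nib size is, to the nearest available size, the previous one multiplied by the square root of 2:

```latex
0.13\sqrt{2} \approx 0.18,\quad
0.18\sqrt{2} \approx 0.25,\quad
0.25\sqrt{2} \approx 0.35,\quad
0.35\sqrt{2} \approx 0.50,\quad
0.50\sqrt{2} \approx 0.71,\quad
0.70\sqrt{2} \approx 0.99,\quad
1.0\sqrt{2} \approx 1.41,\quad
1.5\sqrt{2} \approx 2.12.
```

This pairs with the ISO paper sizes described below: moving a drawing to the adjacent sheet size scales all lengths by the square root of 2, so line weights stay in proportion by stepping one nib up or down.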
All ISO paper sizes have the same aspect ratio, one to the square root of 2, meaning that a document designed for any given size can be enlarged or reduced to any other size and will fit perfectly. Given this ease of changing sizes, it is of course common to copy or print a given document on different sizes of paper, especially within a series, e.g. a drawing on A3 may be enlarged to A2 or reduced to A4.
The US customary "A-size" corresponds to "letter" size, and "B-size" corresponds to "ledger" or "tabloid" size. There were also once British paper sizes, which went by names rather than alphanumeric designations.
American Society of Mechanical Engineers (ASME) ANSI/ASME Y14.1, Y14.2, Y14.3, and Y14.5 are commonly referenced standards in the US.
Technical lettering
Technical lettering is the process of forming letters, numerals, and other characters in technical drawing. It is used to describe, or provide detailed specifications for, an object. With the goals of legibility and uniformity, styles are standardized and lettering ability has little relationship to normal writing ability. Engineering drawings use a Gothic sans-serif script, formed by a series of short strokes. Lower-case letters are rare in most drawings of machines. ISO lettering templates, designed for use with technical pens and pencils, and to suit ISO paper sizes, produce lettering characters to an international standard. The stroke thickness is related to the character height (for example, 2.5 mm high characters would have a stroke thickness, i.e. pen nib size, of 0.25 mm; 3.5 mm characters would use a 0.35 mm pen, and so forth). The ISO character set (font) has a seriffed one, a barred seven, an open four, six, and nine, and a round-topped three, which improves legibility when, for example, an A0 drawing has been reduced to A1 or even A3 (and perhaps enlarged back or reproduced/faxed/microfilmed etc.). When CAD drawings became more popular, especially using US software such as AutoCAD, the nearest font to this ISO standard font was Romantic Simplex (RomanS, a proprietary shx font) with a manually adjusted width factor (override) to make it look as near as possible to the ISO drawing-board lettering. However, with its closed four and arced six and nine, the romans.shx typeface could be difficult to read in reductions. In more recent revisions of software packages, the TrueType font ISOCPEUR reliably reproduces the original drawing-board lettering stencil style; however, many drawings have switched to the ubiquitous Arial.ttf.
Conventional parts (areas)
Title block
Every engineering drawing must have a title block.
The title block (T/B, TB) is an area of the drawing that conveys header-type information about the drawing, such as:
Drawing title (hence the name "title block")
Drawing number
Part number(s)
Name of the design activity (corporation, government agency, etc.)
Identifying code of the design activity (such as a CAGE code)
Address of the design activity (such as city, state/province, country)
Measurement units of the drawing (for example, inches, millimeters)
Default tolerances for dimension callouts where no tolerance is specified
Boilerplate callouts of general specs
Intellectual property rights warning
ISO 7200 specifies the data fields used in title blocks.
It standardizes eight mandatory data fields:
Title (hence the name "title block")
Created by (name of drafter)
Approved by
Legal owner (name of company or organization)
Document type
Drawing number (same for every sheet of this document, unique for each technical document of the organization)
Sheet number and number of sheets (for example, "Sheet 5/7")
Date of issue (when the drawing was made)
Traditional locations for the title block are the bottom right (most commonly) or the top right or center.
Revisions block
The revisions block (rev block) is a tabulated list of the revisions (versions) of the drawing, documenting the revision control.
Traditional locations for the revisions block are the top right (most commonly) or adjoining the title block in some way.
Next assembly
The next assembly block, often also referred to as "where used" or sometimes "effectivity block", is a list of higher assemblies where the product on the current drawing is used. This block is commonly found adjacent to the title block.
Notes list
The notes list provides notes to the user of the drawing, conveying any information that the callouts within the field of the drawing did not. It may include general notes, flagnotes, or a mixture of both.
Traditional locations for the notes list are anywhere along the edges of the field of the drawing.
General notes
General notes (G/N, GN) apply generally to the contents of the drawing, as opposed to applying only to certain part numbers or certain surfaces or features.
Flagnotes
Flagnotes or flag notes (FL, F/N) are notes that apply only where a flagged callout points, such as to particular surfaces, features, or part numbers. Typically the callout includes a flag icon. Some companies call such notes "delta notes", and the note number is enclosed inside a triangular symbol (similar to capital letter delta, Δ). "FL5" (flagnote 5) and "D5" (delta note 5) are typical ways to abbreviate in ASCII-only contexts.
Field of the drawing
The field of the drawing (F/D, FD) is the main body or main area of the drawing, excluding the title block, rev block, P/L, and so on.
List of materials, bill of materials, parts list
The list of materials (L/M, LM, LoM), bill of materials (B/M, BM, BoM), or parts list (P/L, PL) is a (usually tabular) list of the materials used to make a part, and/or the parts used to make an assembly. It may contain instructions for heat treatment, finishing, and other processes, for each part number. Sometimes such LoMs or PLs are separate documents from the drawing itself.
Traditional locations for the LoM/BoM are above the title block, or in a separate document.
Parameter tabulations
Some drawings call out dimensions with parameter names (that is, variables, such as "A", "B", "C"), then tabulate rows of parameter values for each part number.
Traditional locations for parameter tables, when such tables are used, are floating near the edges of the field of the drawing, either near the title block or elsewhere along the edges of the field.
Views and sections
Each view or section is a separate set of projections, occupying a contiguous portion of the field of the drawing. Usually views and sections are called out with cross-references to specific zones of the field.
Zones
Often a drawing is divided into zones by an alphanumeric grid, with zone labels along the margins, such as A, B, C, D up the sides and 1, 2, 3, 4, 5, 6 along the top and bottom.
Names of zones are thus, for example, A5, D2, or B1. This feature greatly eases discussion of, and reference to, particular areas of the drawing.
Abbreviations and symbols
As in many technical fields, a wide array of abbreviations and symbols have been developed in engineering drawing during the 20th and 21st centuries. For example, cold rolled steel is often abbreviated as CRS, and diameter is often abbreviated as DIA, D, or ⌀.
Most engineering drawings are language-independent—words are confined to the title block; symbols are used in place of words elsewhere.
With the advent of computer generated drawings for manufacturing and machining, many symbols have fallen out of common use. This poses a problem when attempting to interpret an older hand-drawn document that contains obscure elements that cannot be readily referenced in standard teaching text or control documents such as ASME and ANSI standards. For example, ASME Y14.5M 1994 excludes a few elements that convey critical information as contained in older US Navy drawings and aircraft manufacturing drawings of World War 2 vintage. Researching the intent and meaning of some symbols can prove difficult.
Example
Here is an example of an engineering drawing (an isometric view of the same object is shown above). The different line types are colored for clarity.
Black = object line and hatching
Red = hidden line
Blue = center line of piece or opening
Magenta = phantom line or cutting plane line
Sectional views are indicated by the direction of arrows, as in the example at right.
Legal instruments
An engineering drawing is a legal document (that is, a legal instrument), because it communicates all the needed information about "what is wanted" to the people who will expend resources turning the idea into a reality. It is thus a part of a contract; the purchase order and the drawing together, as well as any ancillary documents (engineering change orders [ECOs], called-out specs), constitute the contract. Thus, if the resulting product is wrong, the worker or manufacturer is protected from liability as long as they have faithfully executed the instructions conveyed by the drawing. If those instructions were wrong, it is the fault of the engineer. Because manufacturing and construction are typically very expensive processes (involving large amounts of capital and payroll), the question of liability for errors has legal implications.
Relationship to model-based definition (MBD/DPD)
For centuries, engineering drawing was the sole method of transferring information from design into manufacture. In recent decades another method has arisen, called model-based definition (MBD) or digital product definition (DPD). In MBD, the information captured by the CAD software app is fed automatically into a CAM app (computer-aided manufacturing), which (with or without postprocessing apps) creates code in other languages such as G-code to be executed by a CNC machine tool (computer numerical control), 3D printer, or (increasingly) a hybrid machine tool that uses both. Thus today it is often the case that the information travels from the mind of the designer into the manufactured component without having ever been codified by an engineering drawing. In MBD, the dataset, not a drawing, is the legal instrument. The term "technical data package" (TDP) is now used to refer to the complete package of information (in one medium or another) that communicates information from design to production (such as 3D-model datasets, engineering drawings, engineering change orders (ECOs), spec revisions and addenda, and so on).
It still takes CAD/CAM programmers, CNC setup workers, and CNC operators to do manufacturing, as well as other people such as quality assurance staff (inspectors) and logistics staff (for materials handling, shipping-and-receiving, and front office functions). These workers often use drawings in the course of their work that have been produced from the MBD dataset. When proper procedures are being followed, a clear chain of precedence is always documented, such that when a person looks at a drawing, they are told by a note thereon that this drawing is not the governing instrument (because the MBD dataset is). In these cases, the drawing is still a useful document, although legally it is classified as "for reference only", meaning that if any controversies or discrepancies arise, it is the MBD dataset, not the drawing, that governs.
See also
Architectural drawing
ASME AED-1 Aerospace and Advanced Engineering Drawings
B. Hick and Sons – Notable collection of early locomotive and steam engine drawings
CAD standards
Descriptive geometry
Document management system
Engineering drawing symbols
Geometric tolerance
ISO 128 Technical drawings – General principles of presentation
light plot
Linear scale
Patent drawing
Scale rulers: architect's scale and engineer's scale
Specification (technical standard)
Structural drawing
References
Further reading
Basant Agrawal and C M Agrawal (2013). Engineering Drawing. Second Edition, McGraw Hill Education India Pvt. Ltd., New Delhi.
Paige Davis, Karen Renee Juneau (2000). Engineering Drawing
David A. Madsen, Karen Schertz (2001). Engineering Drawing & Design. Delmar Thomson Learning.
Cecil Howard Jensen, Jay D. Helsel, Donald D. Voisinet Computer-aided engineering drawing using AutoCAD.
Warren Jacob Luzadder (1959). Fundamentals of engineering drawing for technical students and professional.
M.A. Parker, F. Pickup (1990) Engineering Drawing with Worked Examples.
Colin H. Simmons, Dennis E. Maguire Manual of engineering drawing. Elsevier.
Cecil Howard Jensen (2001). Interpreting Engineering Drawings.
B. Leighton Wellman (1948). Technical Descriptive Geometry. McGraw-Hill Book Company, Inc.
External links
Examples of cubes drawn in different projections
Animated presentation of drawing systems used in technical drawing (Flash animation)
Design Handbook: Engineering Drawing and Sketching, by MIT OpenCourseWare
Engineering concepts
Technical drawing
Infographics | Engineering drawing | [
"Engineering"
] | 8,553 | [
"Design engineering",
"Technical drawing",
"Civil engineering",
"nan"
] |
171,484 | https://en.wikipedia.org/wiki/Chinese%20social%20relations | Chinese social relations are typified by a reciprocal social network. Often social obligations within the network are characterized in familial terms. The individual link within the social network is known by guanxi (关系/關係) and the feeling within the link is known by the term ganqing (感情). An important concept within Chinese social relations is the concept of face, as in many other Asian cultures. A Buddhist-related concept is yuanfen (缘分/緣分).
As articulated in the sociological works of leading Chinese academic Fei Xiaotong, the Chinese—in contrast to other societies—tend to see social relations in terms of networks rather than boxes. Hence, people are perceived as being "near" or "far" rather than "in" or "out".
See also
Culture of China
Chinese tea culture
Kowtow
Red envelope
Chinese marriage
Sifu
References
Chinese culture
Culture of Hong Kong
Culture of Taiwan
Society of China
Reputation management
Information society
Social influence
Social information processing
Social networks
Social status | Chinese social relations | [
"Technology"
] | 208 | [
"Computing and society",
"Information society"
] |
171,488 | https://en.wikipedia.org/wiki/Ganqing | Ganqing () literally means "feel" (Gǎn, 感) "affection" (Qíng, 情) and together the term is often translated as "feelings" or "emotional attachment". Ganqing refers to a friendship-like feeling that develops between two people, groups, or business partners as their relationship deepens. Ganqing is an important concept in social relations in Chinese culture that has roots in Confucianism, and is a sub-dimension to the concept of guanxi (a person's relationship network). Developing good ganqing is a critical aspect of building guanxi relationships. Good ganqing means that two people have developed a rapport, while deep ganqing means there is a considerable emotional bond within the relationship. Ganqing can also refer as "love affair" in Chinese.
The term ganqing is often used in comments by the government of the People's Republic of China, for example statements that an action "hurts the feelings of the Chinese people" which some people interpret to mean harm the relationship with the Chinese government.
References
Chinese culture
Interpersonal relationships | Ganqing | [
"Biology"
] | 222 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
171,513 | https://en.wikipedia.org/wiki/Jetboat | A jetboat is a boat propelled by a jet of water ejected from the back of the craft. Unlike a powerboat or motorboat that uses an external propeller in the water below or behind the boat, a jetboat draws the water from under the boat through an intake and into a pump-jet inside the boat, before expelling it through a nozzle at the stern.
The modern jetboat was developed by New Zealand engineer Sir William Hamilton in the mid-1950s. His goal was a boat to run up the fast-flowing rivers of New Zealand that were too shallow for propellers.
Previous attempts at waterjet propulsion had very short lifetimes, generally due to the inefficient design of the units and the fact that they offered few advantages over conventional propellers. Unlike these previous waterjet developments, such as Campini's and the Hanley Hydrojet, Hamilton had a specific need for a propulsion system to operate in very shallow water, and the waterjet proved to be the ideal solution. The popularity of the jet unit and jetboat increased rapidly. It was found the waterjet was better than propellers for a wide range of vessel types, and waterjets are now used widely for many high-speed vessels including passenger ferries, rescue craft, patrol boats and offshore supply vessels.
Jetboats are highly manoeuvrable, and many can be reversed from full speed and brought to a stop within little more than their own length, in a manoeuvre known as a "crash stop". The well known Hamilton turn or "jet spin" is a high-speed manoeuvre where the boat's engine throttle is cut, the steering is turned sharply and the throttle opened again, causing the boat to spin quickly around with a large spray of water.
There is no engineering limit to the size of jetboats, though whether they are useful depends on the type of application. Classic prop-drives are generally more efficient and economical at low speeds, but as boat speed increases, the extra hull resistance generated by struts, rudders, shafts and so on means waterjets become the more efficient option. For very large propellers turning at slow speeds, such as in tugboats, the equivalent-size waterjet would be too big to be practical. The vast majority of waterjet units are therefore installed in high-speed vessels and in situations where shallow draught, maneuverability, and load flexibility are the main concerns.
The biggest jet-driven vessels are found in military use and the high-speed passenger and car ferry industry. South Africa's Valour-class frigates and the United States Littoral Combat Ships are among the biggest jet-propelled vessels. Even these vessels are capable of performing "crash stops".
Function
A conventional screw propeller works within the body of water below a boat hull, effectively "screwing" through the water to drive a vessel forward by generating a difference in pressure between the forward and rear surfaces of the propeller blades and by accelerating a mass of water rearward. By contrast, a waterjet unit delivers a high-pressure "push" from the stern of a vessel by accelerating a volume of water as it passes through a specialised pump mounted above the waterline inside the boat hull. Both methods yield thrust due to Newton's third law: every action has an equal and opposite reaction.
In a jetboat, the waterjet draws water from beneath the hull, where it passes through a series of impellers and stators – known as stages – which increase the velocity of the waterflow. Most modern jets are single-stage, while older waterjets may have as many as three stages. The tail section of the waterjet unit extends out through the transom of the hull, above the waterline. This jetstream exits the unit through a small nozzle at high velocity to push the boat forward. Steering is accomplished by moving this nozzle to either side, or less commonly, by small gates on either side that deflect the jetstream. Because the jetboat relies on the flow of water through the nozzle for control, it is not possible to steer a conventional jetboat without the engine running.
Unlike conventional propeller systems where the rotation of the propeller is reversed to provide astern movement, a waterjet will continue to pump normally while a deflector is lowered into the jetstream after it leaves the outlet nozzle. This deflector redirects thrust forces forward to provide reverse thrust. Most highly developed reverse deflectors redirect the jetstream down and to each side to prevent recirculation of the water through the jet again, which may cause aeration problems, or increase reverse thrust. Steering is still available with the reverse deflector lowered so the vessel will have full maneuverability. With the deflector lowered about halfway into the jetstream, forward and reverse thrust are equal so the boat maintains a fixed position, but steering is still available to allow the vessel to turn on the spot – something which is impossible with a conventional single propeller.
Unlike hydrofoils, which use underwater wings or struts to lift the vessel clear of the water, standard jetboats use a conventional planing hull to ride across the water surface, with only the rear portion of the hull displacing any water. With the majority of the hull clear of the water, there is reduced drag, greatly enhancing speed and maneuverability, so jetboats are normally operated at planing speed. At slower speeds with less water pumping through the jet unit, the jetboat will lose some steering control and maneuverability and will quickly slow down as the hull comes off its planing state and hull resistance is increased. However, loss of steering control at low speeds can be overcome by lowering the reverse deflector slightly and increasing throttle – so an operator may increase thrust and thus control without increasing boat speed itself. A conventional river-going jetboat will have a shallow-angled (but not flat-bottomed) hull to improve its high-speed cornering control and stability, while also allowing it to traverse very shallow water. At speed, jetboats can be safely operated in less than 7.5 cm (3 inches) of water.
One of the most significant breakthroughs, in the development of the waterjet, was to change the design so it expelled the jetstream above the water line, contrary to many people's intuition. Hamilton discovered early on that this greatly improved performance, compared to expelling below the waterline, while also providing a "clean" hull bottom (i.e. nothing protruding below the hull line) to allow the boat to skim through very shallow water. It makes no difference to the amount of thrust generated whether the outlet is above or below the waterline, but having it above the waterline reduces hull resistance and draught. Hamilton's first waterjet design had the outlet below the hull and actually in front of the inlet. This probably meant that disturbed water was entering the jet unit and reducing its performance, and the main reason why the change to above the waterline made such a difference.
Applications
Applications for jetboats include most activities where conventional propellers are also used, but in particular passenger ferry services, coastguard and police patrol, navy and military, adventure tourism (which is becoming increasingly popular around the globe), pilot boat operations, surf rescue, farming, fishing, exploration, pleasure boating, and other water activities where motor boats are used. Jetboats can also be raced for sport, both on rivers (World Champion Jet Boat Marathon held in Mexico, Canada, USA and New Zealand) and on specially designed racecourses known as sprint tracks. Recently there has been increasing use of jetboats in the form of rigid-hulled inflatable boats and as luxury yacht tenders. Many jetboats are small enough to be carried on a trailer and towed by car.
As jetboats have no external rotating parts they are safer for swimmers and marine life, though they can be struck by the hull. The safety benefit itself can sometimes be reason enough to use this type of propulsion.
In 1977, Sir Edmund Hillary led a jetboat expedition, titled "Ocean to Sky", from the mouth of the Ganges River to its source. One of the jetboats was sunk by a friend of Hillary.
Drawbacks
The fuel efficiency and performance of a jetboat can be affected by anything that disrupts the smooth flow of water through the jet unit. For example, a plastic bag sucked onto the jet unit's intake grill can have quite an adverse effect.
Another disadvantage of jetboats appears to be that they are more sensitive to engine/jet unit mismatch, compared with the problem of engine/propeller mismatch in propeller-driven craft. If the jet-propulsion unit is not well-matched to the engine performance, excessive fuel consumption and poor performance can result.
See also
Personal water craft
List of water sports
Jet sprint boat racing
References
External links
Hamilton waterjet history
Jet boat origins and history
Motorboats
Marine propulsion
New Zealand inventions | Jetboat | [
"Engineering"
] | 1,829 | [
"Marine propulsion",
"Marine engineering"
] |
171,526 | https://en.wikipedia.org/wiki/Subaru%20Telescope | is the telescope of the National Astronomical Observatory of Japan, located at the Mauna Kea Observatory on Hawaii. It is named after the open star cluster known in English as the Pleiades. It had the largest monolithic primary mirror in the world from its commissioning until the Large Binocular Telescope opened in 2005.
Overview
The Subaru Telescope is a Ritchey-Chretien reflecting telescope. Instruments can be mounted at a Cassegrain focus below the primary mirror; at either of two Nasmyth focal points in enclosures on the sides of the telescope mount, to which light can be directed with a tertiary mirror; or at the prime focus in lieu of a secondary mirror, an arrangement rare on large telescopes, to provide a wide field of view suited to deep wide-field surveys.
In 1984, the University of Tokyo formed an engineering working group to develop and study the concept of a telescope. In 1985, the astronomy committee of Japan's science council gave top priority to the development of a "Japan National Large Telescope" (JNLT), and in 1986, the University of Tokyo signed an agreement with the University of Hawaii to build the telescope in Hawaii. In 1988, the National Astronomical Observatory of Japan was formed through a reorganization of the University's Tokyo Astronomical Observatory, to oversee the JNLT and other large national astronomy projects.
Construction of the Subaru Telescope began in April 1991, and later that year, a public contest gave the telescope its official name, "Subaru Telescope". Construction was completed in 1998, and the first scientific images were taken in January 1999. In September 1999, Princess Sayako of Japan dedicated the telescope.
A number of state-of-the-art technologies were worked into the telescope design. For example, 261 computer-controlled actuators press the main mirror from underneath, which corrects for primary mirror distortion caused by changes in the telescope orientation. The telescope enclosure building is also shaped to improve the quality of astronomical images by minimizing the effects caused by atmospheric turbulence.
Subaru is one of the few state-of-the-art telescopes to have been used with the naked eye. For the dedication, an eyepiece was constructed so that Princess Sayako could look through it directly. It was enjoyed by the staff for a few nights until it was replaced with the much more sensitive working instruments.
Subaru is the primary tool in the search for Planet Nine. Its large field of view, 75 times that of the Keck telescopes, and strong light-gathering power are suited for deep wide-field sky surveys. The search, split between a research group led by Konstantin Batygin and Michael Brown and another led by Scott Sheppard and Chad Trujillo, is expected to take up to five years.
Accidents during construction
Two separate incidents claimed the lives of four workers during the construction of the telescope. On October 13, 1993, 42-year-old Paul F. Lawrence was fatally injured when a forklift tipped over onto him. On January 16, 1996, sparks from a welder ignited insulation which smoldered, generating noxious smoke that killed Marvin Arruda, 52, Ricky Del Rosario, 38, and Warren K. "Kip" Kaleo, 36, and sent twenty-six other workers to the hospital in Hilo. All four workers are memorialized by a plaque outside the base of the telescope dome and a sign posted temporarily each January along the Mauna Kea access road.
Mishap in 2011
On July 2, 2011, the telescope operator in Hilo noted an anomaly from the top unit of the telescope. Upon further examination, coolant from the top unit was found to have leaked over the primary mirror and other parts of the telescope.
Observation using Nasmyth foci resumed on July 22, and Cassegrain focus resumed on August 26.
Mishap in 2023
On September 15, 2023, an abnormal reading from a load sensor at one of the primary mirror's fixed points was observed during a maintenance operational test. Later, a part fell onto the primary mirror during repair work on the mirror cover, and science observation was suspended.
After the sensor was replaced and the damage to the primary mirror repaired, the telescope returned to observation on 3 March 2024.
Instruments
Several cameras and spectrographs can be mounted at Subaru Telescope's four focal points for observations in visible and infrared wavelengths.
Multi-Object Infrared Camera and Spectrograph (MOIRCS) Wide-field camera and spectrograph with the ability to take spectra of multiple objects simultaneously, mounts at the Cassegrain focus.
Infrared Camera and Spectrograph (IRCS) Used in conjunction with the new 188-element adaptive optics unit (AO188), mounted at the infrared Nasmyth focus.
Cooled Mid Infrared Camera and Spectrometer (COMICS) Mid-infrared camera and spectrometer with the ability to study cool interstellar dust, mounts on the Cassegrain focus. Decommissioned in 2020.
Faint Object Camera And Spectrograph (FOCAS) Visible-light camera and spectrograph with the ability to take spectra of up to 100 objects simultaneously, mounts on the Cassegrain focus.
Subaru Prime Focus Camera (Suprime-Cam) 80-megapixel wide-field visible-light camera, mounts at the prime focus. Superseded by the Hyper Suprime-Cam in 2012, decommissioned in May 2017.
High Dispersion Spectrograph (HDS) Visible-light spectrograph mounted at the optical Nasmyth focus.
Fiber Multi Object Spectrograph (FMOS) Infrared spectrograph using movable fiber optics to take spectra of up to 400 objects simultaneously. Mounts at the prime focus.
High-Contrast Coronographic Imager for Adaptive Optics (HiCIAO) Infrared camera for hunting planets around other stars. Used with AO188, mounted at the infrared Nasmyth focus.
Hyper Suprime-Cam (HSC) This 900-megapixel ultra-wide-field (1.5° field of view) camera saw first light in 2012, and was offered for open-use in 2014. The extremely large wide-field correction optics (a seven-element lens with some elements up to a meter in diameter) was manufactured by Canon and delivered March 29, 2011. It will be used for surveys of weak lensing to determine dark matter distribution.
Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument is a high-contrast imaging system for directly imaging exoplanets. The coronagraph uses a Phase Induced Amplitude Apodization (PIAA) design, which means it will be able to image planets closer to their stars than conventional Lyot-type coronagraph designs. For example, at a distance of 100 pc, the PIAA coronagraph on SCExAO would be able to image from 4 AU outwards, while the Gemini Planet Imager and VLT-SPHERE image from 12 AU outwards. The system also has several other types of coronagraph: Vortex, Four-Quadrant Phase Mask and 8-Octant Phase Mask versions, and a shaped pupil coronagraph. Phase I of construction is complete, and phase II construction was to be complete by the end of 2014 for science operations in 2015. SCExAO will initially use the HiCIAO camera, but this will be replaced by CHARIS, an integral field spectrograph, around 2016.
See also
Adaptive optics, a technique of compensating for aberrations in optical systems
Apodization, a signal processing technique
Coronagraph, an astronomical device for masking the direct light of a star
List of largest optical reflecting telescopes
Pleiades, the English name of the asterism for which the Subaru Telescope is named
Yūko Kakazu
References
External links
National Astronomical Observatory of Japan
Micro-lens for Subaru Telescope
Buildings and structures completed in 1998
1998 establishments in Hawaii
Astronomical observatories in Hawaii
Buildings and structures in Hawaii County, Hawaii
Exoplanet search projects
Telescopes | Subaru Telescope | [
"Astronomy"
] | 1,629 | [
"Astronomy projects",
"Exoplanet search projects",
"Telescopes",
"Astronomical instruments"
] |
171,552 | https://en.wikipedia.org/wiki/Collision%20detection | Collision detection is the computational problem of detecting an intersection of two or more objects in virtual space. More precisely, it deals with the questions of if, when and where two or more objects intersect. Collision detection is a classic problem of computational geometry with applications in computer graphics, physical simulation, video games, robotics (including autonomous driving) and computational physics. Collision detection algorithms can be divided into operating on 2D or 3D spatial objects.
Overview
Collision detection is closely linked to calculating the distance between objects, as two objects (or more) intersect when the distance between them reaches zero or even becomes negative. Negative distance indicates that one object has penetrated another. Performing collision detection requires more context than just the distance between the objects.
Accurately identifying the points of contact on both objects' surfaces is also essential for the computation of a physically accurate collision response. The complexity of this task increases with the level of detail in the objects' representations: the more intricate the model, the greater the computational cost.
Collision detection frequently involves dynamic objects, adding a temporal dimension to distance calculations. Instead of simply measuring distance between static objects, collision detection algorithms often aim to determine whether the objects’ motion will bring them to a point in time when their distance is zero—an operation that adds significant computational overhead.
In collision detection involving multiple objects, a naive approach would require detecting collisions for all pairwise combinations of objects. As the number of objects increases, the number of required comparisons grows rapidly: for $n$ objects, $n(n-1)/2$ intersection tests are needed with a naive approach. This quadratic growth makes such an approach computationally expensive as $n$ increases.
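To make the quadratic cost concrete, here is a minimal sketch in Python; the sphere representation and the `spheres_overlap` test are illustrative stand-ins for whatever per-pair test an engine actually uses:

```python
import itertools
import math

def spheres_overlap(a, b):
    """Two spheres overlap when the distance between their
    centers is at most the sum of their radii."""
    (ax, ay, az, ar), (bx, by, bz, br) = a, b
    dist = math.sqrt((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2)
    return dist <= ar + br

def naive_broad_phase(objects):
    """Check every unordered pair: n * (n - 1) / 2 tests."""
    return [(i, j)
            for (i, a), (j, b) in itertools.combinations(enumerate(objects), 2)
            if spheres_overlap(a, b)]

# Three spheres as (x, y, z, radius); only the first two overlap.
print(naive_broad_phase([(0, 0, 0, 1), (1.5, 0, 0, 1), (10, 0, 0, 1)]))
# -> [(0, 1)]
```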
Due to the complexity mentioned above, collision detection is a computationally intensive process. Nevertheless, it is essential for interactive applications like video games, robotics, and real-time physics engines. To manage these computational demands, extensive efforts have gone into optimizing collision detection algorithms.
A commonly used approach towards accelerating the required computations is to divide the process into two phases: the broad phase and the narrow phase. The broad phase aims to answer the question of whether objects might collide, using a conservative but efficient approach to rule out pairs that clearly do not intersect, thus avoiding unnecessary calculations.
Objects that cannot be definitively separated in the broad phase are passed to the narrow phase. Here, more precise algorithms determine whether these objects actually intersect. If they do, the narrow phase often calculates the exact time and location of the intersection.
Broad phase
This phase aims at quickly finding objects or parts of objects for which it can be quickly determined that no further collision test is needed. A useful property of such an approach is that it is output sensitive. In the context of collision detection this means that the time complexity of the collision detection is proportional to the number of objects that are close to each other. An early example of that is I-COLLIDE, where the number of required narrow-phase collision tests was $O(n + m)$, where $n$ is the number of objects and $m$ is the number of objects in close proximity. This is a significant improvement over the quadratic complexity of the naive approach.
Spatial partitioning
Several approaches can be grouped under the spatial partitioning umbrella, which includes octrees (for 3D), quadtrees (for 2D), binary space partitioning (BSP trees) and other, similar approaches. If one splits space into a number of simple cells, and if two objects can be shown not to be in the same cell, then they need not be checked for intersection. Dynamic scenes and deformable objects require updating the partitioning, which can add overhead.
Bounding volume hierarchy
A bounding volume hierarchy (BVH) is a tree structure over a set of bounding volumes. Collision is determined by doing a tree traversal starting from the root. If the bounding volume of the root does not intersect the object of interest, the traversal can be stopped. If, however, there is an intersection, the traversal proceeds and checks each branch for intersection. Branches for which there is no intersection with the bounding volume can be culled from further intersection tests, so multiple objects can be ruled out at once. A BVH can be used with deformable objects such as cloth or soft bodies, but the volume hierarchy has to be adjusted as the shape deforms. For deformable objects we also need to be concerned about self-collisions or self-intersections, and a BVH can be used for that end as well. Collision between two objects is computed by testing the bounding volumes of the roots of their trees for intersection and then recursing into the pairs of sub-trees that intersect. Exact collisions between the actual objects, or their parts (often triangles of a triangle mesh), need to be computed only between intersecting leaves. The same approach works for pairwise collisions and self-collisions.
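A minimal sketch of such a traversal in Python, assuming spheres as the bounding volumes; the node layout and names are illustrative, not any particular engine's API:

```python
import math
from dataclasses import dataclass, field

@dataclass
class BVHNode:
    center: tuple              # bounding-sphere center (x, y, z)
    radius: float              # bounding-sphere radius
    leaf_id: int = None        # primitive id if this node is a leaf
    children: list = field(default_factory=list)

def overlaps(a, b):
    return math.dist(a.center, b.center) <= a.radius + b.radius

def bvh_collide(a, b, out):
    """Collect pairs of leaves whose bounding spheres overlap.
    A subtree whose volume does not overlap is culled in one test."""
    if not overlaps(a, b):
        return
    if a.leaf_id is not None and b.leaf_id is not None:
        out.append((a.leaf_id, b.leaf_id))   # candidates for the narrow phase
    elif a.leaf_id is None:
        for child in a.children:
            bvh_collide(child, b, out)
    else:
        for child in b.children:
            bvh_collide(a, child, out)

left  = BVHNode((-1, 0, 0), 1.0, leaf_id=0)
right = BVHNode((1, 0, 0), 1.0, leaf_id=1)
root  = BVHNode((0, 0, 0), 3.0, children=[left, right])
probe = BVHNode((1.5, 0, 0), 1.0, leaf_id=99)
hits = []
bvh_collide(root, probe, hits)
print(hits)  # -> [(1, 99)]; the left subtree was culled by a single sphere test
```

Descending into the larger of the two nodes first is a common refinement not shown here.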
Exploiting temporal coherence
During the broad-phase, when the objects in the world move or deform, the data-structures used to cull collisions have to be updated. In cases where the changes between two frames or time-steps are small and the objects can be approximated well with axis-aligned bounding boxes, the sweep and prune algorithm can be a suitable approach.
Several key observations make the implementation efficient: two bounding boxes intersect if, and only if, there is overlap along all three axes; overlap can be determined, for each axis separately, by sorting the intervals for all the boxes; and lastly, between two frames the updates are typically small (making sorting algorithms optimized for almost-sorted lists suitable for this application). The algorithm keeps track of currently intersecting boxes, and as objects move, re-sorting the intervals helps keep track of the status.
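A single-axis sketch of the idea in Python; a real implementation keeps the sorted list between frames and updates it incrementally, whereas this version simply re-scans:

```python
def sweep_and_prune(intervals):
    """intervals: list of (min_x, max_x) extents, one per object.
    Returns index pairs whose x-extents overlap; only those pairs
    need the remaining axes (and then the narrow phase) checked."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    active, pairs = [], []
    for i in order:
        lo, hi = intervals[i]
        # Drop objects whose interval ended before this one starts.
        active = [j for j in active if intervals[j][1] >= lo]
        pairs.extend((min(i, j), max(i, j)) for j in active)
        active.append(i)
    return pairs

print(sweep_and_prune([(0, 2), (1, 3), (5, 6)]))  # -> [(0, 1)]
```

Because the list from the previous frame is almost sorted, an insertion sort makes the per-frame update nearly linear in practice.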
Pairwise pruning
Once we've selected a pair of physical bodies for further investigation, we need to check for collisions more carefully. However, in many applications, individual objects (if they are not too deformable) are described by a set of smaller primitives, mainly triangles. So now we have two sets of triangles, $S$ and $T$ (for simplicity, we will assume that each set has the same number $n$ of triangles).
The obvious thing to do is to check all triangles in $S$ against all triangles in $T$ for collisions, but this involves $n^2$ comparisons, which is highly inefficient. If possible, it is desirable to use a pruning algorithm to reduce the number of pairs of triangles we need to check.
The most widely used family of algorithms is known as the hierarchical bounding volumes method. As a preprocessing step, for each object (in our example, and ) we will calculate a hierarchy of bounding volumes. Then, at each time step, when we need to check for collisions between and , the hierarchical bounding volumes are used to reduce the number of pairs of triangles under consideration. For simplicity, we will give an example using bounding spheres, although it has been noted that spheres are undesirable in many cases.
If $S$ is a set of triangles, we can pre-calculate a bounding sphere $B(S)$. There are many ways of choosing $B(S)$; we only assume that $B(S)$ is a sphere that completely contains $S$ and is as small as possible.
Ahead of time, we can compute $B(S)$ and $B(T)$. Clearly, if these two spheres do not intersect (and that is very easy to test), then neither do $S$ and $T$. This is not much better than an n-body pruning algorithm, however.
If $S$ is a set of triangles, then we can split it into two halves $L(S)$ and $R(S)$. We can do this to $S$ and $T$, and we can calculate (ahead of time) the bounding spheres $B(L(S))$, $B(R(S))$, $B(L(T))$ and $B(R(T))$. The hope here is that these bounding spheres are much smaller than $B(S)$ and $B(T)$. And if, for instance, $B(L(S))$ and $B(L(T))$ do not intersect, then there is no sense in checking any triangle in $L(S)$ against any triangle in $L(T)$.
As a precomputation, we can take each physical body (represented by a set of triangles) and recursively decompose it into a binary tree, where each node represents a set of triangles $S$, and its two children represent $L(S)$ and $R(S)$. At each node in the tree, we can pre-compute the bounding sphere $B(S)$.
When the time comes for testing a pair of objects for collision, their bounding sphere trees can be used to eliminate many pairs of triangles.
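A sketch of the precomputation in Python. For brevity it uses a loose bounding sphere (mean vertex as center) rather than the smallest enclosing one, and splits each set by centroid along the axis of largest spread; all of these choices are illustrative:

```python
import math

def bounding_sphere(triangles):
    """A loose bounding sphere: centered on the mean vertex, with radius
    reaching the farthest vertex (valid, though not minimal)."""
    pts = [p for tri in triangles for p in tri]
    center = tuple(sum(coord) / len(pts) for coord in zip(*pts))
    radius = max(math.dist(center, p) for p in pts)
    return center, radius

def build_sphere_tree(triangles, leaf_size=4):
    """Recursively split the set into two halves and store B(S) per node."""
    node = {"sphere": bounding_sphere(triangles), "tris": None, "kids": None}
    if len(triangles) <= leaf_size:
        node["tris"] = triangles
        return node
    # Split by triangle centroid along the axis of largest spread.
    centroids = [tuple(sum(c) / 3.0 for c in zip(*tri)) for tri in triangles]
    spreads = [max(c[k] for c in centroids) - min(c[k] for c in centroids)
               for k in range(3)]
    axis = spreads.index(max(spreads))
    order = sorted(range(len(triangles)), key=lambda i: centroids[i][axis])
    half = len(order) // 2
    node["kids"] = [build_sphere_tree([triangles[i] for i in order[:half]]),
                    build_sphere_tree([triangles[i] for i in order[half:]])]
    return node
```

At query time, the pairwise test described above walks two such trees simultaneously, recursing only into pairs of children whose spheres intersect.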
Many variants of the algorithms are obtained by choosing something other than a sphere for $B(S)$. If one chooses axis-aligned bounding boxes, one gets AABBTrees. Oriented bounding box trees are called OBBTrees. Some trees are easier to update if the underlying object changes. Some trees can accommodate higher-order primitives such as splines instead of simple triangles.
Narrow phase
Objects that cannot be definitively separated in the broad phase are passed to the narrow phase. In this phase, the objects under consideration are relatively close to each other. Still, cheaper tests that can quickly rule out the need for a full intersection check are employed first; this step is sometimes referred to as mid-phase. Once these tests pass (i.e. the pair of objects may well be colliding), more precise algorithms determine whether these objects actually intersect. If they do, the narrow phase often calculates the exact time and location of the intersection.
Bounding volumes
A quick way to potentially avoid a needless expensive computation is to check if the bounding volumes enclosing the two objects intersect. If they don't, there is no need to check the actual objects. However, if the bounding volumes intersect, the more expensive computation has to be performed. In order for the bounding-volume test to add value, two properties need to be balanced: a) the cost of intersecting the bounding volumes needs to be low and b) the bounding volumes need to be tight enough so that the number of "false positive" intersections will be low. A false positive intersection in this case means that the bounding volumes intersect but the actual objects do not. Different bounding volume types offer different trade-offs for these properties.
Axis-aligned bounding boxes (AABB) and cuboids are popular due to their simplicity and quick intersection tests. Bounding volumes such as oriented bounding boxes (OBB), k-DOPs and convex hulls offer a tighter approximation of the enclosed shape at the expense of a more elaborate intersection test.
Bounding volumes are typically used in the early (pruning) stage of collision detection, so that only objects with overlapping bounding volumes need be compared in detail. Computing collision or overlap between bounding volumes involves additional computations, so for it to be beneficial we need the bounding volume to be relatively tight and the computational overhead of the intersection tests to be low.
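For example, the AABB test reduces to an interval-overlap check on each axis, which is what makes it so cheap; a Python sketch:

```python
def aabb_overlap(a, b):
    """a, b: ((min_x, min_y, min_z), (max_x, max_y, max_z)).
    Boxes overlap only if their extents overlap on all three axes."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[k] <= bmax[k] and bmin[k] <= amax[k] for k in range(3))

box1 = ((0, 0, 0), (2, 2, 2))
box2 = ((1, 1, 1), (3, 3, 3))
box3 = ((5, 5, 5), (6, 6, 6))
print(aabb_overlap(box1, box2), aabb_overlap(box1, box3))  # True False
```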
Exact pairwise collision detection
Objects for which pruning approaches could not rule out the possibility of a collision have to undergo an exact collision detection computation.
Collision detection between convex objects
According to the separating planes theorem, for any two disjoint convex objects, there exists a plane so that one object lies completely on one side of that plane, and the other object lies on the opposite side of that plane. This property allows the development of efficient collision detection algorithms between convex objects. Several algorithms are available for finding the closest points on the surfaces of two convex polyhedral objects and determining collision. Early work by Ming C. Lin used a variation on the simplex algorithm from linear programming; the Gilbert-Johnson-Keerthi distance algorithm is another such example. These algorithms approach constant time when applied repeatedly to pairs of stationary or slow-moving objects, and every step is initialized from the previous collision check.
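The theorem itself is easy to illustrate with a 2D separating-axis test for convex polygons: if the projections of the two polygons onto some edge normal do not overlap, that normal defines a separating plane. This didactic Python sketch is not the Lin or Gilbert-Johnson-Keerthi algorithm mentioned above:

```python
def project(poly, axis):
    """Project all vertices onto an axis; return the covered interval."""
    dots = [x * axis[0] + y * axis[1] for (x, y) in poly]
    return min(dots), max(dots)

def convex_polygons_intersect(p, q):
    """Separating-axis test: convex polygons are disjoint iff their
    projections onto some edge normal of either polygon do not overlap."""
    for poly in (p, q):
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            axis = (y1 - y2, x2 - x1)        # a normal of this edge
            amin, amax = project(p, axis)
            bmin, bmax = project(q, axis)
            if amax < bmin or bmax < amin:   # a separating plane exists
                return False
    return True

tri = [(0, 0), (2, 0), (0, 2)]
sq  = [(0.5, 0.5), (3, 0.5), (3, 3), (0.5, 3)]
print(convex_polygons_intersect(tri, sq))  # True: they overlap near (0.5, 0.5)
```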
The result of all this algorithmic work is that collision detection can be done efficiently for thousands of moving objects in real time on typical personal computers and game consoles.
A priori pruning
Where most of the objects involved are fixed, as is typical of video games, a priori methods using precomputation can be used to speed up execution.
Pruning is also desirable here, both n-body pruning and pairwise pruning, but the algorithms must take time and the types of motions used in the underlying physical system into consideration.
When it comes to the exact pairwise collision detection, this is highly trajectory dependent, and one almost has to use a numerical root-finding algorithm to compute the instant of impact.
As an example, consider two triangles moving in time, $\{v_1(t), v_2(t), v_3(t)\}$ and $\{v_4(t), v_5(t), v_6(t)\}$. At any point in time, the two triangles can be checked for intersection using the twenty planes spanned by triples of their six vertices. However, we can do better, since these twenty planes can all be tracked in time. If $P(u, v, w)$ is the plane going through the points $u$, $v$, $w$, then there are twenty planes $P(v_i(t), v_j(t), v_k(t))$ to track. Each plane needs to be tracked against the three vertices not defining it, which gives sixty values to track. Using a root finder on these sixty functions produces the exact collision times for the two given triangles and the two given trajectories. We note here that if the trajectories of the vertices are assumed to be linear polynomials in $t$ then the final sixty functions are in fact cubic polynomials, and in this exceptional case, it is possible to locate the exact collision time using the formula for the roots of the cubic. Some numerical analysts suggest that using the formula for the roots of the cubic is not as numerically stable as using a root finder for polynomials.
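A simpler analog with the same structure: two spheres moving with constant velocities touch when the distance between centers equals the sum of the radii, which reduces to a quadratic rather than a cubic in $t$. A Python sketch under that linear-trajectory assumption:

```python
import math

def sphere_impact_time(p1, v1, r1, p2, v2, r2):
    """Centers move as p + v*t; impact when |dp + dv*t| = r1 + r2,
    i.e. (dv.dv) t^2 + 2 (dp.dv) t + (dp.dp - R^2) = 0 with R = r1 + r2."""
    dp = [a - b for a, b in zip(p1, p2)]
    dv = [a - b for a, b in zip(v1, v2)]
    R = r1 + r2
    a = sum(x * x for x in dv)
    b = 2 * sum(p * v for p, v in zip(dp, dv))
    c = sum(x * x for x in dp) - R * R
    if a == 0:                       # no relative motion
        return 0.0 if c <= 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                  # the spheres never touch
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None     # earliest future contact, if any

# Two unit spheres approaching head-on along the x-axis:
print(sphere_impact_time((0, 0, 0), (1, 0, 0), 1.0,
                         (10, 0, 0), (-1, 0, 0), 1.0))  # -> 4.0
```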
Triangle centroid segments
A triangle mesh object is commonly used in 3D body modeling. Normally the collision function is a triangle-to-triangle intersection test or a test against a bounding shape associated with the mesh. A triangle centroid is a center-of-mass location, such that the triangle would balance on a pencil tip placed there. The simulation need only add a centroid dimension to the physics parameters. Given centroid points in both object and target, it is possible to define the line segment connecting these two points.
The position vector of the centroid of a triangle is the average of the position vectors of its vertices. So if its vertices have Cartesian coordinates $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$ and $(x_3, y_3, z_3)$, then the centroid is $\left( \frac{x_1 + x_2 + x_3}{3}, \frac{y_1 + y_2 + y_3}{3}, \frac{z_1 + z_2 + z_3}{3} \right)$.
The length of the line segment between two 3D points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ is given by $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$.
Here the length of the segment is an adjustable "hit" criterion: as the objects approach, the length decreases toward the threshold value, at which point a sphere test becomes the effective geometry test. A sphere centered at the centroid can be sized to encompass all the triangle's vertices.
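A direct Python transcription of this centroid test; the threshold value is an application-specific tuning assumption:

```python
import math

def centroid(tri):
    """Average of the three vertex position vectors."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    return ((x1 + x2 + x3) / 3, (y1 + y2 + y3) / 3, (z1 + z2 + z3) / 3)

def centroid_hit(tri_a, tri_b, threshold):
    """'Hit' when the segment joining the two centroids is shorter
    than the adjustable threshold."""
    return math.dist(centroid(tri_a), centroid(tri_b)) < threshold

a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b = [(0.5, 0.5, 0), (1.5, 0.5, 0), (0.5, 1.5, 0)]
print(centroid_hit(a, b, threshold=1.0))  # True: centroids are ~0.71 apart
```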
Usage
Collision detection in computer simulation
Physical simulators differ in the way they react to a collision. Some use the softness of the material to calculate a force, which will resolve the collision in the following time steps as it would in reality. This is very CPU intensive for materials of low softness. Some simulators estimate the time of collision by linear interpolation, roll back the simulation, and calculate the collision by the more abstract methods of conservation laws.
Some iterate the linear interpolation (Newton's method) to calculate the time of collision with a much higher precision than the rest of the simulation. Collision detection utilizes time coherence to allow even finer time steps without much increasing CPU demand, such as in air traffic control.
After an inelastic collision, special states of sliding and resting can occur and, for example, the Open Dynamics Engine uses constraints to simulate them. Constraints avoid inertia and thus instability. Implementation of rest by means of a scene graph avoids drift.
In other words, physical simulators usually function one of two ways: where the collision is detected a posteriori (after the collision occurs) or a priori (before the collision occurs). In addition to the a posteriori and a priori distinction, almost all modern collision detection algorithms are broken into a hierarchy of algorithms. Often the terms "discrete" and "continuous" are used rather than a posteriori and a priori.
A posteriori (discrete) versus a priori (continuous)
In the a posteriori case, the physical simulation is advanced by a small step, then checked to see if any objects are intersecting or visibly considered intersecting. At each simulation step, a list of all intersecting bodies is created, and the positions and trajectories of these objects are "fixed" to account for the collision. This method is called a posteriori because it typically misses the actual instant of collision, and only catches the collision after it has actually happened.
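A deliberately crude 1D caricature of this loop in Python: advance first, then detect overlaps and "fix" them after the fact (equal masses and an elastic velocity swap are simplifying assumptions):

```python
def simulate(balls, dt, steps):
    """A posteriori stepping in 1D: advance the state, then detect
    and repair any overlaps after they have already happened."""
    for _ in range(steps):
        for b in balls:
            b["x"] += b["v"] * dt                    # advance first
        for i in range(len(balls)):
            for j in range(i + 1, len(balls)):
                a, c = balls[i], balls[j]
                overlap = a["r"] + c["r"] - abs(a["x"] - c["x"])
                if overlap > 0:                      # collision caught late
                    # Fix: push the pair apart and swap velocities,
                    # accepting some positional error.
                    sign = 1 if a["x"] < c["x"] else -1
                    a["x"] -= sign * overlap / 2
                    c["x"] += sign * overlap / 2
                    a["v"], c["v"] = c["v"], a["v"]
    return balls

balls = [{"x": 0.0, "v": 1.0, "r": 0.5}, {"x": 3.0, "v": -1.0, "r": 0.5}]
print(simulate(balls, dt=0.25, steps=8))
```

With a larger time step the two balls could pass each other between checks without ever overlapping, which is the tunneling failure discussed below.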
In the a priori methods, there is a collision detection algorithm which will be able to predict very precisely the trajectories of the physical bodies. The instants of collision are calculated with high precision, and the physical bodies never actually interpenetrate. This is called a priori because the collision detection algorithm calculates the instants of collision before it updates the configuration of the physical bodies.
The main benefits of the a posteriori methods are as follows. In this case, the collision detection algorithm need not be aware of the myriad of physical variables; a simple list of physical bodies is fed to the algorithm, and the program returns a list of intersecting bodies. The collision detection algorithm doesn't need to understand friction, elastic collisions, or worse, nonelastic collisions and deformable bodies. In addition, the a posteriori algorithms are in effect one dimension simpler than the a priori algorithms. An a priori algorithm must deal with the time variable, which is absent from the a posteriori problem.
On the other hand, a posteriori algorithms cause problems in the "fixing" step, where intersections (which aren't physically correct) need to be corrected. Moreover, if the discrete step is too large, the collision could go undetected, resulting in an object which passes through another if it is sufficiently fast or small.
The benefits of the a priori algorithms are increased fidelity and stability. It is difficult (but not completely impossible) to separate the physical simulation from the collision detection algorithm. However, in all but the simplest cases, the problem of determining ahead of time when two bodies will collide (given some initial data) has no closed form solution—a numerical root finder is usually involved.
Some objects are in resting contact, that is, in collision, but neither bouncing off, nor interpenetrating, such as a vase resting on a table. In all cases, resting contact requires special treatment: If two objects collide (a posteriori) or slide (a priori) and their relative motion is below a threshold, friction becomes stiction and both objects are arranged in the same branch of the scene graph.
Video games
Video games have to split their very limited computing time between several tasks. Despite this resource limit, and the use of relatively primitive collision detection algorithms, programmers have been able to create believable, if inexact, systems for use in games.
For a long time, video games had a very limited number of objects to treat, and so checking all pairs was not a problem. In two-dimensional games, in some cases, the hardware was able to efficiently detect and report overlapping pixels between sprites on the screen. In other cases, simply tiling the screen and binding each sprite into the tiles it overlaps provides sufficient pruning, and for pairwise checks, bounding rectangles or circles called hitboxes are used and deemed sufficiently accurate.
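The pairwise test behind such hitboxes is only a few comparisons; a sketch for axis-aligned bounding rectangles (the tuple layout is an assumption made for the example):

```python
def hitboxes_overlap(a, b):
    """Rectangles given as (x_min, y_min, x_max, y_max). They overlap
    unless one lies entirely to one side of the other."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
```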
Three-dimensional games have used spatial partitioning methods for n-body pruning, and for a long time used one or a few spheres per actual 3D object for pairwise checks. Exact checks are very rare, except in games attempting to simulate reality closely. Even then, exact checks are not necessarily used in all cases.
Because games do not need to mimic actual physics, stability is not as much of an issue. Almost all games use a posteriori collision detection, and collisions are often resolved using very simple rules. For instance, if a character becomes embedded in a wall, they might be simply moved back to their last known good location. Some games will calculate the distance the character can move before getting embedded into a wall, and only allow them to move that far.
In many cases for video games, approximating the characters by a point is sufficient for the purpose of collision detection with the environment. In this case, binary space partitioning trees provide a viable, efficient and simple algorithm for checking if a point is embedded in the scenery or not. Such a data structure can also be used to handle "resting position" situation gracefully when a character is running along the ground. Collisions between characters, and collisions with projectiles and hazards, are treated separately.
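A point query against a BSP tree is a single walk from the root to a leaf; the sketch below assumes an illustrative node layout (real engines differ in the details):

```python
def point_is_solid(node, point):
    """Classify a point against a BSP tree. Leaves are booleans
    (True = solid scenery, False = empty space); internal nodes are
    (normal, offset, front_child, back_child) for the plane n . x = d."""
    while not isinstance(node, bool):
        normal, offset, front, back = node
        side = sum(n * p for n, p in zip(normal, point)) - offset
        node = front if side >= 0.0 else back
    return node
```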
A robust simulator is one that will react to any input in a reasonable way. For instance, if we imagine a high speed racecar video game, from one simulation step to the next, it is conceivable that the cars would advance a substantial distance along the race track. If there is a thin obstacle on the track (such as a brick wall), it is quite possible that the car will completely leap over it, and this is very undesirable. In other instances, the "fixing" that posteriori algorithms require isn't implemented correctly, resulting in bugs that can trap characters in walls or allow them to pass through them and fall into an endless void where there may or may not be a deadly bottomless pit, sometimes referred to as "black hell", "blue hell", or "green hell", depending on the predominant color. These are the hallmarks of a failing collision detection and physical simulation system. Big Rigs: Over the Road Racing is an infamous example of a game with a failing or possibly missing collision detection system.
Hitbox
A hitbox is an invisible shape commonly used in video games for real-time collision detection; it is a type of bounding box. It is often a rectangle (in 2D games) or cuboid (in 3D) that is attached to and follows a point on a visible object (such as a model or a sprite). Circular or spheroidal shapes are also common, though they are still most often called "boxes". It is common for animated objects to have hitboxes attached to each moving part to ensure accuracy during motion.
Hitboxes are used to detect "one-way" collisions such as a character being hit by a punch or a bullet. They are unsuitable for the detection of collisions with feedback (e.g. bumping into a wall) due to the difficulty experienced by both humans and AI in managing a hitbox's ever-changing locations; these sorts of collisions are typically handled with much simpler axis-aligned bounding boxes instead. Players may use the term "hitbox" to refer to these types of interactions regardless.
A hurtbox is a hitbox used to detect incoming sources of damage. In this context, the term hitbox is typically reserved for those which deal damage. For example, an attack may only land if the hitbox around an attacker's punch connects with one of the opponent's hurtboxes on their body, while opposing hitboxes colliding may result in the players trading or cancelling blows, and opposing hurtboxes do not interact with each other. The term is not standardized across the industry; some games reverse their definitions of hitbox and hurtbox, while others only use "hitbox" for both sides.
See also
Collision response
Hit-testing
Bounding volume
Game physics
Gilbert–Johnson–Keerthi distance algorithm
Minkowski Portal Refinement
Physics engine
Lubachevsky–Stillinger algorithm
Ragdoll physics
References
External links
University of North Carolina at Chapel Hill collision detection research website
Prof. Steven Cameron (Oxford University) web site on collision detection
How to Avoid a Collision by George Beck, Wolfram Demonstrations Project.
Bounding boxes and their usage
Separating Axis Theorem
Unity 3D Collision
Godot Physics Collision
Computational geometry
Computer graphics
Video game development
Computer physics engines
Robotics engineering | Collision detection | [
"Mathematics",
"Technology",
"Engineering"
] | 4,792 | [
"Computational geometry",
"Computational mathematics",
"Computer engineering",
"Robotics engineering"
] |
171,560 | https://en.wikipedia.org/wiki/Metropolis%20light%20transport | Metropolis light transport (MLT) is a global illumination application of a Monte Carlo method called the Metropolis–Hastings algorithm to the rendering equation for generating images from detailed physical descriptions of three-dimensional scenes.
The procedure constructs paths from the eye to a light source using bidirectional path tracing, then constructs slight modifications to the path. Some careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. In short, the algorithm generates a path and stores the path's 'nodes' in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new 'nodes' to add and whether or not these new nodes will actually create a new path.
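The accept/reject rule at the heart of this is ordinary Metropolis sampling: a proposed mutation is kept with probability proportional to the ratio of the mutated path's image contribution to the current one's. Below is a stripped-down sketch with scalar "paths" and a stand-in brightness function (real MLT mutates whole light paths and uses carefully designed mutation strategies):

```python
import random

def metropolis(brightness, mutate, x0, n_samples):
    """Generic Metropolis sampler: visits states with frequency proportional
    to brightness(x), assuming a symmetric mutation proposal."""
    x, fx = x0, brightness(x0)
    samples = []
    for _ in range(n_samples):
        y = mutate(x)                      # propose a slight modification
        fy = brightness(y)
        if fx == 0.0 or random.random() < min(1.0, fy / fx):
            x, fx = y, fy                  # accept the mutated path
        samples.append(x)                  # record the current state either way
    return samples

# Toy usage: sample positions on a 1-D "image" whose brightness peaks at 0.3.
samples = metropolis(
    brightness=lambda x: max(0.0, 1.0 - 5.0 * abs(x - 0.3)),
    mutate=lambda x: min(1.0, max(0.0, x + random.uniform(-0.05, 0.05))),
    x0=0.3,
    n_samples=10_000,
)
```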
Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing.
Energy Redistribution Path Tracing (ERPT) uses Metropolis sampling-like mutation strategies instead of an intermediate probability distribution step.
See also
Nicholas Metropolis – The physicist after whom the algorithm is named
Renderers using MLT:
Arion – A commercial unbiased renderer based on path tracing and providing an MLT sampler
Nvidia Iray (external link) – An unbiased renderer that has an option for MLT
Kerkythea – A free unbiased 3D renderer that uses MLT
LuxCoreRender – An open source unbiased renderer that uses MLT
Mitsuba Renderer (web site) A research-oriented renderer which implements several MLT variants
Octane Render – A commercial unbiased renderer that uses MLT
Indigo Renderer (web site) – An unbiased, photorealistic GPU and CPU renderer that supports MLT and is aimed at ultimate image quality, by accurately simulating the physics of light.
References
External links
Metropolis project at Stanford
Homepage of the Mitsuba renderer
LuxRender - an open source render engine that supports MLT
Kerkythea 2008 - a freeware rendering system that uses MLT
A Practical Introduction to Metropolis Light Transport
Unbiased physically based rendering on the GPU
Monte Carlo methods
Global illumination algorithms | Metropolis light transport | [
"Physics",
"Technology"
] | 533 | [
"Monte Carlo methods",
"Computing stubs",
"Computational physics"
] |
171,589 | https://en.wikipedia.org/wiki/Z-transform | In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex valued frequency-domain (the z-domain or z-plane) representation.
It can be considered a discrete-time equivalent of the Laplace transform (the s-domain or s-plane). This similarity is explored in the theory of time-scale calculus.
While the continuous-time Fourier transform is evaluated on the s-domain's vertical axis (the imaginary axis), the discrete-time Fourier transform is evaluated along the z-domain's unit circle. The s-domain's left half-plane maps to the area inside the z-domain's unit circle, while the s-domain's right half-plane maps to the area outside of the z-domain's unit circle.
In signal processing, one of the means of designing digital filters is to take analog designs, subject them to a bilinear transform which maps them from the s-domain to the z-domain, and then produce the digital filter by inspection, manipulation, or numerical approximation. Such methods tend not to be accurate except in the vicinity of the complex unity, i.e. at low frequencies.
History
The foundational concept now recognized as the Z-transform, which is a cornerstone in the analysis and design of digital control systems, was not entirely novel when it emerged in the mid-20th century. Its embryonic principles can be traced back to the work of the French mathematician Pierre-Simon Laplace, who is better known for the Laplace transform, a closely related mathematical technique. However, the explicit formulation and application of what we now understand as the Z-transform were significantly advanced in 1947 by Witold Hurewicz and colleagues. Their work was motivated by the challenges presented by sampled-data control systems, which were becoming increasingly relevant in the context of radar technology during that period. The Z-transform provided a systematic and effective method for solving linear difference equations with constant coefficients, which are ubiquitous in the analysis of discrete-time signals and systems.
The method was further refined and gained its official nomenclature, "the Z-transform," in 1952, thanks to the efforts of John R. Ragazzini and Lotfi A. Zadeh, who were part of the sampled-data control group at Columbia University. Their work not only solidified the mathematical framework of the Z-transform but also expanded its application scope, particularly in the field of electrical engineering and control systems.
A notable extension, known as the modified or advanced Z-transform, was later introduced by Eliahu I. Jury. Jury's work extended the applicability and robustness of the Z-transform, especially in handling initial conditions and providing a more comprehensive framework for the analysis of digital control systems. This advanced formulation has played a pivotal role in the design and stability analysis of discrete-time control systems, contributing significantly to the field of digital signal processing.
Interestingly, the conceptual underpinnings of the Z-transform intersect with a broader mathematical concept known as the method of generating functions, a powerful tool in combinatorics and probability theory. This connection was hinted at as early as 1730 by Abraham de Moivre, a pioneering figure in the development of probability theory. De Moivre utilized generating functions to solve problems in probability, laying the groundwork for what would eventually evolve into the Z-transform. From a mathematical perspective, the Z-transform can be viewed as a specific instance of a Laurent series, where the sequence of numbers under investigation is interpreted as the coefficients in the (Laurent) expansion of an analytic function. This perspective not only highlights the deep mathematical roots of the Z-transform but also illustrates its versatility and broad applicability across different branches of mathematics and engineering.
Definition
The Z-transform can be defined as either a one-sided or two-sided transform, just as with the one-sided and two-sided Laplace transforms.
Bilateral Z-transform
The bilateral or two-sided Z-transform of a discrete-time signal x[n] is the formal power series X(z) defined as:

X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n}

where n is an integer and z is, in general, a complex number. In polar form, z may be written as:

z = A e^{j\phi} = A(\cos\phi + j\sin\phi)

where A is the magnitude of z, j is the imaginary unit, and \phi is the complex argument (also referred to as angle or phase) in radians.
Unilateral Z-transform
Alternatively, in cases where x[n] is defined only for n ≥ 0, the single-sided or unilateral Z-transform is defined as:

X(z) = \sum_{n=0}^{\infty} x[n]\, z^{-n}
In signal processing, this definition can be used to evaluate the Z-transform of the unit impulse response of a discrete-time causal system.
An important example of the unilateral Z-transform is the probability-generating function, where the component x[n] is the probability that a discrete random variable takes the value n. The properties of Z-transforms (listed below) have useful interpretations in the context of probability theory.
Inverse Z-transform
The inverse Z-transform is:

x[n] = \frac{1}{2\pi j} \oint_{C} X(z)\, z^{n-1}\, dz

where C is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC). In the case where the ROC is causal (see Example 2), this means the path C must encircle all of the poles of X(z).
A special case of this contour integral occurs when C is the unit circle. This contour can be used when the ROC includes the unit circle, which is always guaranteed when X(z) is stable, that is, when all the poles are inside the unit circle. With this contour, the inverse Z-transform simplifies to the inverse discrete-time Fourier transform, or Fourier series, of the periodic values of the Z-transform around the unit circle:

x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega})\, e^{j\omega n}\, d\omega
The Z-transform with a finite range of n and a finite number of uniformly spaced z values can be computed efficiently via Bluestein's FFT algorithm. The discrete-time Fourier transform (DTFT)—not to be confused with the discrete Fourier transform (DFT)—is a special case of such a Z-transform obtained by restricting z to lie on the unit circle.
The following three methods are often used for the evaluation of the inverse Z-transform:
Direct Evaluation by Contour Integration
This method involves applying the Cauchy Residue Theorem to evaluate the inverse Z-transform. By integrating around a closed contour in the complex plane, the residues at the poles of the Z-transform function inside the ROC are summed. This technique is particularly useful when working with functions expressed in terms of complex variables.
Expansion into a Series of Terms in the Variables z and z^{-1}
In this method, the Z-transform is expanded into a power series. This approach is useful when the Z-transform function is rational, allowing for the approximation of the inverse by expanding into a series and determining the signal coefficients term by term.
Partial-Fraction Expansion and Table Lookup
This technique decomposes the Z-transform into a sum of simpler fractions, each corresponding to known Z-transform pairs. The inverse Z-transform is then determined by looking up each term in a standard table of Z-transform pairs. This method is widely used for its efficiency and simplicity, especially when the original function can be easily broken down into recognizable components.
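As a concrete illustration of the table-lookup route, SciPy's residuez performs this decomposition for rational Z-transforms expressed in powers of z^{-1} (the example transform is chosen arbitrarily):

```python
from scipy import signal

# X(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2)
#      = -1 / (1 - 0.5 z^-1)  +  2 / (1 - z^-1)
r, p, k = signal.residuez(b=[1.0], a=[1.0, -1.5, 0.5])
print(r, p)   # residues approx. [-1, 2] at poles [0.5, 1] (pairing order may vary)
# Assuming the causal ROC |z| > 1, each first-order term r_i / (1 - p_i z^-1)
# is a standard table entry, giving x[n] = (2 - 0.5**n) for n >= 0.
```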
Example:
A) Determine the inverse Z-transform of the following by series expansion method,
Solution:
Case 1:
ROC:
Since the ROC is the exterior of a circle, x[n] is causal (a signal existing for n ≥ 0).
thus,
(arrow indicates term at x(0)=1)
Note that in each step of the long division process we eliminate the lowest power term of z^{-1}.
Case 2:
ROC:
Since the ROC is the interior of a circle, x[n] is anticausal (a signal existing for n < 0).
By performing long division we get,
(arrow indicates term at x(0)=0)
Note that in each step of the long division process we eliminate the lowest power term of z.
Note:
When the signal is causal, we get positive powers of z^{-1}, and when the signal is anticausal, we get negative powers of z^{-1}.
indicates term at and indicates term at .
B) Determine the inverse Z-transform of the following by the partial-fraction expansion method,
Eliminating negative powers of z and dividing by z,
By Partial Fraction Expansion,
Case 1:
ROC:
Both the terms are causal, hence x[n] is causal.
Case 2:
ROC:
Both the terms are anticausal, hence x[n] is anticausal.
Case 3:
ROC:
One of the terms is causal (p = 0.5 provides the causal part) and the other is anticausal (p = 1 provides the anticausal part), hence x[n] is two-sided.
Region of convergence
The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform summation converges (i.e. doesn't blow up in magnitude to infinity):

\mathrm{ROC} = \left\{ z : \left| \sum_{n=-\infty}^{\infty} x[n]\, z^{-n} \right| < \infty \right\}
Example 1 (no ROC)
Let x[n] = (0.5)^n, defined for all integers n. Expanding x[n] on the interval (−∞, ∞) it becomes

x[n] = \{\dots,\, 0.5^{-3},\, 0.5^{-2},\, 0.5^{-1},\, 1,\, 0.5,\, 0.5^{2},\, 0.5^{3},\, \dots\}

Looking at the sum

\sum_{n=-\infty}^{\infty} x[n]\, z^{-n} = \sum_{n=-\infty}^{\infty} 0.5^{n}\, z^{-n}

there is no value of z for which both tails of the sum converge. Therefore, there are no values of z that satisfy this condition.
Example 2 (causal ROC)
Let x[n] = 0.5^n u[n] (where u is the Heaviside step function). Expanding x[n] on the interval (−∞, ∞) it becomes

x[n] = \{\dots,\, 0,\, 0,\, 0,\, 1,\, 0.5,\, 0.5^{2},\, 0.5^{3},\, \dots\}

Looking at the sum

\sum_{n=-\infty}^{\infty} x[n]\, z^{-n} = \sum_{n=0}^{\infty} 0.5^{n}\, z^{-n} = \sum_{n=0}^{\infty} \left(\frac{0.5}{z}\right)^{n} = \frac{1}{1 - 0.5\, z^{-1}}

The last equality arises from the infinite geometric series and the equality only holds if |0.5 z^{-1}| < 1, which can be rewritten in terms of z as |z| > 0.5. Thus, the ROC is |z| > 0.5. In this case the ROC is the complex plane with a disc of radius 0.5 at the origin "punched out".
Example 3 (anti causal ROC)
Let x[n] = -0.5^n u[-n-1] (where u is the Heaviside step function). Expanding x[n] on the interval (−∞, ∞) it becomes

x[n] = \{\dots,\, -0.5^{-3},\, -0.5^{-2},\, -0.5^{-1},\, 0,\, 0,\, 0,\, \dots\}

Looking at the sum (substituting m = −n)

\sum_{n=-\infty}^{\infty} x[n]\, z^{-n} = -\sum_{n=-\infty}^{-1} 0.5^{n}\, z^{-n} = -\sum_{m=1}^{\infty} (2z)^{m} = \frac{-2z}{1 - 2z} = \frac{1}{1 - 0.5\, z^{-1}}

and using the infinite geometric series again, the equality only holds if |2z| < 1, which can be rewritten in terms of z as |z| < 0.5. Thus, the ROC is |z| < 0.5. In this case the ROC is a disc centered at the origin and of radius 0.5.
What differentiates this example from the previous example is only the ROC. This is intentional to demonstrate that the transform result alone is insufficient.
Examples conclusion
Examples 2 & 3 clearly show that the Z-transform X(z) of x[n] is unique only when the ROC is specified. Creating the pole–zero plot for the causal and anticausal cases shows that the ROC for either case does not include the pole that is at 0.5. This extends to cases with multiple poles: the ROC will never contain poles.
In example 2, the causal system yields a ROC that includes |z| = ∞, while the anticausal system in example 3 yields an ROC that includes |z| = 0.
In systems with multiple poles it is possible to have a ROC that includes neither |z| = ∞ nor |z| = 0. The ROC creates a circular band. For example,

x[n] = 0.5^{n} u[n] - 0.75^{n} u[-n-1]

has poles at 0.5 and 0.75. The ROC will be 0.5 < |z| < 0.75, which includes neither the origin nor infinity. Such a system is called a mixed-causality system as it contains a causal term 0.5^n u[n] and an anticausal term −(0.75)^n u[−n−1].
The stability of a system can also be determined by knowing the ROC alone. If the ROC contains the unit circle (i.e., |z| = 1) then the system is stable. In the above systems the causal system (Example 2) is stable because |z| > 0.5 contains the unit circle.
Let us assume we are provided a Z-transform of a system without a ROC (i.e., an ambiguous x[n]). We can determine a unique x[n] provided we desire the following:
Stability
Causality
For stability the ROC must contain the unit circle. If we need a causal system then the ROC must contain infinity and the system function will be a right-sided sequence. If we need an anticausal system then the ROC must contain the origin and the system function will be a left-sided sequence. If we need both stability and causality, all the poles of the system function must be inside the unit circle.
The unique x[n] can then be found.
Properties
Parseval's theorem
Initial value theorem: If x[n] is causal, then

x[0] = \lim_{z \to \infty} X(z)
Final value theorem: If the poles of (z - 1)X(z) are inside the unit circle, then

\lim_{n \to \infty} x[n] = \lim_{z \to 1} (z - 1) X(z)
Table of common Z-transform pairs
Here:
u[n] is the unit (or Heaviside) step function and
δ[n] is the discrete-time unit impulse function (cf. Dirac delta function, which is a continuous-time version). The two functions are chosen together so that the unit step function is the accumulation (running total) of the unit impulse function.
Relationship to Fourier series and Fourier transform
For values of z in the region |z| = 1, known as the unit circle, we can express the transform as a function of a single real variable ω by defining z = e^{jω}. And the bi-lateral transform reduces to a Fourier series:

\sum_{n=-\infty}^{\infty} x[n]\, z^{-n} = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}
which is also known as the discrete-time Fourier transform (DTFT) of the x[n] sequence. This 2π-periodic function is the periodic summation of a Fourier transform, which makes it a widely used analysis tool. To understand this, let X(f) be the Fourier transform of any function, x(t), whose samples at some interval T equal the x[n] sequence. Then the DTFT of the x[n] sequence can be written as follows:

\sum_{n=-\infty}^{\infty} x(nT)\, e^{-j\omega n} = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\!\left(\frac{\omega}{2\pi T} - \frac{k}{T}\right)
where T has units of seconds and f has units of hertz. Comparison of the two series reveals that ω = 2πfT is a normalized frequency with units of radians per sample. The value ω = 2π corresponds to f = 1/T. And now, with the substitution f = ω/(2πT), X(e^{jω}) can be expressed in terms of X(f) (a Fourier transform):

X(e^{j\omega}) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\!\left(\frac{\omega - 2\pi k}{2\pi T}\right)
As parameter T changes, the individual terms of the summation move farther apart or closer together along the f-axis. In X(e^{jω}), however, the centers remain 2π apart, while their widths expand or contract. When the sequence x(nT) represents the impulse response of an LTI system, these functions are also known as its frequency response. When the sequence is periodic, its DTFT is divergent at one or more harmonic frequencies, and zero at all other frequencies. This is often represented by the use of amplitude-variant Dirac delta functions at the harmonic frequencies. Due to periodicity, there are only a finite number of unique amplitudes, which are readily computed by the much simpler discrete Fourier transform (DFT).
Relationship to Laplace transform
Bilinear transform
The bilinear transform can be used to convert continuous-time filters (represented in the Laplace domain) into discrete-time filters (represented in the Z-domain), and vice versa. With T the sampling interval, the following substitution is used:

s = \frac{2}{T} \cdot \frac{z - 1}{z + 1}

to convert some function H(s) in the Laplace domain to a function H(z) in the Z-domain (Tustin transformation), or

z = \frac{1 + sT/2}{1 - sT/2}

from the Z-domain to the Laplace domain. Through the bilinear transformation, the complex s-plane (of the Laplace transform) is mapped to the complex z-plane (of the z-transform). While this mapping is (necessarily) nonlinear, it is useful in that it maps the entire jω axis of the s-plane onto the unit circle in the z-plane. As such, the Fourier transform (which is the Laplace transform evaluated on the jω axis) becomes the discrete-time Fourier transform. This assumes that the Fourier transform exists; i.e., that the jω axis is in the region of convergence of the Laplace transform.
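SciPy's bilinear applies this substitution to rational transfer functions; a sketch with an arbitrary first-order analog low-pass and sample rate:

```python
import math
from scipy import signal

# Analog low-pass H(s) = w_c / (s + w_c) with a 100 Hz cutoff,
# discretized at fs = 1000 Hz via the Tustin substitution above.
w_c = 2.0 * math.pi * 100.0
bz, az = signal.bilinear(b=[w_c], a=[1.0, w_c], fs=1000.0)
print(bz, az)   # numerator/denominator coefficients of H(z) in powers of z^-1
```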
Starred transform
Given a one-sided Z-transform X(z) of a time-sampled function, the corresponding starred transform produces a Laplace transform and restores the dependence on the sampling parameter T:

X^{*}(s) = X(z) \Big|_{z = e^{sT}}
The inverse Laplace transform is a mathematical abstraction known as an impulse-sampled function.
Linear constant-coefficient difference equation
The linear constant-coefficient difference (LCCD) equation is a representation for a linear system based on the autoregressive moving-average equation:

\sum_{p=0}^{N} y[n-p]\, \alpha_p = \sum_{q=0}^{M} x[n-q]\, \beta_q

Both sides of the above equation can be divided by \alpha_0 if it is not zero. By normalizing with \alpha_0 = 1 the LCCD equation can be written

y[n] = \sum_{q=0}^{M} x[n-q]\, \beta_q - \sum_{p=1}^{N} y[n-p]\, \alpha_p

This form of the LCCD equation is favorable to make it more explicit that the "current" output y[n] is a function of past outputs y[n−p], the current input x[n] and previous inputs x[n−q].
Transfer function
Taking the Z-transform of the above equation (using linearity and time-shifting laws) yields:

Y(z) \sum_{p=0}^{N} z^{-p} \alpha_p = X(z) \sum_{q=0}^{M} z^{-q} \beta_q

where X(z) and Y(z) are the z-transforms of x[n] and y[n], respectively. (Notation conventions typically use capitalized letters to refer to the z-transform of a signal denoted by a corresponding lower case letter, similar to the convention used for notating Laplace transforms.)

Rearranging results in the system's transfer function:

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{q=0}^{M} z^{-q} \beta_q}{\sum_{p=0}^{N} z^{-p} \alpha_p}
Zeros and poles
From the fundamental theorem of algebra the numerator has M roots (corresponding to zeros of H) and the denominator has N roots (corresponding to poles). Rewriting the transfer function in terms of zeros and poles

H(z) = \frac{(1 - q_1 z^{-1})(1 - q_2 z^{-1}) \cdots (1 - q_M z^{-1})}{(1 - p_1 z^{-1})(1 - p_2 z^{-1}) \cdots (1 - p_N z^{-1})}

where q_k is the k-th zero and p_k is the k-th pole. The zeros and poles are commonly complex and when plotted on the complex plane (z-plane) it is called the pole–zero plot.

In addition, there may also exist zeros and poles at z = 0 and z = ∞. If we take these poles and zeros as well as multiple-order zeros and poles into consideration, the number of zeros and poles are always equal.
By factoring the denominator, partial fraction decomposition can be used, which can then be transformed back to the time domain. Doing so would result in the impulse response and the linear constant coefficient difference equation of the system.
Output response
If such a system H(z) is driven by a signal X(z) then the output is Y(z) = H(z)X(z). By performing partial fraction decomposition on Y(z) and then taking the inverse Z-transform the output y[n] can be found. In practice, it is often useful to fractionally decompose Y(z)/z before multiplying that quantity by z to generate a form of Y(z) which has terms with easily computable inverse Z-transforms.
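In numerical practice such an output is computed by running the difference equation directly; e.g. with SciPy's lfilter, shown here for the first-order system from Example 2 driven by an impulse:

```python
from scipy import signal

# y[n] = x[n] + 0.5 y[n-1]   <=>   H(z) = 1 / (1 - 0.5 z^-1), ROC |z| > 0.5
b, a = [1.0], [1.0, -0.5]
x = [1.0, 0.0, 0.0, 0.0, 0.0]        # unit impulse delta[n]
y = signal.lfilter(b, a, x)
print(y)   # [1.0, 0.5, 0.25, 0.125, 0.0625] -- the impulse response 0.5**n
```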
See also
Advanced Z-transform
Bilinear transform
Difference equation (recurrence relation)
Discrete convolution
Discrete-time Fourier transform
Finite impulse response
Formal power series
Generating function
Generating function transformation
Laplace transform
Laurent series
Least-squares spectral analysis
Probability-generating function
Star transform
Zak transform
Zeta function regularization
References
Further reading
Refaat El Attar, Lecture notes on Z-Transform, Lulu Press, Morrisville NC, 2005. .
Ogata, Katsuhiko, Discrete Time Control Systems 2nd Ed, Prentice-Hall Inc, 1995, 1987. .
Alan V. Oppenheim and Ronald W. Schafer (1999). Discrete-Time Signal Processing, 2nd Edition, Prentice Hall Signal Processing Series. .
External links
Z-Transform table of some common Laplace transforms
Mathworld's entry on the Z-transform
Z-Transform threads in Comp.DSP
A graphic of the relationship between Laplace transform s-plane to Z-plane of the Z transform
A video-based explanation of the Z-Transform for engineers
What is the z-Transform?
Transforms
Laplace transforms | Z-transform | [
"Mathematics"
] | 3,735 | [
"Mathematical objects",
"Functions and mappings",
"Mathematical relations",
"Transforms"
] |
171,616 | https://en.wikipedia.org/wiki/Judith%20Wright | Judith Arundell Wright (31 May 191525 June 2000) was an Australian poet, environmentalist and campaigner for Aboriginal land rights. She was a recipient of the Christopher Brennan Award and nominated for the Nobel Prize in Literature in 1964, 1965 and 1967.
Biography
Judith Wright was born in Armidale, New South Wales. The eldest child of Phillip Wright and his first wife, Ethel, she spent most of her formative years in Brisbane and Sydney. Wright was of Cornish ancestry. Following the early death of her mother, she lived with her aunt and then boarded at New England Girls' School after her father's remarriage in 1929. After graduating, Wright studied philosophy, English, psychology and history at the University of Sydney. At the beginning of World War II, she returned to her father's station (ranch) to help during the shortage of labour caused by the war.
Wright's first book of poetry, The Moving Image, was published in 1946 while she was working at the University of Queensland as a research officer. Then, she had also worked with Clem Christesen on the literary magazine Meanjin, the first edition of which was published in late 1947. In 1950 she moved to Mount Tamborine, Queensland, with the novelist and abstract philosopher Jack McKinney. Their daughter Meredith was born in the same year. They married in 1962, but Jack was to live only until 1966.
In 1966, she published The Nature of Love, her first collection of short stories, through Sun Press, Melbourne. Set mainly in Queensland, they include 'The Ant-lion', 'The Vineyard Woman', 'Eighty Acres', 'The Dugong', 'The Weeping Fig' and 'The Nature of Love', all first published in The Bulletin. Wright was nominated for the 1967 Nobel Prize for Literature.
With David Fleay, Kathleen McArthur and Brian Clouston, Wright was a founding member and, from 1964 to 1976, president, of the Wildlife Preservation Society of Queensland. In 1991, she was the second Australian to receive the Queen's Gold Medal for Poetry.
She was involved in the Poets Union.
For the last three decades of her life, Wright lived near the New South Wales town of Braidwood. She moved to the Braidwood area to be closer to H. C. "Nugget" Coombs, her lover of 25 years, who was based in Canberra.
Wright started to lose her hearing in her mid-20s and became completely deaf by 1992.
Poet and critic
Wright was the author of collections of poetry, including The Moving Image, Woman to Man, The Gateway, The Two Fires, Birds, The Other Half, Magpies, Shadow and Hunting Snake. Her work is noted for a keen focus on the Australian environment, which began to gain prominence in Australian art in the years following World War II. She deals with the relationship between settlers, Indigenous Australians and the bush, among other themes. Wright's aesthetic centres on the relationship between mankind and the environment, which she views as the catalyst for poetic creation. Her images characteristically draw from the Australian flora and fauna, yet contain a mythic substrata that probes at the poetic process, limitations of language, and the correspondence between inner existence and objective reality.
Wright's poems have been translated into a number of languages, including Italian, Japanese and Russian. Along with Brendan Kennelly, she is the most featured poet in The Green Book of Poetry, a large ecopoetry anthology by Ivo Mosley (Frontier Publishing 1993), which was published by Harper San Francisco in 1996 as Earth Poems: Poems from Around the World to Honor the Earth.
Birds
In 2003, the National Library of Australia published an expanded edition of Wright's collection titled Birds. Most of these poems were written in the 1950s when she was living on Tamborine Mountain in southeast Queensland. Meredith McKinney, Wright's daughter, writes that they were written at "a precious and dearly-won time of warmth and bounty to counterbalance at last what felt, in contrast, the chilly dearth and difficulty of her earlier years". McKinney goes on to say that "many of these poems have a newly relaxed, almost conversational tone and rhythm, an often humorous ease and an intimacy of voice that surely reflects the new intimacies and joys of her life". Despite the joy reflected in the poems, however, they also acknowledge "the experiences of cruelty, pain and death that are inseparable from the lives of birds as of humans ... and [turn] a sorrowing and clear-sighted gaze on the terrible damage we have done and continue to do to our world, even as we love it".
Environmentalism and social activism
Wright campaigned in support of the conservation of the Great Barrier Reef and Fraser Island. With some of her friends, she helped found one of the earliest nature conservation movements.
She was also an advocate for Aboriginal land rights. Tom Shapcott, reviewing With Love and Fury, her posthumous collection of selected letters published in 2007, comments that her letter on this topic to the Australian prime minister John Howard was "almost brutal in its scorn". Shortly before her death, she attended a march in Canberra for reconciliation between non-indigenous Australians and the Aboriginal people.
Awards
1975 – Christopher Brennan Award
1991 – Queen's Gold Medal for Poetry
1994 – Human Rights and Equal Opportunity Commission Poetry Award for Collected Poems
In 2009, as part of the Q150 celebrations, Judith Wright was announced as one of the Q150 Icons of Queensland for her role as an "influential artist".
Death and legacy
Wright died in Canberra on 25 June 2000, aged 85.
In June 2006 the Australian Electoral Commission (AEC) announced that the new federal electorate in Queensland, which was to be created at the 2007 federal election, would be named Wright in honour of her accomplishments as a "poet and in the areas of arts, conservation and indigenous affairs in Queensland and Australia". However, in September 2006 the AEC announced it would name the seat after John Flynn, the founder of the Royal Flying Doctor Service, due to numerous objections from people fearing the name Wright may be linked to disgraced former Queensland ALP MP Keith Wright. Under the 2009 redistribution of Queensland, a new seat in southeast Queensland was created and named in Wright's honour; it was first contested in 2010.
The Judith Wright Arts Centre in Fortitude Valley, Brisbane, is named after her.
On 2 January 2008, it was announced that a future suburb in the district of Molonglo Valley, Canberra would be named "Wright". There is a street in the Canberra suburb of Franklin named after her, as well. Another of the Molonglo Valley suburbs was named after Wright's lover, "Nugget" Coombs.
The Judith Wright Award was awarded as part of the ACT Poetry Award by the ACT Government between 2005 and 2011, for a published book of poems by an Australian poet.
The Judith Wright Poetry Prize for New and Emerging Poets was established in 2007 by Overland magazine.
The Judith Wright Calanthe Award has been awarded as part of the Queensland Premier's Literary Awards since 2004.
Bibliography
Poetry
Collections
The Moving Image (1946)
Woman to Man (1949)
The Gateway (1953)
The Two Fires (1955)
Australian Bird Poems (1961)
Birds: Poems, Angus and Robertson, 1962;
Five Senses: Selected Poems (1963)
Selected Poems (1963)
The Other Half (1966)
The Nature of Love (1966)
Collected Poems 1942-1970 (1971)
Alive: Poems 1971–72 (1973)
Poets On Record 9 (University of Queensland Press, 1973) Selected works, issued with a 7" record of Wright reading her own poems.
Fourth Quarter and Other Poems (1976)
The Double Tree: Selected Poems 1942–76 (1978)
Phantom Dwelling (1985)
Five Senses: Selected Poems (1989)
A Human Pattern: Selected Poems (1990)
The Flame Tree (1993)
Collected poems, 1942–1985, Angus & Robertson (1994)
Grace and Other Poems (2009)
Tamborine Mountain Poems of Judith Wright (2010)
Poemas escogidos, Pre-textos, 2020, (Spanish translation by José Luis Fernández Castillo)
Selected list of poems
Literary criticism
William Baylebridge and the modern problem (Canberra University College, 1955)
Charles Harpur (1963)
Preoccupations in Australian Poetry (1965)
The Poet's Pen (1965) (an anthology of poetry selected by Wright with A.K. Thomson)
Henry Lawson (Great Australians Series) (1967)
Because I Was Invited (1975)
Going on Talking (1991)
Other works
Kings of the Dingoes (1958) Oxford University Press, Melbourne
The Generations of Men, illustrated by Alison Forbes (1959)
Range the Mountains High (1962)
The Nature of Love (1966) Sun Books, Melbourne
"The Battle of the Biosphere" (Outlook magazine article 1970)
"'Witnesses of spring: unpublished poems of Shaw Neilson, edited by Wright, with poems selected by Wright and Val Vallis, from material selected by Ruth Harrison (1970)
The Coral Battleground (1977)
The Cry for the Dead (1981)
We Call for a Treaty (1985)
Half a Lifetime (Text, 2001)
Judith Wright: Selected Writings (2022) ed. Georgina Arnott, La Trobe University Press & Black Inc
Letters
The Equal Heart and Mind: Letters between Judith Wright and Jack McKinney. Edited by Patricia Clarke and Meredith McKinney (UQP, 2004)
With Love and Fury: Selected letters of Judith Wright, edited by Patricia Clarke and Meredith McKinney (National Library of Australia, 2006)
Portrait of a friendship: the letters of Barbara Blackman and Judith Wright, 1950–2000, edited by Bryony Cosgrove (Miegunyah Press, 2007)
See also
List of Australian poets
With Love and Fury 2016 album by Brodsky Quartet and Katie Noonan, setting words of Wright to music
References
Further reading
Arnott, Georgina (2016) The Unknown Judith Wright, UWAP
Brady, Veronica (1998) South of My Days: A Biography of Judith Wright, Angus & Robertson
External links
Poems at Oldpoetry.com
Judith Wright digital story, educational interview and oral history. John Oxley Library, State Library of Queensland, 12 June 2013. 6min, 36min and 56min version available to view online.
Vale Judith Wright Interview at Radio National
Gardening at the 'Edge': Judith Wright's desert garden, Mongarlowe, New South Wales by Katie Holmes
Judith Wright's Biography: A Delicate Balance between Trespass and Honour by Veronica Brady
Uncertain Possession: The Politics and Poetry of Judith Wright by Gig Ryan
The Judith Wright Centre of Contemporary Arts Website
Two Fires: Festival of Arts and Activism Celebration of Judith Wright's legacy
Sue King-Smith 'Ancestral Echoes: Spectres of the Past in Judith Wright's Poetry' JASAL Special Issue 2007
1915 births
2000 deaths
Australian environmentalists
Australian women environmentalists
Australian human rights activists
Australian women human rights activists
Australian literary critics
Australian women literary critics
Australian nature writers
People from Armidale
Australian people of Cornish descent
Deaf poets
Women science writers
Writers from Brisbane
Writers from Sydney
Australian women poets
20th-century Australian women writers
20th-century Australian poets
Australian deaf people
Australian women artists
Deaf activists
Australian activists with disabilities
Q150 Icons | Judith Wright | [
"Technology"
] | 2,321 | [
"Women science writers",
"Women in science and technology"
] |
171,644 | https://en.wikipedia.org/wiki/Computational%20archaeology | Computational archaeology is a subfield of digital archeology that focuses on the analysis and interpretation of archaeological data using advanced computational techniques. This field employs data modeling, statistical analysis, and computer simulations to understand and reconstruct past human behaviors and societal developments. By leveraging Geographic Information Systems (GIS), predictive modeling, and various simulation tools, computational archaeology enhances the ability to process complex archaeological datasets, providing deeper insights into historical contexts and cultural heritage.
Computational archaeology may include the use of geographical information systems (GIS), especially when applied to spatial analyses such as viewshed analysis and least-cost path analysis as these approaches are sufficiently computationally complex that they are extremely difficult if not impossible to implement without the processing power of a computer. Likewise, some forms of statistical and mathematical modelling, and the computer simulation of human behaviour and behavioural evolution using software tools such as Swarm or Repast would also be impossible to calculate without computational aid. The application of a variety of other forms of complex and bespoke software to solve archaeological problems, such as human perception and movement within built environments using software such as University College London's Space Syntax program, also falls under the term 'computational archaeology'.
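To see the computational weight of such analyses in miniature, a least-cost path over a terrain raster is a shortest-path search whose work grows with the number of cells. A toy Dijkstra sketch follows (per-cell costs standing in for slope or terrain friction; real GIS implementations handle anisotropy and far larger rasters):

```python
import heapq

def least_cost_path_cost(cost, start, goal):
    """Dijkstra over a raster of per-cell traversal costs, the core of a
    GIS least-cost path analysis. Returns the accumulated cost, or None."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    frontier = [(0.0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:
            return d
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return None

grid = [[1, 1, 5],
        [5, 2, 5],
        [1, 1, 1]]          # stand-in "terrain friction" values
print(least_cost_path_cost(grid, (0, 0), (2, 2)))  # 5, via the cheap cells
```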
The acquisition, documentation and analysis of archaeological finds at excavations and in museums is an important field having pottery analysis as one of the major topics. In this area 3D-acquisition techniques like structured light scanning (SLS), photogrammetric methods like "structure from motion" (SfM), computed tomography as well as their combinations provide large data-sets of numerous objects for digital pottery research. These techniques are increasingly integrated into the in-situ workflow of excavations. The Austrian subproject of the Corpus vasorum antiquorum (CVA) is seminal for digital research on finds within museums.
Computational archaeology is also known as "archaeological informatics" (Burenhult 2002, Huggett and Ross 2004) or "archaeoinformatics" (sometimes abbreviated as "AI", but not to be confused with artificial intelligence).
Origins and objectives
In recent years, it has become clear that archaeologists will only be able to harvest the full potential of quantitative methods and computer technology if they become aware of the specific pitfalls and potentials inherent in the archaeological data and research process. AI science is an emerging discipline that attempts to uncover, quantitatively represent and explore specific properties and patterns of archaeological information. Fundamental research on data and methods for a self-sufficient archaeological approach to information processing produces quantitative methods and computer software specifically geared towards archaeological problem solving and understanding.
AI science is capable of complementing and enhancing almost any area of scientific archaeological research. It incorporates a large part of the methods and theories developed in quantitative archaeology since the 1960s but goes beyond former attempts at quantifying archaeology by exploring ways to represent general archaeological information and problem structures as computer algorithms and data structures. This opens archaeological analysis to a wide range of computer-based information processing methods fit to solve problems of great complexity. It also promotes a formalized understanding of the discipline's research objects and creates links between archaeology and other quantitative disciplines, both in methods and software technology. Its agenda can be split up in two major research themes that complement each other:
Fundamental research (theoretical AI science) on the structure, properties and possibilities of archaeological data, inference and knowledge building. This includes modeling and managing fuzziness and uncertainty in archaeological data, scale effects, optimal sampling strategies and spatio-temporal effects.
Development of computer algorithms and software (applied AI science) that make this theoretical knowledge available to the user.
There is already a large body of literature on the use of quantitative methods and computer-based analysis in archaeology. The development of methods and applications is best reflected in the annual publications of the CAA conference (see external links section at bottom). At least two journals, the Italian Archeologia e Calcolatori and the British Archaeological Computing Newsletter, are dedicated to archaeological computing methods. AI Science contributes to many fundamental research topics, including but not limited to:
advanced statistics in archaeology, spatial and temporal archaeological data analysis
bayesian analysis and advanced probability models, fuzziness and uncertainty in archaeological data
scale-related phenomena and scale transgressions
intrasite analysis (representations of stratigraphy, 3D analysis, artefact distributions)
landscape analysis (territorial modeling, visibility analysis)
optimal survey and sampling strategies
process-based modeling and simulation models
archaeological predictive modeling and heritage management applications
supervised and unsupervised classification and typology, artificial intelligence applications
digital excavations and virtual reality
computational reproducibility of archaeological research
archaeological software development, electronic data sharing and publishing
AI science advocates a formalized approach to archaeological inference and knowledge building. It is interdisciplinary in nature, borrowing, adapting and enhancing method and theory from numerous other disciplines such as computer science (e.g. algorithm and software design, database design and theory), geoinformation science (spatial statistics and modeling, geographic information systems), artificial intelligence research (supervised classification, fuzzy logic), ecology (point pattern analysis), applied mathematics (graph theory, probability theory) and statistics.
Training and research
Scientific progress in archaeology, as in any other discipline, requires building abstract, generalized and transferable knowledge about the processes that underlie past human actions and their manifestations. Quantification provides the ultimate known way of abstracting and extending our scientific abilities past the limits of intuitive cognition. Quantitative approaches to archaeological information handling and inference constitute a critical body of scientific methods in archaeological research. They provide the tools, algebra, statistics and computer algorithms, to process information too voluminous or complex for purely cognitive, informal inference. They also build a bridge between archaeology and numerous quantitative sciences such as geophysics, geoinformation sciences and applied statistics. And they allow archaeological scientists to design and carry out research in a formal, transparent and comprehensible way.
Being an emerging field of research, AI science is currently a rather dispersed discipline in need of stronger, well-funded and institutionalized embedding, especially in academic teaching. Despite its evident progress and usefulness, today's quantitative archaeology is often inadequately represented in archaeological training and education. Part of this problem may be misconceptions about the seeming conflict between mathematics and humanistic archaeology.
Nevertheless, digital excavation technology, modern heritage management and complex research issues require skilled students and researchers to develop new, efficient and reliable means of processing an ever-growing mass of untackled archaeological data and research problems. Thus, providing students of archaeology with a solid background in quantitative sciences such as mathematics, statistics and computer sciences seems today more important than ever.
Currently, universities based in the UK provide the largest share of study programmes for prospective quantitative archaeologists, with more institutes in Italy, Germany and the Netherlands developing a strong profile quickly. In Germany, the country's first lecturer's position in AI science ("Archäoinformatik") was established in 2005 at the University of Kiel. In April 2016 the first full professorship in Archaeoinformatics has been established at the University of Cologne (Institute of Archaeology).
The most important platform for students and researchers in quantitative archaeology and AI science is the international conference on Computer Applications and Quantitative Methods in Archaeology (CAA) which has been in existence for more than 30 years now and is held in a different city of Europe each year. Vienna's city archaeology unit also hosts an annual event that is quickly growing in international importance (see links at bottom).
References
Further reading
Roosevelt, Cobb, Moss, Olson, and Ünlüsoy 2015: "Excavation is Digitization: Advances in Archaeological Practice," Journal of Field Archaeology, Volume 40, Issue 3 (June 2015), pp. 325-346.
Burenhult 2002: Burenhult, G. (ed.): Archaeological Informatics: Pushing The Envelope. CAA2001. Computer Applications and Quantitative Methods in Archaeology. BAR International Series 1016, Archaeopress, Oxford.
Falser, Michael; Juneja, Monica (Eds.): 'Archaeologizing' Heritage? Transcultural Entanglements between Local Social Practices and Global Virtual Realities (Series: Transcultural Research – Heidelberg Studies on Asia and Europe in a Global Context). Springer: Heidelberg/New York, 2013, VIII, 287 p. 200 illus., 90 illus. in color.
Huggett and Ross 2004: J. Huggett, S. Ross (eds.): Archaeological Informatics. Beyond Technology. Internet Archaeology 15. http://intarch.ac.uk/journal/issue15/
Schlapke 2000: Schlapke, M. Die "Archäoinformatik" am Thüringischen Landesamt für Archäologische Denkmalpflege, Ausgrabungen und Funde im Freistaat Thüringen, 5, 2000, S. 1–5.
Zemanek 2004: Zemanek, H.: Archaeological Information - An information scientist looks on archaeology. In: Ausserer, K.F., Börner, w., Goriany, M. & Karlhuber-Vöckl, L. (eds) 2004. Enter the Past. The E-way into the four Dimensions of Cultural Heritage. CAA 2003, Computer Applications and Quantitative Methods in Archaeology. BAR International Series 1227, Archaeopress, Oxford, 16-26.
Archeologia e Calcolatori journal homepage
Archaeological Computing Newsletter homepage, now a supplement to Archeologia e Calcolatori
Computational archaeology
Computational Archaeology Blog
Computational fields of study
Archaeological sub-disciplines | Computational archaeology | [
"Technology"
] | 1,965 | [
"Computational fields of study",
"Computing and society"
] |
171,728 | https://en.wikipedia.org/wiki/Drag%20equation | In fluid dynamics, the drag equation is a formula used to calculate the force of drag experienced by an object due to movement through a fully enclosing fluid. The equation is:
where
F_d is the drag force, which is by definition the force component in the direction of the flow velocity,
ρ is the mass density of the fluid,
u is the flow velocity relative to the object,
A is the reference area, and
c_d is the drag coefficient – a dimensionless coefficient related to the object's geometry and taking into account both skin friction and form drag. If the fluid is a liquid, c_d depends on the Reynolds number; if the fluid is a gas, c_d depends on both the Reynolds number and the Mach number.
The equation is attributed to Lord Rayleigh, who originally used L2 in place of A (with L being some linear dimension).
The reference area A is typically defined as the area of the orthographic projection of the object on a plane perpendicular to the direction of motion. For non-hollow objects with simple shape, such as a sphere, this is exactly the same as the maximal cross sectional area. For other objects (for instance, a rolling tube or the body of a cyclist), A may be significantly larger than the area of any cross section along any plane perpendicular to the direction of motion. Airfoils use the square of the chord length as the reference area; since airfoil chords are usually defined with a length of 1, the reference area is also 1. Aircraft use the wing area (or rotor-blade area) as the reference area, which makes for an easy comparison to lift. Airships and bodies of revolution use the volumetric coefficient of drag, in which the reference area is the square of the cube root of the airship's volume. Sometimes different reference areas are given for the same object in which case a drag coefficient corresponding to each of these different areas must be given.
For sharp-cornered bluff bodies, like square cylinders and plates held transverse to the flow direction, this equation is applicable with the drag coefficient as a constant value when the Reynolds number is greater than 1000. For smooth bodies, like a circular cylinder, the drag coefficient may vary significantly up to Reynolds numbers of 10^7 (ten million).
Discussion
The equation is easier understood for the idealized situation where all of the fluid impinges on the reference area and comes to a complete stop, building up stagnation pressure over the whole area. No real object exactly corresponds to this behavior. c_d is the ratio of drag for any real object to that of the ideal object. In practice a rough un-streamlined body (a bluff body) will have a c_d around 1, more or less. Smoother objects can have much lower values of c_d. The equation is precise – it simply provides the definition of c_d (drag coefficient), which varies with the Reynolds number and is found by experiment.
Of particular importance is the u² dependence on flow velocity, meaning that fluid drag increases with the square of flow velocity. When flow velocity is doubled, for example, not only does the fluid strike with twice the flow velocity, but twice the mass of fluid strikes per second. Therefore, the change of momentum per time, i.e. the force experienced, is multiplied by four. This is in contrast with solid-on-solid dynamic friction, which generally has very little velocity dependence.
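A quick numerical check of the quadratic dependence (sea-level air density and a bluff-body drag coefficient of 1 chosen purely for illustration):

```python
def drag_force(rho, u, c_d, area):
    """F_d = 0.5 * rho * u**2 * c_d * A, the drag equation."""
    return 0.5 * rho * u ** 2 * c_d * area

f1 = drag_force(rho=1.225, u=10.0, c_d=1.0, area=0.5)   # ~30.6 N
f2 = drag_force(rho=1.225, u=20.0, c_d=1.0, area=0.5)   # ~122.5 N = 4 * f1
```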
Relation with dynamic pressure
The drag force can also be specified as

F_d = P_D\, A

where P_D is the pressure exerted by the fluid on area A. Here the pressure P_D is referred to as dynamic pressure due to the kinetic energy of the fluid experiencing relative flow velocity u. This is defined in similar form as the kinetic energy equation:

P_D = \tfrac{1}{2}\, \rho\, u^{2}
Derivation
The drag equation may be derived to within a multiplicative constant by the method of dimensional analysis. If a moving fluid meets an object, it exerts a force on the object. Suppose that the fluid is a liquid, and the variables involved – under some conditions – are the:
speed u,
fluid density ρ,
kinematic viscosity ν of the fluid,
size of the body, expressed in terms of its wetted area A, and
drag force Fd.
Using the algorithm of the Buckingham π theorem, these five variables can be reduced to two dimensionless groups:
drag coefficient cd and
Reynolds number Re.
That this is so becomes apparent when the drag force Fd is expressed as part of a function of the other variables in the problem:

f_a(F_d,\, u,\, A,\, \rho,\, \nu) = 0
This rather odd form of expression is used because it does not assume a one-to-one relationship. Here, fa is some (as-yet-unknown) function that takes five arguments. Now the right-hand side is zero in any system of units; so it should be possible to express the relationship described by fa in terms of only dimensionless groups.
There are many ways of combining the five arguments of fa to form dimensionless groups, but the Buckingham π theorem states that there will be two such groups. The most appropriate are the Reynolds number, given by

Re = \frac{\sqrt{A}\, u}{\nu}

and the drag coefficient, given by

c_d = \frac{F_d}{\tfrac{1}{2}\, \rho\, A\, u^{2}}

Thus the function of five variables may be replaced by another function of only two variables:

f_b(Re,\, c_d) = 0
where fb is some function of two arguments.
The original law is then reduced to a law involving only these two numbers.
Because the only unknown in the above equation is the drag force Fd, it is possible to express it as

F_d = \tfrac{1}{2}\, \rho\, A\, u^{2}\, f_c(Re)

Thus the force is simply ½ ρ A u² times some (as-yet-unknown) function fc of the Reynolds number Re – a considerably simpler system than the original five-argument function given above.
Dimensional analysis thus makes a very complex problem (trying to determine the behavior of a function of five variables) a much simpler one: the determination of the drag as a function of only one variable, the Reynolds number.
If the fluid is a gas, certain properties of the gas influence the drag and those properties must also be taken into account. Those properties are conventionally considered to be the absolute temperature of the gas, and the ratio of its specific heats. These two properties determine the speed of sound in the gas at its given temperature. The Buckingham pi theorem then leads to a third dimensionless group, the ratio of the relative velocity to the speed of sound, which is known as the Mach number. Consequently when a body is moving relative to a gas, the drag coefficient varies with the Mach number and the Reynolds number.
The analysis also gives other information for free, so to speak. The analysis shows that, other things being equal, the drag force will be proportional to the density of the fluid. This kind of information often proves to be extremely valuable, especially in the early stages of a research project.
Air viscosity in a rotating sphere
Air viscosity acting on a rotating sphere gives rise to a torque characterized by a dimensionless coefficient, similar to the drag coefficient in the drag equation.
Experimental methods
To empirically determine the Reynolds number dependence, instead of experimenting on a large body with fast-flowing fluids (such as real-size airplanes in wind tunnels), one may just as well experiment using a small model in a flow of higher velocity because these two systems deliver similitude by having the same Reynolds number. If the same Reynolds number and Mach number cannot be achieved just by using a flow of higher velocity it may be advantageous to use a fluid of greater density or lower viscosity.
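A sketch of the similitude arithmetic, using the common convention Re = uL/ν with a linear dimension L (the derivation above used √A; either choice works as long as it is applied consistently). The numbers are illustrative only:

```python
def model_speed(full_u, full_len, model_len, nu_full, nu_model):
    """Flow speed at which a scale model matches the full-scale
    Reynolds number Re = u * L / nu (dynamic similitude)."""
    re_full = full_u * full_len / nu_full
    return re_full * nu_model / model_len

# A 1:10 model of a 2 m body moving at 5 m/s in air, tested in water:
u = model_speed(full_u=5.0, full_len=2.0, model_len=0.2,
                nu_full=1.5e-5, nu_model=1.0e-6)
print(u)   # ~3.3 m/s in water, versus 50 m/s if air were used
```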
See also
Aerodynamic drag
Angle of attack
Morison equation
Newton's sine-square law of air resistance
Stall (flight)
Terminal velocity
References
External links
Drag (physics)
Equations of fluid dynamics
Aircraft wing design | Drag equation | [
"Physics",
"Chemistry"
] | 1,506 | [
"Drag (physics)",
"Equations of fluid dynamics",
"Equations of physics",
"Fluid dynamics"
] |
171,816 | https://en.wikipedia.org/wiki/The%20Extended%20Phenotype | The Extended Phenotype is a 1982 book by the evolutionary biologist Richard Dawkins, in which the author introduced a biological concept of the same name. The book's main idea is that phenotype should not be limited to biological processes such as protein biosynthesis or tissue growth, but extended to include all effects that a gene has on its environment, inside or outside the body of the individual organism.
Dawkins considers The Extended Phenotype to be a sequel to The Selfish Gene (1976) aimed at professional biologists, and as his principal contribution to evolutionary theory.
Summary
Genes as the unit of selection in evolution
The central thesis of The Extended Phenotype, and of its predecessor by the same author, The Selfish Gene, is that individual organisms are not the true units of natural selection. Instead, the gene — or the 'active, germ-line replicator' — is the unit upon which the forces of evolutionary selection and adaptation act. It is genes that succeed or fail in evolution, meaning that they either succeed or fail in replicating themselves across multiple generations.
These replicators are not subject to natural selection directly, but indirectly through their "phenotypical effects". These effects are all the effects that the gene (or replicator) has on the world at large, not just in the body of the organism in which it is contained. In taking as its starting point the gene as the unit of selection, The Extended Phenotype is a direct extension of Dawkins' first book, The Selfish Gene.
Genes synthesise only proteins
Dawkins argues that the only thing that genes control directly is the synthesis of proteins; restricting the idea of the phenotype to apply only to the phenotypic expression of an organism's genes in its own body is an arbitrary limitation that ignores the effect a gene may have on an organism's environment through that organism's behaviour.
Genes may affect more than the organism's body
Dawkins proposes there are three forms of extended phenotype. The first is the capacity of animals to modify their environment using architectural constructions, for which Dawkins provides as examples caddis houses and beaver dams.
The second form is manipulation of other organisms: The morphology of a living organism, and possibly of that organism's behaviour, may influence not just the fitness of the organism itself, but that of other living organisms as well. One example of this is parasite manipulation. This refers to the capacity, found in some parasite-host interactions, for the parasite to modify the behaviour of the host in a way that enhances the parasite's own fitness. One well-known example of this second type of extended phenotype is the suicidal drowning of crickets infected by hairworm, a behaviour that is essential to the parasite's reproductive cycle. Another example is seen in female mosquitoes carrying malaria parasites. The mosquitoes infected with the parasites whose preferred hosts are humans have been shown in a field experiment to be significantly more attracted to human breath and odours than uninfected mosquitoes when the parasites are at a point in their life cycle where they can infect a human target.
The third form of extended phenotype is action at a distance of the parasite on its host. A common example is the manipulation of host behaviour by cuckoo chicks, which elicit intensive feeding by the host birds. Here the cuckoo does not interact directly with the host (which could be meadow pipits, dunnocks or reed warblers). The relevant adaptation lies in the cuckoo producing eggs and chicks that resemble sufficiently those of the host species so that they are not immediately ejected from the nest. These behavioural modifications are not physically associated with individuals of the host species but influence the expression of its behavioural phenotype.
Dawkins summarizes these ideas in what he terms the Central Theorem of the Extended Phenotype: "An animal's behaviour tends to maximize the survival of the genes 'for' that behaviour, whether or not those genes happen to be in the body of the particular animal performing it."
Gene-centred view of life
In developing this argument, Dawkins aims to strengthen the case for a gene-centric view of the evolution of life forms, to the point where it is recognized that the organism itself needs to be explained. This is the challenge which he takes up in the final chapter entitled "Rediscovering the Organism". The concept of extended phenotype has been generalized in an organism-centered view of evolution with the concept of niche construction, in the case where natural selection pressures can be modified by the organisms during the evolutionary process.
Reception
A technical review of The Extended Phenotype in the Quarterly Review of Biology states that it is an "interesting and thought provoking book, once one gets to the last five chapters." In the reviewer's opinion, the book poses interesting questions, such as "What is the survival value of packaging life into discrete units called 'organisms' even though the units of selection appear to be individual 'replicators'?" The reviewer states that no "satisfactory answer is given" to this question in the book, though Dawkins suggests that replicators that "interact favorably to create 'vehicles' (organisms) may be at an advantage over those that do not (Chapter 14)." The reviewer takes issue with the first nine chapters as being essentially a defense of Dawkins's first book, The Selfish Gene.
Another review in American Scientist praises the book for convincingly promoting the idea of replication as being central to the evolutionary process. However, in the reviewer's opinion, "its main theme - that the gene is the only unit of selection - results from incorrectly interpreting the constraints on organismal adaptation and from too narrow an interpretation of replication, a process of more general relevance than the author is willing to allow."
Uses and limitations
The concept of extended phenotype has provided a useful frame for subsequent scientific work. For example, research into the relationship between "the bacterial flora of the gut and their mammalian hosts" which "has become a hot topic of late" makes use of this concept.
Subsequent proponents expand the theory and posit that many organisms within an ecosystem can alter the selective pressures on all of them by modifying their environment in various ways. Dawkins himself asserted, "Extended phenotypes are worthy of the name only if they are candidate adaptations for the benefit of alleles responsible for variations in them". As an illustration, one might ask: could an architect's buildings be considered part of his or her extended phenotype, much as a beaver's dam is part of its extended phenotype? Dawkins' answer is No: in humans, an "architect's specific alleles are neither more nor less likely to be selected based on the design of his or her latest building."
See also
Group selection
Inclusive fitness
Kin selection
References
External links
The Tactless Meme - by Jon Seger, New Scientist
Book profile - from The World of Richard Dawkins
1982 non-fiction books
Books about evolution
Books by Richard Dawkins
English-language non-fiction books
English non-fiction books
Evolutionary biology concepts
Modern synthesis (20th century)
Oxford University Press books
Sequel books | The Extended Phenotype | [
"Biology"
] | 1,438 | [
"Evolutionary biology concepts"
] |
171,829 | https://en.wikipedia.org/wiki/Skid-steer%20loader | A skid loader, skid-steer loader (SSL), or skidsteer is any of a class of compact heavy equipment with lift arms that can attach to a wide variety of buckets and other labor-saving tools or attachments.
The wheels typically have no separate steering mechanism and hold a fixed straight alignment on the body of the machine. Turning is accomplished by differential steering, in which the left and right wheel pairs are operated at different speeds, and the machine turns by skidding or dragging its fixed-orientation wheels across the ground. Skid-steer loaders are capable of zero-radius turning, by driving one set of wheels forward while simultaneously driving the opposite set of wheels in reverse. This "zero-turn" capability (the machine can turn around within its own length) makes them extremely maneuverable and valuable for applications that require a compact, powerful and agile loader or tool carrier in confined-space work areas.
Like other front loaders, they can push material from one location to another, carry material in the bucket, load material into a truck or trailer and perform a variety of digging and grading operations.
History
The first three-wheeled, front-end loader was invented by brothers Cyril and Louis Keller in Rothsay, Minnesota, in 1957. The Kellers built the loader to help a farmer, Eddie Velo, mechanize the process of cleaning turkey manure from his barn. The light and compact machine, with its rear caster wheel, was able to turn around within its own length while performing the same tasks as a conventional front-end loader, hence its name.
The Melroe brothers, of Melroe Manufacturing Company in Gwinner, North Dakota, purchased the rights to the Keller loader in 1958 and hired the Kellers to continue refining their invention. As a result of this partnership, the M-200 Melroe self-propelled loader was introduced at the end of 1958. It featured two independent front-drive wheels and a rear caster wheel, an engine and a lift capacity. Two years later they replaced the caster wheel with a rear axle and introduced the M-400, the first four-wheel, true skid-steer loader. The M-440 was powered by an engine and had a rated operating capacity. Skid-steer development continued into the mid-1960s with the M600 loader. Melroe adopted the well-known Bobcat trademark in 1962.
By the late 1960s, competing heavy equipment manufacturers were selling machines of this form factor.
Operation
Skid-steer loaders are typically four-wheeled or tracked vehicles with the front and back wheels on each side mechanically linked together to turn at the same speed, and with the left-side drive wheels driven independently of the right-side drive wheels. This is accomplished by having two separate and independent transmissions: one for the left-side wheels and one for the right-side wheels. The earliest versions of skid steer loaders used forward and reverse clutch drives. Virtually all modern skid steers designed and built since the mid-1970s use two separate hydrostatic transmissions (one for the left side and one for the right side).
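The left/right split described above is the standard differential-drive arrangement, and its kinematics are easy to state. The sketch below is a minimal illustration; the function name, sign convention, and track width are assumptions for this example, not specifications of any particular machine:

```python
def body_motion(v_left, v_right, track_width):
    """Differential-steer kinematics for a skid steer.

    v_left, v_right: ground speeds of the left and right wheel pairs (m/s).
    track_width: lateral distance between the left and right wheels (m).
    Returns the forward speed of the machine and its yaw rate.
    """
    v = (v_left + v_right) / 2.0               # forward speed of the body
    omega = (v_right - v_left) / track_width   # yaw rate (rad/s); positive = turn left
    return v, omega

print(body_motion(1.0, 1.0, 1.5))    # equal speeds: drives straight, (1.0, 0.0)
print(body_motion(-1.0, 1.0, 1.5))   # opposite speeds: zero-radius spin in place
```

Driving the two sides at equal and opposite speeds gives zero forward speed and a pure yaw rate, which is the "zero-turn" behaviour described earlier.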
The differential steering, zero-turn capability, and lack of visibility, often exacerbated by carrying loads, mean that safe operation of these machines requires the operator to have a good field of vision, good hand-eye coordination, manual dexterity, and the ability to remember and perform multiple actions at once. Before allowing anyone, including adults, to operate a skid steer, they should be assessed on their ability to operate the machine safely and trained in its safe operation. In the US, it is illegal for youth under age 18 employed in non-agricultural jobs to operate a skid steer. For youth hired to work in agriculture, it is recommended that they be at least 16 years old and have an adult assess their abilities using the Agricultural Youth Work Guidelines before being allowed to operate a skid steer.
Beacon lights and reverse signal alarms, which warn co-workers of the skid steer's movements, are another consideration. These alarms are not always standard equipment on all farm or landscape skid steer machines, depending on factors such as the age of the machine. Use and continued maintenance of these alarms greatly reduces the risk of incidents involving running over co-workers or pinning them between the machine and an obstacle. Construction sites and their business contract requirements often call for landscapers to have operational skid steer reverse signal alarms and beacon lights.
The extremely rigid frame and strong wheel bearings prevent the torsional forces caused by this dragging motion from damaging the machine. As with tracked treads, the high ground friction produced by skid steers can rip up soft or fragile road surfaces. They can be converted to low ground friction by using specially designed wheels such as the Mecanum wheel.
Skid-steer loaders are sometimes equipped with tracks instead of the wheels, and such a vehicle is known as a compact track loader.
Skid steer loaders, both wheel and track models, operate most efficiently when they are imbalanced – either the front wheels or the back wheels are more heavily loaded. When equipped with an empty bucket, skid steer loaders are all heavier in the rear and the rear wheels pivot in place while the front wheels slide around. When a bucket is fully loaded, the weight distribution reverses and the front wheels become significantly heavier than the rear wheels. When making a zero-turn while loaded, the front wheels pivot and the rear wheels slide.
Imbalanced operation reduces the amount of power required to turn the machine and minimizes tire wear. Skilled operators always try to keep the machine more heavily loaded on either the front or the rear of the machine. When the weight distribution is 50/50 (or close to it) neither the front set of wheels nor the rear set of wheels wants to pivot or slide and the machine starts to "buck" due to high friction, evenly divided between front and rear axles. Tire wear increases significantly in this condition.
Unlike in a conventional front loader, the lift arms in these machines are alongside the driver with the pivot points behind the driver's shoulders. Because of the operator's proximity to moving booms, early skid loaders were not as safe as conventional front loaders, particularly due to the lack of a rollover protection structure. Modern skid loaders have cabs, open or fully enclosed which can serve as rollover protective structures (ROPS) and falling object protective structures (FOPS). The ROPS, FOPS, side screens and operator restraints make up the “zone of protection” in a skid steer, and are designed to reduce the possibility of operator injury or death. The FOPS shields the operator's cab from falling debris, and the ROPS shields the operator in the case of an overturn. The side screens prevent the operator from becoming wedged between the lift arms and the skid steer frame as well as from being struck by protrusions (such as limbs). The operator is secured in the operator seat when the seat belt or seat-bar restraint is utilized, keeping them within the zone of protection. Safety features and safe operation are important because skid steer loaders are hazardous when safety practices are not observed. Rollover incidents and being crushed by moving parts are the most common causes of serious injuries and death associated with skid steer loaders.
Attachments
The conventional bucket of many skid loaders can be replaced with a variety of specialized buckets or attachments, many powered by the loader's hydraulic system. The list of attachments available is virtually endless. Some examples include Dura Graders, backhoe, hydraulic breaker, pallet forks, angle broom, sweeper, auger, mower, snow blower, stump grinder, tree spade, trencher, dumping hopper, pavement miller, ripper, tillers, grapple, tilt, roller, snow blade, wheel saw, cement mixer, and wood chipper machine.
Some models of skid steer now also have an automatic attachment changer mechanism. This allows a driver to change between a variety of terrain handling, shaping, and leveling tools without having to leave the machine, by using a hydraulic control mechanism to latch onto the attachments. Traditionally, hydraulic supply lines to powered attachments may be routed so that the couplings are located near the cab, and the driver does not need to leave the machine to connect or disconnect those supply lines. Recently, manufacturers have also created automatic hydraulic connection systems that allow changing attachments without having to manually connect or disconnect hydraulic lines.
Loader-arm design
Radial lift
The original skid-steer loader arms were designed using a hinge near the top of the loader frame towers at the rear of the machine. When the loader arms were raised, the mechanism would pivot the loader arm up into the air in an arc that would swing up over the top of the operator. This is known as a radial lift loader. This design is simple to manufacture and low in cost. Radial lift loaders start with the bucket close to the machine when the arms are fully down; the bucket moves up and forward away from the machine as the arms are raised. This provides greater forward reach at the mid-point of the lift, for dumping at around four to five feet, but less stability at the middle of the lift arc (because the bucket is so much further forward). As the loader arms continue to rise past mid-height, the bucket begins to move back closer to the machine and becomes more stable at full lift height, but also has far less forward reach at full height.
Radial lift machines are lower in cost and tend to be preferred by users who do a lot of work at lower lift-arm heights, such as digging and spreading materials at low heights. Radial lift designs have very good lift capacity/stability when the loader arms are all the way down and become less stable (lower lift capacity) as the arms reach mid-point and the bucket is furthest forward. Static stability increases as the arms continue to rise, but raised loads are inherently less stable and safe for all machine types. One downside of the radial lift design is that when fully raised, the bucket sits back closer to the machine, so it has relatively poor reach when trying to load trucks, hoppers or spreaders. In addition, the bucket is almost over the operator's head, and spillage over the back of the bucket can end up on top of the machine or in the operator's lap. Another downside of radial lift machines is that the large frame towers to which the loader arms are attached tend to restrict the operator's visibility to the rear and back corners of the machine. The radial arm is still the most common design and preferred by many users, but almost all manufacturers that started with radial lift designs have also begun producing "vertical lift" designs.
Vertical lift
"Vertical lift" designs use additional links and hinges on the loader arm, with the main pivot points towards the center or front of the machine. This allows the loader arm to have greater operating height and reach while retaining a compact design. There are no truly "vertical lift" designs in production. All loaders use multiple links (that all move in radial arcs) which aim to straighten the lift path of the bucket as it is raised. This allows close to vertical movement at points of the lift range, to keep the bucket forward of the operator's cab, allowing safe dumping into tall containers or vehicles. Some designs have more arc in the lowest part of the lift arc while other designs have more arc near the top of the lift arc.
One downside of vertical lift designs is somewhat higher cost and complexity of manufacturing. Some vertical lift designs may also have reduced rear or side visibility when the arms are down low, but superior visibility as the arms are raised (especially if the design does not require a large rear frame tower). Most vertical lift machines provide more constant stability as the arms are raised from the fully-lowered to the fully-raised position, since the bucket (load) remains at a similar distance from the machine from bottom to top of the lift path. As a side benefit of constant stability, most vertical lift machines have larger bucket capacities and longer, flatter low-profile buckets that can carry more material per cycle and tend to provide smoother excavating and grading than short-lip buckets. Vertical lift designs have grown rapidly in popularity in the past thirty years and now make up a significant proportion of new skid loader sales.
Loader arm safety precautions
When controls are activated, the loader or lift arm attachments can move and crush individuals who are within the range of the machinery. To prevent injuries, it is strongly advisable for operators not to start or operate controls from outside the cab. When in the operator's seat, the operator should always fasten the seatbelt and lower the safety bar to stay securely in the cab and avoid being crushed. Operators should also ensure that any helpers or bystanders are clear of the machine before starting it.
Applications
A skid-steer loader can sometimes be used in place of a large excavator by digging a hole from the inside. This is especially true for digging swimming pools in a back yard where a large excavator cannot fit. The skid loader first digs a ramp leading to the edge of the desired excavation. It then uses the ramp to carry material out of the hole. The skid loader reshapes the ramp making it steeper and longer as the excavation deepens. This method is also useful for digging under a structure where overhead clearance does not allow for the boom of a large excavator, such as digging a basement under an existing house. Several companies make backhoe attachments for skid-steers. These are more effective for digging in a small area than the method above and can work in the same environments.
Other applications may consist of transporting raw material around a job site, either in buckets or using pallet forks. Rough terrain forklifts have very poor maneuverability, and smaller "material handling" forklifts have good maneuverability but poor traction. Skid steer loaders have very good maneuverability and traction, but typically lower lift capacity than forklifts.
Skid steer loaders excel at snow removal, especially in smaller parking lots where maneuverability around existing cars, light poles, and curbs is an issue for larger snow plows. Skid steers can also actually remove the snow, rather than just plowing and pushing it into a pile.
Manufacturers
Bobcat
Case
Caterpillar
CNH Industrial N.V.
Dingo Australia
Gehl Company
Hyundai
JCB
John Deere
Komatsu
KOBELCO Construction Machinery Co. Ltd
Kubota
LiuGong
Manitou Group
New Holland
Sany Group Co. Ltd
Takeuchi
Toro
Toyota Industries
Volvo
Wacker Neuson
Yanmar
See also
Amphibious vehicle
Backhoe loader
Bulldozer
Challenger Tractor
Compact excavator
Continuous track
Crane
Excavator
Forestry mulcher
Forklift
Grader
Skid-to-turn
Telescopic handler
Tractor
References
Bibliography
External links
How Skid Steer Loaders and Multi Terrain Loaders work – from HowStuffWorks.com
Preventing Injuries and Deaths from Skid Steer Loaders. U.S. National Institute for Occupational Safety and Health Alert from December 2010.
Preventing Injuries and Deaths from Skid Steer Loaders. U.S. National Institute for Occupational Safety and Health Alert from February 1998.
Skid Steer Loader Safety from Kansas State University
"8th-grade grads invented, Ph.D.s explained their inspirations," by Edward Lotterman, St. Paul Pioneer Press, July 15, 2010
Agricultural machinery
Construction equipment
Engineering vehicles
American inventions | Skid-steer loader | [
"Engineering"
] | 3,231 | [
"Construction",
"Construction equipment",
"Engineering vehicles",
"Industrial machinery"
] |
8,807,453 | https://en.wikipedia.org/wiki/Chrysolaminarin | Chrysolaminarin is a linear polymer of β(1→3) and β(1→6) linked glucose units in a ratio of 11:1. It used to be known as leucosin.
Function
Chrysolaminarin is a storage polysaccharide typically found in photosynthetic heterokonts. It is used as a carbohydrate food reserve by phytoplankton such as Bacillariophyta (similar to the use of laminarin by brown algae).
Chrysolaminarin is stored inside the cells of these organisms dissolved in water and encapsulated in vacuoles whose refractive index increases with chrysolaminarin content. In addition, heterokont algae use oil as a storage compound. Besides serving as an energy reserve, oil helps the algae to control their buoyancy.
Chrysolaminarin is also the major storage polysaccharide of most haptophyte algae.
References
Polysaccharides | Chrysolaminarin | [
"Chemistry"
] | 212 | [
"Carbohydrates",
"Polysaccharides"
] |
8,807,652 | https://en.wikipedia.org/wiki/Timeline%20of%20architectural%20styles | This timeline shows the periods of various architectural styles in a graphical fashion.
6000 BC–present
8000 years – the last 1000 years (fine grid) is expanded in the timeline below
1000 AD–present
1750–1900
1900–present
See also
Timeline of architecture
List of architectural styles
References
Voorthuis – Timelines (archived copy)
External links
Rndrd – a website documenting unbuilt architectural designs representative of the 20th century
Architectural Styles 1750-1900
Architecture lists
Design-related lists
Architectural styles | Timeline of architectural styles | [
"Engineering"
] | 105 | [
"Architectural history",
"Architecture lists",
"Architectural design",
"Design-related lists",
"Design",
"Architecture"
] |
8,808,087 | https://en.wikipedia.org/wiki/Fatty%20acid%20synthase | Fatty acid synthase (FAS) is an enzyme that in humans is encoded by the FASN gene.
Fatty acid synthase is a multi-enzyme protein that catalyzes fatty acid synthesis. It is not a single enzyme but a whole enzymatic system composed of two identical 272 kDa multifunctional polypeptides, in which substrates are handed from one functional domain to the next.
Its main function is to catalyze the synthesis of palmitate (C16:0, a long-chain saturated fatty acid) from acetyl-CoA and malonyl-CoA, in the presence of NADPH.
The fatty acids are synthesized by a series of decarboxylative Claisen condensation reactions from acetyl-CoA and malonyl-CoA. Following each round of elongation the beta keto group is reduced to the fully saturated carbon chain by the sequential action of a ketoreductase (KR), dehydratase (DH), and enoyl reductase (ER). The growing fatty acid chain is carried between these active sites while attached covalently to the phosphopantetheine prosthetic group of an acyl carrier protein (ACP), and is released by the action of a thioesterase (TE) upon reaching a carbon chain length of 16 (palmitic acid).
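The cycle described above implies simple stoichiometric bookkeeping. The sketch below tallies it for a saturated chain; it is a hypothetical helper written for this summary, assuming one acetyl-CoA primer and, per two-carbon elongation round, one malonyl-CoA and two NADPH (one each for the KR and ER reduction steps):

```python
def fas_stoichiometry(chain_length=16):
    """Rough bookkeeping for saturated fatty acid synthesis by FAS.

    Starts from one acetyl-CoA primer (2 carbons); each elongation cycle adds
    2 carbons from malonyl-CoA and consumes 2 NADPH (one each for the
    ketoreductase and enoyl reductase steps).
    """
    assert chain_length % 2 == 0 and chain_length >= 4
    cycles = (chain_length - 2) // 2   # elongation rounds after the primer
    return {
        "acetyl-CoA": 1,
        "malonyl-CoA": cycles,
        "NADPH": 2 * cycles,
        "elongation cycles": cycles,
    }

print(fas_stoichiometry(16))
# {'acetyl-CoA': 1, 'malonyl-CoA': 7, 'NADPH': 14, 'elongation cycles': 7}
```

For palmitate (C16) this gives the familiar totals of one acetyl-CoA, seven malonyl-CoA, and fourteen NADPH.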
Classes
There are two principal classes of fatty acid synthases.
Type I systems utilise a single large, multifunctional polypeptide and are common to both animals and fungi (although the structural arrangement of fungal and animal syntheses differ). A Type I fatty acid synthase system is also found in the CMN group of bacteria (corynebacteria, mycobacteria, and nocardia). In these bacteria, the FAS I system produces palmitic acid, and cooperates with the FAS II system to produce a greater diversity of lipid products.
Type II is found in archaea, bacteria and plant plastids, and is characterized by the use of discrete, monofunctional enzymes for fatty acid synthesis. Inhibitors of this pathway (FASII) are being investigated as possible antibiotics.
The mechanism of FAS I and FAS II elongation and reduction is the same, as the domains of the FAS II enzymes are largely homologous to their domain counterparts in FAS I multienzyme polypeptides. However, the differences in the organization of the enzymes - integrated in FAS I, discrete in FAS II - gives rise to many important biochemical differences.
The evolutionary history of fatty acid synthases are very much intertwined with that of polyketide synthases (PKS). Polyketide synthases use a similar mechanism and homologous domains to produce secondary metabolite lipids. Furthermore, polyketide synthases also exhibit a Type I and Type II organization. FAS I in animals is thought to have arisen through modification of PKS I in fungi, whereas FAS I in fungi and the CMN group of bacteria seem to have arisen separately through the fusion of FAS II genes.
Structure
Mammalian FAS consists of a homodimer of two identical protein subunits, in which three catalytic domains in the N-terminal section (β-ketoacyl synthase (KS), malonyl/acetyltransferase (MAT), and dehydrase (DH)) are separated by a core region (known as the interdomain) of 600 residues from four C-terminal domains (enoyl reductase (ER), β-ketoacyl reductase (KR), acyl carrier protein (ACP) and thioesterase (TE)). The interdomain region allows the two monomeric domains to form a dimer.
The conventional model for organization of FAS (see the 'head-to-tail' model on the right) is largely based on the observations that the bifunctional reagent 1,3-dibromopropanone (DBP) is able to crosslink the active site cysteine thiol of the KS domain in one FAS monomer with the phosphopantetheine prosthetic group of the ACP domain in the other monomer. Complementation analysis of FAS dimers carrying different mutations on each monomer has established that the KS and MAT domains can cooperate with the ACP of either monomer, and a reinvestigation of the DBP crosslinking experiments revealed that the KS active site Cys161 thiol could be crosslinked to the ACP 4'-phosphopantetheine thiol of either monomer. In addition, it has recently been reported that a heterodimeric FAS containing only one competent monomer is capable of palmitate synthesis.
The above observations seemed incompatible with the classical 'head-to-tail' model for FAS organization, and an alternative model has been proposed, predicting that the KS and MAT domains of both monomers lie closer to the center of the FAS dimer, where they can access the ACP of either subunit (see figure on the top right).
Low-resolution X-ray crystallography structures of both pig (homodimer) and yeast FAS (heterododecamer), along with a ~6 Å resolution electron cryo-microscopy (cryo-EM) yeast FAS structure, have been solved.
Substrate shuttling mechanism
The solved structures of yeast FAS and mammalian FAS show two distinct organizations of highly conserved catalytic domains/enzymes in this multi-enzyme cellular machine. Yeast FAS has a highly efficient rigid barrel-like structure with 6 reaction chambers which synthesize fatty acids independently, while the mammalian FAS has an open flexible structure with only two reaction chambers. However, in both cases the conserved ACP acts as the mobile domain responsible for shuttling the intermediate fatty acid substrates to various catalytic sites. A first direct structural insight into this substrate shuttling mechanism was obtained by cryo-EM analysis, where ACP is observed bound to the various catalytic domains in the barrel-shaped yeast fatty acid synthase. The cryo-EM results suggest that the binding of ACP to various sites is asymmetric and stochastic, as also indicated by computer-simulation studies.
Regulation
Metabolism and homeostasis of fatty acid synthase is transcriptionally regulated by Upstream Stimulatory Factors (USF1 and USF2) and sterol regulatory element binding protein-1c (SREBP-1c) in response to feeding/insulin in living animals.
Although liver X receptors (LXRs) modulate the expression of sterol regulatory element binding protein-1c (SREBP-1c) in feeding, regulation of FAS by SREBP-1c is USF-dependent.
Acylphloroglucinols isolated from the fern Dryopteris crassirhizoma show a fatty acid synthase inhibitory activity.
Clinical significance
The FASN gene has been investigated as a possible oncogene. FAS is upregulated in breast and gastric cancers, as well as being an indicator of poor prognosis, and so may be worthwhile as a chemotherapeutic target. FAS inhibitors are therefore an active area of drug discovery research.
FAS may also be involved in the production of an endogenous ligand for the nuclear receptor PPARalpha, the target of the fibrate drugs for hyperlipidemia, and is being investigated as a possible drug target for treating the metabolic syndrome. Orlistat which is a gastrointestinal lipase inhibitor also inhibits FAS and has a potential as a medicine for cancer.
In some cancer cell lines, this protein has been found to be fused with estrogen receptor alpha (ER-alpha), in which the N-terminus of FAS is fused in-frame with the C-terminus of ER-alpha.
An association with uterine leiomyomata has been reported.
See also
Discovery and development of gastrointestinal lipase inhibitors
Fatty acid synthesis
Fatty acid metabolism
Fatty acid degradation
Enoyl-acyl carrier protein reductase
List of fatty acid metabolism disorders
References
Further reading
External links
Fatty Acid Synthesis: Rensselaer Polytechnic Institute
Fatty Acid Synthase: RCSB PDB Molecule of the Month
3D electron microscopy structures of fatty acid synthase from the EM Data Bank(EMDB)
PDBe-KB provides an overview of all the structure information available in the PDB for Human Fatty acid synthase
Transferases
EC 2.3.1
Metabolism
Fatty acids
NADPH-dependent enzymes
Enzymes of known structure | Fatty acid synthase | [
"Chemistry",
"Biology"
] | 1,816 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
8,809,013 | https://en.wikipedia.org/wiki/Frit | A frit is a ceramic composition that has been fused, quenched, and granulated. Frits form an important part of the batches used in compounding enamels and ceramic glazes; the purpose of this pre-fusion is to render any soluble and/or toxic components insoluble by causing them to combine with silica and other added oxides.
However, not all glass that is fused and quenched in water is frit, as this method of cooling down very hot glass is also widely used in glass manufacture.
According to the OED, the origin of the word "frit" dates back to 1662 and is "a calcinated mixture of sand and fluxes ready to be melted in a crucible to make glass". Nowadays, the unheated raw materials of glass making are more commonly called "glass batch".
In antiquity, frit could be crushed to make pigments or shaped to create objects. It may also have served as an intermediate material in the manufacture of raw glass. The definition of frit tends to be variable and has proved a thorny issue for scholars. In recent centuries, frits have taken on a number of roles, such as biomaterials and additives to microwave dielectric ceramics. Frit in the form of alumino-silicate can be used in glaze-free continuous casting refractories.
Ancient frit
Archaeologists have found evidence of frit in Egypt, Mesopotamia, Europe, and the Mediterranean. The definition of frit as a sintered, polycrystalline, unglazed material can be applied to these archaeological contexts. It is typically colored blue or green.
Blue frit
Blue frit, also known as Egyptian blue, was made from quartz, lime, a copper compound, and an alkali flux, all heated to a temperature between 850 and 1000 °C. Quartz sand may have been used to contribute silica to the frit. The copper content must be greater than the lime content in order to create a blue frit. Ultimately the frit consists of cuprorivaite (CaCuSi4O10) crystals and "partially reacted quartz particles bonded together" by interstitial glass. Despite an argument to the contrary, scientists have found that, regardless of alkali content, the cuprorivaite crystals develop by "nucleation or growth within a liquid or glass phase". However, alkali content—and the coarseness of the cuprorivaite crystals—contribute to the shade of blue in the frit. High alkali content will yield "a large proportion of glass", thereby diluting the cuprorivaite crystals and producing lighter shades of blue. Regrinding and resintering the frit will create finer cuprorivaite crystals, also producing lighter shades.
The earliest appearance of blue frit is as a pigment on a tomb painting at Saqqara dated to 2900 BC, though its use became more popular in Egypt around 2600 BC. Blue frit has also been uncovered in the royal tombs at Ur from the Early Dynastic III period. Its use in the Mediterranean dates to the Thera frescoes from the Late Middle Bronze Age.
While the glass phase is present in blue frits from Egypt, scientists have not detected it in blue frits from the Near East, Europe, and the Aegean. Natural weathering, which is also responsible for the corrosion of glasses and glazes from these three regions, is the likely reason for this absence.
At Amarna, archaeologists have found blue frit in the form of circular cakes, powder residues, and vessel fragments. Analysis of the microstructures and crystal sizes of these frits has allowed Hatton, Shortland, and Tite to deduce the connection among the three materials. The cakes were produced by heating the raw materials for frit, then they were ground to make powders, and finally, the powders were molded and refired to create vessels.
In On Architecture, the first century BC writer Vitruvius reports the production of 'caeruleum' (a blue pigment) at Pozzuoli, made by a method used in Alexandria, Egypt. Vitruvius lists the raw materials for caeruleum as sand, copper filings, and 'nitrum' (soda). In fact, analysis of some frits that date to the time of Thutmose III and later show the use of bronze filings instead of copper ore.
Stocks suggests that waste powders from the drilling of limestone, combined with a minor concentration of alkali, may have been used to produce blue frits. The powders owe their copper content to the erosion of the copper tubular drills used in the drilling process. However, the archaeological record has not yet confirmed such a relationship between these two technologies.
Green frit
Evidence of the use of green frit is so far confined to Egypt. Alongside malachite, green frit was usually employed as a green pigment. Its earliest occurrence is in tomb paintings of the 18th dynasty, but its use extends at least to the Roman period. The manufacture of green and blue frit relies on the same raw materials, but in different proportions. To produce green frit, the lime concentration must outweigh the copper concentration. The firing temperature required for green frit may be slightly higher than that of blue frit, in the range of 950 to 1100 °C. The ultimate product is composed of copper-wollastonite ([Ca,Cu]3Si3O9) crystals and a "glassy phase rich in copper, sodium, and potassium chlorides". In certain circumstances (the use of a two-step heating process, the presence of hematite), scientists were able to make a cuprorivaite-based blue frit that later became a copper-wollastonite-based green frit at a temperature of 1050 °C. On some ancient Egyptian wall paintings, pigments that were originally blue are now green: the blue frit can "devitrify" so that the "copper wollastonite predominates over the lesser component of cuprorivaite". As with blue frit, Hatton, Shortland, and Tite have analyzed evidence for green frit at Amarna in the form of cakes, powders, and one vessel fragment and inferred the sequential production of the three types of artifacts.
Relationships with glass and faience
An Akkadian text from Assurbanipal's library at Nineveh suggests that a frit-like substance was an intermediate material in the production of raw glass. This intermediate step would have followed the grinding and mixing of the raw materials used to make glass. An excerpt of Oppenheim's translation of Tablet A, Section 1 of the Nineveh text reads:
The steps that follow involve reheating, regrinding and finally gathering the powder in a pan. Following the Nineveh recipe, Brill was able to produce a "high quality" glass. He deduced that the frit intermediate is necessary so that gases will evolve during this stage and the end product will be virtually free of bubbles. Furthermore, grinding the frit actually expedites the "second part of the process, which is to ... reduce the system to a glass".
Moorey has defined this intermediate step as "fritting", "a process in which the soluble salts are made insoluble by breaking down the carbonates, etc. and forming a complex mass of sintered silicates". A frit preserved in a "fritting pan fragment" kept in the Petrie Museum "shows numerous white flecks of unreacted silica and a large number of vesicles where gases had formed". The process was known to ancient writers Pliny and Theophilus.
But whether this "fritting" was done in antiquity as a deliberate step in the manufacture of raw glass remains questionable. The compositions of frits and glasses recovered from Amarna do not agree in a way that would imply frits were the immediate precursors of glasses: the frits have lower concentrations of soda and lime and higher concentrations of cobalt and alumina than the glasses have.
Scholars have suggested several potential connections between frit and faience. Kühne proposes that frit may have acted as the "binding agent for faience" and suggests that this binder was composed predominantly of silica, alkali and copper with minor concentrations of alkali earths and tin. But analysis of a wide array of Egyptian frits contradicts the binder composition that Kühne offers. Vandiver and Kingery argue that one method of producing a faience glaze was to "frit or melt the glaze constituents to form a glass", then grind the glass and form a slurry in water, and finally apply the glaze "by dipping or painting." However, their use of "frit" as virtually synonymous with "melt" represents yet another unique take on what a "frit" would constitute. Finally, Tite et al. report that frits, unusually colored blue by cobalt, found in "fritting pans" at Amarna have compositions and microstructures similar to that of vitreous faience, a higher-temperature form of Egyptian faience that incorporated cobalt into its body. In their reconstruction of the manufacture of vitreous faience, Tite et al. propose that the initial firing of raw materials at 1100–1200 °C produces a cobalt-blue frit, which is then ground, molded, and glazed.
In general, frits, glasses and faience are similar materials: they are all silica-based but have different concentrations of alkali, copper and lime. However, as Nicholson states, they are distinct materials because "it would not be possible to turn faience into frit or frit into glass simply by further, or higher temperature, heating".
The use of frit as pigments and as entire objects does give credence to the idea that frit-making was, to some extent, a "specialized" industry. Indeed, scientists have determined that frit objects, such as amulets, beads and vessels, have chemical compositions similar to those of powder frits designed for use as pigments. Nevertheless, determining the exact technical relationships among the frit, glass and faience industries is an area of current and, likely, future scholarly interest. The excavations at Amarna offer a spatial confirmation of these potential relationships, as the frit, glass and faience industries there were located "in close proximity" to one another.
Fritware
Fritware refers to a type of pottery which was first developed in the Near East, where production is dated to the late first millennium AD through the second millennium AD. Frit was a significant ingredient. A recipe for "fritware" dating to c. 1300 AD written by Abu’l Qasim reports that the ratio of quartz to "frit-glass" to white clay is 10:1:1. This type of pottery has also been referred to as "stonepaste" and "faience" among other names. A ninth-century corpus of "proto-stonepaste" from Baghdad has "relict glass fragments" in its fabric. The glass is alkali-lime-lead-silica and, when the paste was fired or cooled, wollastonite and diopside crystals formed within the glass fragments. The lack of "inclusions of crushed pottery" suggests these fragments did not come from a glaze. The reason for their addition would have been to release alkali into the matrix on firing, which would "accelerate vitrification at a relatively low firing temperature, and thus increase the hardness and density of the [ceramic] body". Whether these "relict glass fragments" are actually "frit" in the more ancient sense remains to be seen.
Iznik pottery was produced in Ottoman Turkey as early as the 15th century AD. It consists of a body, slip, and glaze, where the body and glaze are "quartz-frit". The "frits" in both cases "are unusual in that they contain lead oxide as well as soda"; the lead oxide would help reduce the thermal expansion coefficient of the ceramic. Microscopic analysis reveals that the material that has been labeled "frit" is "interstitial glass" which serves to connect the quartz particles. Tite argues that this glass was added as frit and that the interstitial glass formed on firing.
Frit was also a significant component in some early European porcelains. Famous manufacturers of the 18th century included Sèvres in France, and at Chelsea, Derby, Bow, Worcester and Longton Hall in England. Frit porcelain is produced at Belleek, County Fermanagh, Northern Ireland. This factory, established in 1857, produces ware that is characterised by its thinness, slightly iridescent surface and that the body is formulated with a significant proportion of frit.
A small manufacturing cluster of fritware exists around Jaipur, Rajasthan in India, where it is known as 'Blue Pottery' due to its most popular glaze. The technique may have arrived in India with the Mughals, with production in Jaipur dating to at least as early as the 17th century.
Modern frit
Frits are indispensable constituents of most industrial ceramic glazes which mature at temperatures below 1150 °C. Frits are typically intermediates in the production of raw glass, as opposed to pigments and shaped objects, but they can be used as laboratory equipment in a number of high-tech contexts.
Frits made predominantly of silica, diboron trioxide (B2O3) and soda are used as enamels on steel pipes. Another type of frit can be used as a biomaterial, which is a material made to become a part of, or to come into intimate contact with, one or more living organisms. Molten soda-lime-silica glass can be "poured into water to obtain a frit", which is then ground to a powder. These powders can be used as "scaffolds for bone substitutions". Also, certain frits can be added to high-tech ceramics: such frits are made by milling zinc oxide (ZnO) and boric acid (H3BO3) with zirconium (Zr) beads, then heating this mixture to 1100 °C, quenching it, and grinding it. This frit is then added to a lithium titanate (Li2TiO3) ceramic powder, which enables the ceramic to sinter at a lower temperature while still keeping its "microwave dielectric properties".
In laboratory and industrial chemical process equipment, the term frit denotes a filter made by the sintering-together of glass particles to produce a piece of known porosity referred to as fritted glass.
Automotive windshields incorporate a dark band of ceramic dots around the edges called a frit.
In 2008, the Spanish ceramic frit, glaze and colours industry included 27 companies employing nearly 4,000 people with a combined annual turnover of approximately €1 billion. In 2022, the global market for ceramic frits was estimated to be worth a total of US$ 1.67 billion.
See also
References
External links
Blue frit from Amarna now in the Petrie Museum
Fritting pan with green frit now in the Petrie Museum
An Archaeometallurgical Explanation for the Disappearance of Egyptian and Near Eastern Cobalt-Blue Glass at the end of the Late Bronze Age Internet Archaeology
Frit Consortium a trade body for 34 European frit manufacturers
Ceramic materials
Warm glass | Frit | [
"Engineering"
] | 3,235 | [
"Ceramic engineering",
"Ceramic materials"
] |
8,809,081 | https://en.wikipedia.org/wiki/Flock%20%28birds%29 | A flock is a gathering of individual birds to forage or travel collectively. Avian flocks are typically associated with migration. Flocking also offers foraging benefits and protection from predators, although flocking can have costs for individual members.
Flocks are often defined as groups consisting of individuals from the same species. However, mixed flocks consisting of two or more species are also common. Avian species that tend to flock together are typically similar in taxonomy and share morphological characteristics such as size and shape. Mixed flocks offer increased protection against predators, which is particularly important in closed habitats such as forests where early warning calls play a vital importance in the early recognition of danger. The result is the formation of many mixed-species feeding flocks.
Mixed flocks
While mixed flocks are typically described as comprising two or more species, it is more precisely two different behavioural roles that compose a mixed flock. Within a mixed flock there are two behavioural roles: sallies and gleaners. Sallies are individuals that act as guards of the flock and capture prey in the air during flight. Gleaners, on the other hand, are those that consume prey living within vegetation.
Studies have shown that as resources in the aerial environment increase, the flock will contain more sallies than gleaners. This has been shown to occur during forest fires, in which insects are flushed from vegetation; however, such flushing can also be caused by the gleaners. When gleaners take prey from vegetation, other prey within the vegetation are flushed out into the aerial environment. Through this feeding behaviour among vegetation, the gleaners indirectly increase the foraging rate of the sallies.
Birds that are rarer, and therefore less abundant in an environment, are more likely to engage in this mixed-flock behaviour. Although such a bird is more likely to be a subordinate, its ability to obtain food increases substantially. It is also less likely to be attacked by a predator, because predators have a lower success rate when attacking large flocks.
Safety from predation
The ability to avoid predation is one of the most important skills necessary to increase one's fitness. Ground squirrels living in colonies, for example, recognize predators rapidly. The squirrel is then able to use vocalizations to warn conspecifics of the possible threat. This simple example demonstrates that flocking behaviour is not restricted to bird species or herds of sheep; it is also apparent in other animals, such as rodents. This alarm call of the ground squirrel requires the animal first to recognize that danger is present and then to react. This type of behaviour is also seen in some birds. It is important to note that by making an alarm call to signal members of the flock, an individual provides the predator with an acoustical cue to the location of a possible prey. The benefit arises if the members of the flock are genetically related to one another: in that case, even if the bird that signalled the flock were to die, its inclusive fitness would not decrease, according to Hamilton's Rule (stated below). However, another study involving thick-knees challenged whether an animal had to recognize the presence of a predator for protection against it.
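For reference, Hamilton's rule says an allele for altruistic behaviour (such as alarm calling) is favoured by selection when

```latex
r\,b > c
```

where r is the genetic relatedness between actor and recipient, b is the fitness benefit to the recipient, and c is the fitness cost to the actor. This is the standard statement of the rule; its application to alarm calling here is illustrative.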
Thick-knees are birds that are seen in large flocks during particular seasons in various regions of the world. During the nonbreeding season, Peruvian thick-knees in Chile are reported to have an average of 22.5 birds — a mixture of adults and youngsters — in their flocks. Young birds were observed learning anti-predator behaviour strategies from adults during this time. Researchers believe that the flocking behaviour may help to decrease a predator's success rate when attacking the flock, rather than increasing the ability of the flock to spot an approaching predator.
When birds co-exist in a flock, each individual spends less time and energy searching for predators. This mutual protection within the flock is one of the benefits of living in a group. However, as flock size increases, individuals within the flock become more aggressive towards one another, which is one of the costs of living in a flock. Flocks are therefore dynamic, fluctuating in size depending on the needs of individuals, in order to maximize benefits without incurring large costs.
By living in a large flock, birds can attack a predator with a stronger force than a lone bird could. Flocks of black-capped chickadees have shown the ability to produce a mobbing call when they see a possible predator. In response, the individual black-capped chickadees surround the predator and attack it in a mob-like fashion to force the predator to leave. This is known as mobbing. This mobbing behaviour is quickly learned by the juveniles within a flock, meaning that these individuals will be better equipped as adults to ward off predators and respond rapidly when a predator is in sight.
Foraging in flocks
Bird species living in a flock may capture prey, likely injured, from an unsuccessful bird within its flock. This behavior is known as the beater effect and is one of the benefits of birds foraging in a flock with other birds.
Birds in a flock may follow the information-sharing model. In this situation the entire flock searches for food, and the first to find a reliable food source alerts the flock, so that the entire group may benefit from the finding. While this is an obvious benefit of the information-sharing model, the cost is that the social hierarchy of the flock may result in subordinate birds being denied food by those that are dominant. Another cost is the possibility that some individuals may refuse to contribute to the search for food and instead simply wait for another member to find a food resource. Individuals that search for food are known as producers, and those that wait to exploit others' discoveries are known as scroungers.
An intricate hunting system can be seen in the Harris's hawk in which groups of 2–6 hunt a single prey together. The group splits into smaller groups in which it then encloses on a prey, such as a rabbit, before it attacks it. By hunting as a group the Harris's Hawk can hunt larger animals and decrease the amount of energy spent hunting while each hawk in the group is able to eat from the catch.
Black sun
In Denmark, there is a biannual phenomenon known as sort sol (Danish for "black sun"), in which flocks of European starlings gather in vast numbers, creating complex shapes against the sky during the spring. During this time in Denmark, the European starlings gather food and rest as part of their migration journey. Collecting in groups this large enables the European starlings to decrease their risk of predation by hawks.
References
External links
Birds
Ethology
Group processes
Articles containing video clips | Flock (birds) | [
"Biology"
] | 1,387 | [
"Behavior",
"Animals",
"Ethology",
"Behavioural sciences",
"Birds"
] |
8,810,330 | https://en.wikipedia.org/wiki/Zonal%20and%20meridional%20flow | Zonal and meridional flow are directions and regions of fluid flow on a globe.
Zonal flow follows a pattern along latitudinal lines, latitudinal circles or in the west–east direction.
Meridional flow follows a pattern from north to south, or from south to north, along the Earth's longitude lines, longitudinal circles (meridians) or in the north–south direction.
These terms are often used in the atmospheric and earth sciences to describe global phenomena, such as "meridional wind", or "zonal average temperature".
In the context of physics, zonal flow connotes a tendency of flux to conform to a pattern parallel to the equator of a sphere. In meteorological terms regarding atmospheric circulation, zonal flow brings a temperature contrast along the Earth's longitude. Extratropical cyclones in zonal flows tend to be weaker, moving faster and producing relatively little impact on local weather.
Extratropical cyclones in meridional flows tend to be stronger and move slower. This pattern is responsible for most instances of extreme weather, as not only are storms stronger in this type of flow regime, but temperatures can reach extremes as well, producing heat waves and cold waves depending on the equator-ward or poleward direction of the flow.
For vector fields (such as wind velocity), the zonal component (or x-coordinate) is denoted as u, while the meridional component (or y-coordinate) is denoted as v.
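A small sketch of this decomposition; the function name is an assumption for illustration, and the sign convention follows the standard meteorological one, in which the wind direction is the compass bearing the wind blows from:

```python
import math

def wind_components(speed, direction_deg):
    """Zonal (u, eastward) and meridional (v, northward) wind components.

    direction_deg is the compass bearing the wind blows *from*, measured
    clockwise from north (the usual meteorological convention).
    """
    theta = math.radians(direction_deg)
    u = -speed * math.sin(theta)   # zonal: positive = wind blowing toward the east
    v = -speed * math.cos(theta)   # meridional: positive = wind blowing toward the north
    return u, v

print(wind_components(10.0, 270.0))  # westerly wind (from the west): u = +10, v ~ 0
```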
In plasma physics, "zonal flow" means poloidal, which is the opposite from the meaning in planetary atmospheres and weather/climate studies.
See also
Meridione
Zonal flow (plasma)
Zonal/poloidal
Notes
Orientation (geometry) | Zonal and meridional flow | [
"Physics",
"Mathematics"
] | 353 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
8,810,355 | https://en.wikipedia.org/wiki/Rugosity | Rugosity, fr, is a measure of small-scale variations of amplitude in the height of a surface,
where Ar is the real (true, actual) surface area and Ag is the geometric surface area.
Utility
Rugosity calculations are commonly used in materials science to characterize surfaces and, among other fields, in marine science to characterize seafloor habitats. A common technique to measure seafloor rugosity is Risk's chain-and-tape method, but with the advent of underwater photography less invasive quantitative methods have been developed. Some examples include measuring small-scale seafloor bottom roughness from microtopographic laser scanning (Du Preez and Tunnicliffe 2012), and deriving multi-scale measures of rugosity, slope and aspect from benthic stereo image reconstructions (Friedman et al. 2012).
Inconsistency
Despite the popularity of using rugosity for two- and three-dimensional surface analyses, methodological inconsistency has been problematic. Building off recent advances, the new arc-chord ratio (ACR) rugosity index is capable of measuring the rugosity of two-dimensional profiles and three-dimensional surfaces using a single method (Du Preez 2015). The ACR rugosity index is defined as the contoured (real) surface area divided by the area of the surface orthogonally projected onto a plane of best fit (POBF), where the POBF is a function (linear interpolation) of the boundary data only. Using a POBF, instead of an arbitrary horizontal geometric plane, results in an important advantage of the ACR rugosity index: unlike most rugosity indices ACR rugosity is not confounded by slope.
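A minimal sketch of the two-dimensional case (the function name and sample data are assumptions for illustration): for a sampled height profile, the rugosity is the contoured length divided by the chord length; the chord (the linear interpolation of the two boundary points) plays the role that the plane of best fit plays in the full three-dimensional ACR index.

```python
import math

def profile_rugosity(xs, zs):
    """Two-dimensional rugosity of a height profile: contoured length / chord length.

    xs, zs are sampled horizontal positions and heights. The chord connects
    the profile's two boundary points, which for a 2D profile is the linear
    interpolation of the boundary data.
    """
    contoured = sum(math.hypot(x2 - x1, z2 - z1)
                    for (x1, z1), (x2, z2) in zip(zip(xs, zs), zip(xs[1:], zs[1:])))
    chord = math.hypot(xs[-1] - xs[0], zs[-1] - zs[0])
    return contoured / chord

print(profile_rugosity([0, 1, 2, 3], [0, 0, 0, 0]))  # flat profile -> 1.0
print(profile_rugosity([0, 1, 2, 3], [0, 1, 0, 1]))  # rough profile -> ~1.34
```

Because the chord follows the endpoints rather than an arbitrary horizontal plane, a uniformly tilted but otherwise flat profile still scores 1.0, reflecting the slope-invariance of the ACR index noted above.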
Ecology: As a measure of complexity, rugosity is presumed to be an indicator of the amount of habitat available for colonization by benthic organisms (those attached to the seafloor), and of shelter and foraging area for mobile organisms.
Geology: For marine geologists and geomorphologists, rugosity is a useful characteristic in distinguishing different types of seafloors in remote sensing applications (e.g., sonar and laser altimetry, based from ships, planes or satellites).
Oceanography: Among oceanographers, rugosity is recognized to influence small-scale hydrodynamics by converting organized laminar or oscillatory flow into energy-dissipating turbulence.
Coral biology: High rugosity is often an indication of the presence of coral, which creates a complex surface as it grows. A rugose seafloor's tendency to generate turbulence is understood to promote the growth of coral and coralline algae by delivering nutrient-rich water after the organisms have depleted the nutrients from the envelope of water immediately surrounding their tissues.
See also
Glossary of climbing terms: Rugosity
References
External links
Matlab code for calculating multi-scale rugosity, slope and aspect
Concrete
Coatings | Rugosity | [
"Chemistry",
"Engineering"
] | 591 | [
"Structural engineering",
"Concrete",
"Coatings"
] |
8,810,504 | https://en.wikipedia.org/wiki/Academy%20at%20Dundee%20Ranch | Academy at Dundee Ranch was a behavior modification facility for United States teenagers, founded in 1991 and located at La Ceiba Cascajal, west of Orotina, province of Alajuela, Costa Rica. It was promoted as a residential school offering a program of behavior modification, motivational "emotional growth seminars", a progressive academic curriculum, and a structured daily schedule for teenagers struggling in their homes, schools, or communities.
The facility was associated with World Wide Association of Specialty Programs and Schools (WWASP).
In May 2003, authorities in Costa Rica launched an investigation during a visit to Academy at Dundee Ranch. They informed students of their rights, and Narvin Lichfield, the facility's director, was taken into custody for a short time.
Upon returning, he informed students that no one was leaving; the result was a full-scale riot. Lichfield was again taken into custody, and computer files were seized.
A new WWASP facility called Pillars of Hope was opened at the site of Academy at Dundee Ranch in 2004. It is also marketed as Seneca Ranch Second Chance Youth Ranch.
The former director of Dundee Ranch said in a sworn statement in 2003 that WWASP took 75 percent of Dundee Ranch's income, leaving little money to care for its 200 children.
Controversy
During its operation, Dundee Ranch was the subject of multiple allegations of abuse. Parents and enrollees claimed that food was being withheld as punishment. Former students complained of emotional scars due to their stay there.
A judgment in Louisiana caused Costa Rican authorities to investigate the facilities. The investigation included allegations of emotional abuse, isolation, and physical restraint at the facility. A riot occurred at the facility in May 2003 leading to its closure. The Costa Rican immigration authorities found that 100 of the 193 children enrolled in the program did not have appropriate migration papers.
Because of the closure U.S. Representative George Miller asked U.S. Attorney General John Ashcroft to investigate WWASP.
Narvin Lichfield, who was the director at the time of the facility's closure, was jailed in Costa Rica for a brief period at the time of the closure. He was later tried in Costa Rica on charges of coercion, holding minors against their will, and "crimes of an international character" (violating a law based on international treaties, specifically referring to torture).
On February 21, 2007, a three-judge panel found Narvin Lichfield innocent of the charges of abuse. During the trial the prosecutor told the court that there was insufficient evidence and testimony to link Lichfield to the crimes of which he was accused. The Tico Times reported that the judges said they believed the students at Dundee had been abused, but there was no proof that Lichfield ordered the abuse. Three other Academy employees, all Jamaicans, had been wanted in connection with the same case, but they fled Costa Rica following the closure of the academy.
Following the acquittal, Lichfield claimed in an e-mail to A.M. Costa Rica that when the school was raided, police stood by and watched youths sexually assault each other, that police held parents and staff at gunpoint, that one parent was ordered at gunpoint to hang up the phone when she attempted to phone the U.S. Embassy for help, and that police left the school in a shambles.
References
External links
International survivors action committee on Dundee Ranch
Pillars of Hope homepage
Pillars of Hope alternate homepage
Secret prisons for teens about Dundee Ranch/Pillars of Hope
Education in Costa Rica
Educational organizations based in Costa Rica
2003 disestablishments in Costa Rica
Behavior modification
World Wide Association of Specialty Programs and Schools
Troubled teen programs | Academy at Dundee Ranch | [
"Biology"
] | 734 | [
"Behavior modification",
"Behavior",
"Human behavior",
"Behaviorism"
] |
8,810,538 | https://en.wikipedia.org/wiki/Ruffle%20%28sewing%29 | In sewing and dressmaking, a ruffle, frill, or furbelow is a strip of fabric, lace or ribbon tightly gathered or pleated on one edge and applied to a garment, bedding, or other textile as a form of trimming.
Ruffles can be made from a single layer of fabric (which may need a hem) or a doubled layer. Plain ruffles are usually cut on the straight grain.
Ruffles may be gathered by using a gathering stitch, or by passing the fabric through a mechanical ruffler, which is an attachment available for some sewing machines.
A flounce is a particular type of fabric manipulation that creates a similar look but with less bulk. The term derives from earlier terms of frounce or fronce. A wavy effect is achieved without gathers or pleats by cutting a curved (or even circular) strip of fabric and applying the inner or shorter edge to the garment. The depth of the curve as well as the width of the fabric determines the depth of the flounce. A godet is a circle wedge that can be inserted into a flounce to further deepen the outer floating wave without adding additional bulk at the point of attachment to the body of the garment, such as at the hemline, collar or sleeve.
Ruffles appeared at the draw-string necklines of full chemises in the 15th century and evolved into the separately-constructed ruff of the 16th century. Ruffles and flounces remained a fashionable form of trim, off-and-on, into modern times. In the 21st century, ruffles have made a significant comeback as a trendy design element in fashion, particularly in prom and wedding dresses. This resurgence can be attributed to a growing appreciation for romantic and feminine aesthetics, as ruffles add an enchanting flair to garments. Ruffles are versatile and can be incorporated into dresses of all styles, from elegant gowns to playful party dresses, making them appealing to women of all ages. Many renowned fashion brands have embraced this trend, showcasing ruffles as a key feature in their collections. High-end designers and fast-fashion labels alike produce chic items with ruffled details, highlighting their popularity in contemporary fashion. As a result, ruffles have become synonymous with elegance and celebration, allowing wearers to express their personal style while embracing this classic design element.
See also
Nun's veiling, a lightweight sheer woolen cloth, was used for flounces in the 19th century.
Citations
General and cited references
Arnold, Janet: Patterns of Fashion: the cut and construction of clothes for men and women 1560–1620, Macmillan, 1985. Revised edition 1986.
Baumgarten, Linda: What Clothes Reveal: The Language of Clothing in Colonial and Federal America, Yale University Press, 2002.
Oxford English Dictionary
Picken, Mary Brooks: The Fashion Dictionary, Funk and Wagnalls, 1957 (1973 edition).
Smith, Alison: The Sewing Book, Dorling Kindersley Press.
Tozer, Jane and Sarah Levitt: Fabric of Society: A Century of People and their Clothes 1770–1870, Laura Ashley Press.
External links
15th-century fashion
16th-century fashion
Fashion design
Parts of clothing
Sewing | Ruffle (sewing) | [
"Technology",
"Engineering"
] | 667 | [
"Design",
"Fashion design",
"Components",
"Parts of clothing"
] |
8,810,591 | https://en.wikipedia.org/wiki/Formula%20409 | Formula 409 or 409 is an American brand of home and industrial cleaning products well known in the United States, but virtually unknown in other places. It includes Formula 409 All-Purpose Cleaner, Formula 409 Glass and Surface Cleaner, Formula 409 Carpet Cleaner, and many others. The brand is currently owned by Clorox.
The flagship product was invented in 1957 by Morris D. Rouff, whose Michigan company manufactured industrial cleaning supplies. Formula 409’s original application was as a commercial solvent and degreaser for industries that struggled with particularly difficult cleaning problems.
The inventor's son has stated that it was named for the birthday of the inventor's wife, Ruth, on April 9th (409). The company, however, claims that it was named after the 409th compound tested by the two young inventors. Other claimed origins for the name include 409 being the telephone area code where it was invented (area code 409, which serves southeastern Texas, was not introduced until 1983); the birthday of some other person, such as the inventor's daughter; or a reference to a powerful Chevrolet car engine used in the 1960s.
In 1960, Rouff sold Formula 409 to Chemzol, a New York firm, for an amount in the low six-figure range. In the mid-1960s, entrepreneur Wilson Harrell, along with longtime friend David Woodcock and television personality Art Linkletter, bought Formula 409. Harrell, Woodcock & Linkletter bought it for $30,000 and took it national. Linkletter also promoted the product in television commercials. The company eventually took Formula 409 to a 55 percent share of the spray-cleaner market, and six years later, Harrell, Woodcock & Linkletter sold the company to Clorox for $7 million.
In early 2020, Formula 409 became impossible to find in stores and disappeared from the "products" listing at the Clorox website. Some websites say Clorox has discontinued the product, but there has been no announcement or news release, and the website www.formula409.com is still active. Formula 409 is currently (as of November 7, 2024) sold at Walmart, Target, grocery stores, and numerous other retailers. The packaging has been changed.
Advertising
During the period when Art Linkletter was a part-owner of the Formula 409 brand, he was the commercial spokesperson.
Throughout the early 1970s, commercials featured Betty Boop.
In the late 1990s to the early 2000s, a cover of "409", a song from the Beach Boys' 1962 album Surfin' Safari, was used. The song's title refers to the name of the Chevrolet engine.
One commercial from 2005 shows a fictional Formula 410. As a character hits the trigger, electricity shoots out instead of spray. The announcer says, "Because the world is not ready for Formula 410, there's Formula 409".
In popular culture
Detroit rock group Electric Six released a song named for the product on their 2008 album Flashy. The song references how the product might be used to "clean your kitchen". The music video features band members using spray bottles of Formula 409 Glass and Surface Cleaner to clean the windows of the Lafayette Coney Island restaurant in downtown Detroit. The band Death Cab for Cutie's song 'What Sarah Said' mentions the product.
In "Mind's Eye", a 1998 season 5 episode of the TV show The X-Files, a detective claims that Marty Glen (played by guest star Lili Taylor), a suspect and the villain of the week, had been found "at the scene doing a Formula 409", i.e. cleaning a bathroom that had been the scene of a murder. The product also appears in the title of Green Day's song "409 in Your Coffeemaker", which appears on their 1990 EP Slappy.
References
External links
Official site
Cleaning products
Clorox brands
Products introduced in 1957 | Formula 409 | [
"Chemistry"
] | 814 | [
"Cleaning products",
"Products of chemical industry"
] |
8,810,974 | https://en.wikipedia.org/wiki/Onverwacht%20Group | The Onverwacht Group or Onverwacht Series is a series of greenstone belts and volcanic rock formations from the Archean Eon in the Kaapvaal Craton in South Africa and Eswatini.
A well known part of the Onverwacht Series is visible in the Komati valley, located in the east of the Transvaal region.
Subdivision
The Onverwacht Group can be divided into two subgroups with six formations:
Geluk Subgroup
Swartkoppie Formation
Kromberg Formation
Hooggenoeg Formation
Tjakastad Subgroup
Komati Formation
Theespruit Formation
Sandspruit Formation
The fossils found in the Onverwacht Series are among the oldest found on Earth.
See also
Archean life in the Barberton Greenstone Belt
Warrawoona Group
References
Bibliography
External links
The Barberton Greenstone Belt and Komatiites (in German)
Geologic groups of Africa
Geologic formations of South Africa
Geology of Eswatini
Archean Africa
Fossiliferous stratigraphic units of Africa
Paleontology in South Africa
Origin of life | Onverwacht Group | [
"Biology"
] | 222 | [
"Biological hypotheses",
"Origin of life"
] |
8,811,781 | https://en.wikipedia.org/wiki/Remontoire | In mechanical horology, a remontoire (from the French remonter, meaning 'to wind') is a small secondary source of power, a weight or spring, which runs the timekeeping mechanism and is itself periodically rewound by the timepiece's main power source, such as a mainspring. It was used in a few precision clocks and watches to place the source of power closer to the escapement, thereby increasing the accuracy by evening out variations in drive force caused by unevenness of the friction in the geartrain. In spring-driven precision clocks, a gravity remontoire is sometimes used to replace the uneven force delivered by the mainspring running down by the more constant force of gravity acting on a weight. In turret clocks, it serves to separate the large forces needed to drive the hands from the modest forces needed to drive the escapement which keeps the pendulum swinging. A remontoire should not be confused with a maintaining power spring, which is used only to keep the timepiece going while it is being wound.
How it works
Remontoires are used because the timekeeping mechanism in clocks and watches, the pendulum or balance wheel, is never isochronous; its rate is affected by changes in the drive force applied to it. In spring-driven timepieces, the drive force declines as the mainspring runs down. In weight-driven clocks the drive force, provided by a weight suspended by a cord, is more constant, but imperfections in the gear train and variations in lubrication also cause small variations. In turret clocks, the large hands, which are attached to the clock's wheel train, are exposed to the weather on the outside of the tower, so winds and accumulations of ice and snow apply disturbing forces to the hands, which are passed on to the wheel train.
With a remontoire, the only force applied to the clock's escapement is that of the remontoire's spring or weight, so that it is isolated from any variations in the main power source or wheel train, which is just used to rewind the remontoire. Remontoires are designed to rewind frequently, at intervals between one second and an hour. The rewinding process is triggered automatically when the remontoire's weight or spring reaches the end of its power. This frequent rewinding is another source of accuracy, because it averages out any variations in the clock's rate due to changes in the force of the remontoire itself. If the rate of the clock varies as the remontoire spring runs down, this variation will be repeated again and again, each time the remontoire goes through its cycle, so it will have no effect on the long term rate of the clock.
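A toy numerical sketch of this smoothing effect (the linear torque models and all numbers here are illustrative assumptions, not measurements of any real movement):

def mainspring_torque(t, run_time=8 * 3600.0, full=1.00, empty=0.60):
    # A plain mainspring sags steadily as it runs down over run_time seconds.
    return full - (full - empty) * min(t / run_time, 1.0)

def remontoire_torque(t, rewind_interval=30.0, full=1.00, empty=0.98):
    # A remontoire spring is rewound every rewind_interval seconds,
    # so it sags only slightly within each short cycle.
    phase = (t % rewind_interval) / rewind_interval
    return full - (full - empty) * phase

# After four hours the bare mainspring has lost about 20% of its torque,
# while the remontoire output never varies by more than ~2% per cycle.
t = 4 * 3600.0
print(mainspring_torque(t), remontoire_torque(t))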
History
The gravity remontoire was invented by the Swiss clockmaker Jost Bürgi around 1595. Bürgi's "Kalenderuhr" (a spring-driven calendar desk clock with a three-month running time) is usually considered the oldest surviving clock with a remontoire, even though it does not provide power to the escapement during the few seconds of the daily cycle in which the remontoire weight is wound up by the spring. Today remontoire mechanisms are all designed to deliver power to the escapement during the remontoire reset cycle.
The spring remontoire was invented by English clockmaker John Harrison during development of his H2 marine chronometer in 1739. Harrison's working drawing of the device is preserved in the Library of the Worshipful Company of Clockmakers in London, England.
Many French and Swiss pocketwatches after 1860 were stamped on the back with the word Remontoire. This merely meant that they didn't have to be wound with a key (i.e. they were wound by the then-novel winding crown inside the pendant). Etymologically the term is correct, the mainspring is "rewound" by some other force than a key, but these watches usually do not contain a remontoire as the word is used today.
Types
Remontoires are distinguished by their power source:
A gravity remontoire is one that uses a weight for power. It is used in precision pendulum clocks.
A spring remontoire uses a spring. It is the only type which can be used in watches, since the force of a weight would be disturbed by motions of the wearer's wrist.
An electric remontoire can be either a gravity or spring type. In it, the weight or spring is rewound electrically, with a motor or solenoid. It is used in clocks with traditional mechanical movements which are run on electricity.
They can also be classified by where in the wheel train the remontoire is located:
An escapement remontoire applies its force directly to the escape wheel of the escapement. Spring remontoires were usually of this type.
A train remontoire applies its force to one of the wheels upstream from the escapement, usually to the wheel that drives the escape wheel.
Electric remontoires in automobile clocks
Before the common use of electronic clocks in automobiles, automobile clocks had mechanical movements, powered by an electric remontoire. A low power drive spring would be wound every few minutes by a plunger in a solenoid, powered by the vehicle's service battery and activated by a switch when the spring tension got too low. Such clocks were, however, notoriously inaccurate, typically being made as cheaply as possible.
Many Rover (P4 to P6), Ford (Mk1 Escort, Mk2 Cortina, and Mk1 Capri GT/RS), and Triumph (Dolomite, 2000/2500, and Stag), as well as some Jaguar (S3 E-type), Daimler (DS420), and Aston Martin (V8) cars were fitted with Kienzle clocks that were wound by such electric remontoires.
Footnotes
External links
WAGNER DETAIL
Horology | Remontoire | [
"Physics"
] | 1,233 | [
"Spacetime",
"Horology",
"Physical quantities",
"Time"
] |
8,812,149 | https://en.wikipedia.org/wiki/CXCL2 | Chemokine (C-X-C motif) ligand 2 (CXCL2) is a small cytokine belonging to the CXC chemokine family that is also called macrophage inflammatory protein 2-alpha (MIP2-alpha), Growth-regulated protein beta (Gro-beta) and Gro oncogene-2 (Gro-2). CXCL2 is 90% identical in amino acid sequence to the related chemokine CXCL1. This chemokine is secreted by monocytes and macrophages and is chemotactic for polymorphonuclear leukocytes and hematopoietic stem cells. The gene for CXCL2 is located on human chromosome 4 in a cluster of other CXC chemokines. CXCL2 mobilizes cells by interacting with a cell surface chemokine receptor called CXCR2.
CXCL2, like related chemokines, is also a powerful neutrophil chemoattractant and is involved in many immune responses, including wound healing, cancer metastasis, and angiogenesis. A study published in 2013 tested the role of CXCL2, CXCL3, and CXCL1 in the migration of airway smooth muscle cells (ASMCs), which plays a significant role in asthma. The results of this study showed that CXCL2 and CXCL3 both help mediate normal and asthmatic ASMC migration, through different mechanisms.
Clinical development
CXCL2 in combination with the CXCR4 inhibitor plerixafor rapidly mobilizes hematopoietic stem cells into the peripheral blood.
This rapid peripheral blood stem cell mobilization regimen entered Phase 2 clinical trials in 2021 in development by Magenta Therapeutics as a new method to collect stem cells for bone marrow transplantation.
References
Cytokines | CXCL2 | [
"Chemistry"
] | 406 | [
"Cytokines",
"Signal transduction"
] |
8,812,794 | https://en.wikipedia.org/wiki/Molecular%20replacement | Molecular replacement (MR) is a method of solving the phase problem in X-ray crystallography. MR relies upon the existence of a previously solved protein structure which is similar to our unknown structure from which the diffraction data is derived. This could come from a homologous protein, or from the lower-resolution protein NMR structure of the same protein.
The first goal of the crystallographer is to obtain an electron density map, the density being related to the diffracted waves as follows:
ρ(x, y, z) = (1/V) Σhkl |F(hkl)| exp[iΦ(hkl)] exp[−2πi(hx + ky + lz)]
With usual detectors only the intensity is measured, and all the information about the phase (Φ) is lost. Then, in the absence of phases (Φ), we are unable to complete the shown Fourier transform relating the experimental data from X-ray crystallography (in reciprocal space) to real-space electron density, into which the atomic model is built. MR tries to find, among known structures, the model that best fits the experimental intensities.
Principles of Patterson-based molecular replacement
We can derive a Patterson map from the intensities; this is an interatomic vector map created by squaring the structure factor amplitudes and setting all phases to zero. This vector map contains a peak for each atom related to every other atom, with a large peak at (0, 0, 0), where vectors relating atoms to themselves "pile up". Such a map is far too noisy to derive any high-resolution structural information; however, if we generate Patterson maps for the data derived from our unknown structure and from the structure of a previously solved homologue, in the correct orientation and position within the unit cell, the two Patterson maps should be closely correlated. This principle lies at the heart of MR, and can allow us to infer information about the orientation and location of an unknown molecule within its unit cell.
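A minimal Python sketch of the Patterson construction (the toy point-atom "structure" is an illustrative assumption; real programs work with measured amplitudes on proper crystallographic grids):

import numpy as np

def patterson_map(intensities):
    # The Patterson map is the Fourier transform of the intensities
    # |F|^2 with all phases set to zero, i.e. the autocorrelation
    # of the electron density.
    return np.real(np.fft.ifftn(intensities))

# Toy example: two point atoms on a 16x16x16 grid.
rho = np.zeros((16, 16, 16))
rho[2, 3, 4] = 1.0
rho[5, 5, 5] = 1.0
F = np.fft.fftn(rho)
P = patterson_map(np.abs(F) ** 2)
# Large peak at the origin, plus peaks at the interatomic vector
# (3, 2, 1) and its negative, exactly as described above.
print(P[0, 0, 0], P[3, 2, 1])  # -> 2.0, 1.0 (up to rounding)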
Due to historic limitations in computing power, an MR search is typically divided into two steps: rotation and translation.
Rotation function
In the rotation function, our unknown Patterson map is compared to Patterson maps derived from our known homologue structure in different orientations. Historically, R-factors and/or correlation coefficients were used to score the rotation function; however, modern programs use maximum-likelihood-based algorithms. The highest correlations (and therefore scores) are obtained when the two structures (known and unknown) are in similar orientations; these can then be output in Euler angles or spherical polar angles.
Translation function
In the translation function, the now correctly oriented known model can be correctly positioned by translating it to the correct co-ordinates within the asymmetric unit. This is accomplished by moving the model, calculating a new Patterson map, and comparing it to the unknown-derived Patterson map. This brute-force search is computationally expensive and fast translation functions are now more commonly used. Positions with high correlations are output in Cartesian coordinates.
Using de novo predicted structures in molecular replacement
With the improvement of de novo protein structure prediction, many protocols, including MR-Rosetta, QUARK, AWSEM-Suite and I-TASSER-MR, can generate many native-like decoy structures that are useful for solving the phase problem by molecular replacement.
The next step
Following this, we should have correctly oriented and translated phasing models, from which we can derive phases which are (hopefully) accurate enough to derive electron density maps. These can be used to build and refine an atomic model of our unknown structure.
References
External links
Phaser – One of the most commonly used molecular replacement programmes.
Molrep – Molecular replacement package within CCP4
Phaser article at PDBe – A helpful public domain introduction to the topic.
X-ray crystallography | Molecular replacement | [
"Chemistry",
"Materials_science"
] | 743 | [
"X-ray crystallography",
"Crystallography"
] |
8,813,046 | https://en.wikipedia.org/wiki/Cr%C3%A8che%20%28zoology%29 | In zoology, a crèche (from a French term for childcare) is an animal behaviour where offspring are cared for as a group by multiple females. Many species such as common eiders, lions, and penguins form crèches and exhibit group behaviours.
Crèches can serve different functions and purposes depending on the species and the environment. For example, some crèches may aid in defence while other crèches may aid in feeding and protection from harsh weather conditions. This form of group living has evolved to become advantageous to the species. Studies have shown that by participating in group living, species will increase their inclusive fitness since their young will be in a better condition to reproduce and carry on the line of descendants in the species.
Behaviour in eider ducks
In the common eider population, after laying her eggs, the mother will incubate them until they hatch. The mother will hear a signal from the juveniles that causes her to move away and open the nest so the eggs can hatch safely. Once the eggs have hatched, the mother will either abandon her young, care for her young alone, or join a multi-female crèche. In the common eider species, if the crèche group behaviour is followed, the crèche forms as soon as the juveniles leave the nest, and the group behaviour lasts for a long period as the mother provides parental care to her young as they develop. Studies have shown that, while the parental care mode can change over the years, 46% of female eiders will care for their young in a multi-female crèche. Female eiders can care for their young through a true crèche or a transient crèche. In a true crèche, the mother will choose a select group of females with whom to live and care for her young for a long period of time. Contrastingly, in a transient crèche, the female and her young will not stay with the same group for a long duration; they move through different crèches rather than staying with one permanent group. These transient crèches normally form about two weeks after the juveniles hatch, so the young have time to experience social interaction with their mother and siblings first. The females and young in true crèches showed a higher level of overall condition than those in transient crèches. Another study provided evidence that common eiders who do not join a crèche maintain the best condition throughout development compared to those who do.
Behaviour in lions
Crèche behaviour also develops in lions. For the first four to six weeks of development, mothers will care for their young on their own to make sure they are getting the proper care and nutrition. Once the cubs reach six weeks, female pride-mate mothers will group together to form a crèche. Mothers will form this crèche with other mothers who have cubs of the same age. These crèches can range from two to nine mothers, but they average around four to five. Females and their young will remain together in these crèches until the young have reached about two years of age. Studies show that mothers in crèches of three to four females may suffer from low food intake. The main advantage of crèches in lions is defence. The mothers in a crèche will work together to defend their cubs and protect them from nomadic male lions or any other predators that may approach the pride. The greater the number of females in the crèche, the greater the rate of male takeovers. Being a member of a crèche provides safety from predators for the cubs and ensures that the mother will forage in a group size close to optimum. Studies have shown that mothers keep their cubs in crèche formations to initiate highly stable care groups that aid in defence. While crèches are good for defence, they have a contrasting impact on food intake. If there is a very high number of cubs in a crèche relative to mothers, the cubs can become severely undernourished. As well, female lions without cubs will avoid a crèche, as they would experience a low rate of food intake in that group-living arrangement. Studies have also shown that living in a crèche environment does not guarantee increased access to and retrieval of resources. When lions are nursing, cubs raised in a crèche do not gain more milk than their conspecifics who do not live in a crèche.
Crèche behaviour in penguins
The crèche group behaviour will also be seen in many species of penguins. This behaviour will occur when multiple adult penguins rear their chicks together in a group formation. In the majority of penguin crèches there will be more chicks than adults. The main advantage of the crèche formation in penguins is to aid in thermoregulation but the formation also helps prevent predation and aggression. While living in a crèche the penguin chicks will be reared in the presence of multiple adults and therefore will be protected from aggressive adults or predators. The largest crèche formations are seen when weather conditions are harsh. These harsh conditions normally include very low temperatures and high humidity, wind speeds and cloud cover. During these times in particular, there will be increased contact between adults and chicks as they gather together to provide warmth to one another to aid in thermoregulation.
References
Evolutionary biology
Social systems
Bird behavior | Crèche (zoology) | [
"Biology"
] | 1,088 | [
"Evolutionary biology",
"Behavior by type of animal",
"Behavior",
"Ethology stubs",
"Ethology",
"Bird behavior"
] |
8,813,490 | https://en.wikipedia.org/wiki/Uniform%20antiprismatic%20prism | In 4-dimensional geometry, a uniform antiprismatic prism or antiduoprism is a uniform 4-polytope with two uniform antiprism cells in two parallel 3-space hyperplanes, connected by uniform prism cells between pairs of faces. The symmetry of a p-gonal antiprismatic prism is [2p,2+,2], order 8p.
A p-gonal antiprismatic prism or p-gonal antiduoprism has 2 p-gonal antiprism cells, 2 p-gonal prism cells, and 2p triangular prism cells. It has 4p equilateral triangle faces, 4p square faces, and 4 regular p-gon faces. It has 10p edges and 4p vertices.
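These counts can be checked mechanically; the following minimal Python sketch simply encodes the formulas above and verifies the Euler relation V − E + F − C = 0 satisfied by convex 4-polytopes:

def antiduoprism_counts(p):
    # Element counts of a p-gonal antiprismatic prism.
    cells = 2 + 2 + 2 * p         # 2 antiprisms, 2 p-gonal prisms, 2p triangular prisms
    faces = 4 * p + 4 * p + 4     # 4p triangles, 4p squares, 4 p-gons
    edges = 10 * p
    vertices = 4 * p
    assert vertices - edges + faces - cells == 0  # Euler characteristic of a 4-polytope
    return cells, faces, edges, vertices

print(antiduoprism_counts(5))  # pentagonal antiduoprism: (14, 44, 50, 20)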
Convex uniform antiprismatic prisms
There is an infinite series of convex uniform antiprismatic prisms, starting with the digonal antiprismatic prism, which is a tetrahedral prism with two of the cells degenerated into squares. The triangular antiprismatic prism is the first nondegenerate form, which is also an octahedral prism. The remainder are unique uniform 4-polytopes.
Star antiprismatic prisms
There are also star forms following the set of star antiprisms, starting with the pentagram {5/2}.
Square antiprismatic prism
A square antiprismatic prism or square antiduoprism is a convex uniform 4-polytope. It is formed as two parallel square antiprisms connected by cubes and triangular prisms. The symmetry of a square antiprismatic prism is [8,2+,2], order 32. It has 16 triangular faces, 16 square faces, and 4 additional square faces shared by the antiprisms and the square prisms. It has 40 edges and 16 vertices.
Pentagonal antiprismatic prism
A pentagonal antiprismatic prism or pentagonal antiduoprism is a convex uniform 4-polytope. It is formed as two parallel pentagonal antiprisms connected by cubes and triangular prisms. The symmetry of a pentagonal antiprismatic prism is [10,2+,2], order 40. It has 20 triangle, 20 square and 4 pentagonal faces. It has 50 edges, and 20 vertices.
Hexagonal antiprismatic prism
A hexagonal antiprismatic prism or hexagonal antiduoprism is a convex uniform 4-polytope. It is formed as two parallel hexagonal antiprisms connected by cubes and triangular prisms. The symmetry of a hexagonal antiprismatic prism is [12,2+,2], order 48. It has 24 triangle, 24 square and 4 hexagon faces. It has 60 edges, and 24 vertices.
Heptagonal antiprismatic prism
A heptagonal antiprismatic prism or heptagonal antiduoprism is a convex uniform 4-polytope. It is formed as two parallel heptagonal antiprisms connected by cubes and triangular prisms. The symmetry of a heptagonal antiprismatic prism is [14,2+,2], order 56. It has 28 triangle, 28 square and 4 heptagonal faces. It has 70 edges, and 28 vertices.
Octagonal antiprismatic prism
An octagonal antiprismatic prism or octagonal antiduoprism is a convex uniform 4-polytope (four-dimensional polytope). It is formed as two parallel octagonal antiprisms connected by cubes and triangular prisms. The symmetry of an octagonal antiprismatic prism is [16,2+,2], order 64. It has 32 triangle, 32 square and 4 octagonal faces. It has 80 edges and 32 vertices.
See also
Duoprism
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
Norman Johnson Uniform Polytopes, Manuscript (1991)
External links
Uniform 4-polytopes | Uniform antiprismatic prism | [
"Physics"
] | 840 | [
"Uniform 4-polytopes",
"Uniform polytopes",
"Symmetry"
] |
8,813,666 | https://en.wikipedia.org/wiki/Protoporphyrin%20IX | Protoporphyrin IX is an organic compound, classified as a porphyrin, that plays an important role in living organisms as a precursor to other critical compounds like heme (hemoglobin) and chlorophyll. It is a deeply colored solid that is not soluble in water. The name is often abbreviated as PPIX.
Protoporphyrin IX contains a porphine core, a tetrapyrrole macrocycle with a marked aromatic character. Protoporphyrin IX is essentially planar, except for the N-H bonds that are bent out of the plane of the rings, in opposite (trans) directions.
Nomenclature
The general term protoporphyrin refers to porphine derivatives that have the outer hydrogen atoms in the four pyrrole rings replaced by other functional groups. The prefix proto often means 'first' in science nomenclature (such as carbon protoxide), hence Hans Fischer is thought to have coined the name protoporphyrin as the first class of porphyrins. Fischer described iron-deprived heme becoming the "proto-" porphyrin, particularly in reference to Hugo Kammerer's porphyrin. In modern times, 'proto-' specifies a porphyrin species bearing methyl, vinyl, and carboxyethyl/propionate side groups.
Fischer also generated the Roman numeral naming system, which includes 15 protoporphyrin analogs; the naming system is not systematic, however. An alternative name for heme is iron protoporphyrin IX (iron PPIX). PPIX contains four methyl groups (M), two vinyl groups (V), and two propionic acid groups (P). The suffix "IX" indicates that these chains occur in the circular order MV-MV-MP-PM around the outer cycle at the following respective positions: c2,c3-c7,c8-c12,c13-c17,c18.
The methine bridges of PPIX are named alpha (c5), beta (c10), gamma (c15), and delta (c20). In the context of heme, metabolic biotransformation by heme oxygenase results in the selective opening of the alpha-methine bridge to form biliverdin/bilirubin. In this case, the resulting bilin carries the suffix IXα which indicates the parent molecule was protoporphyrin IX cleaved at the alpha position. Non-enzymatic oxidation may result in the ring opening at other bridge positions. The use of Greek letters in this context originates from the pioneering work of Georg Barkan in 1932.
Properties
When UV light is shone on the compound, it fluoresces with a bright red color.
It is also the component in eggshells that gives them their characteristic brown color.
Natural occurrence
The compound is encountered in nature in the form of complexes where the two inner hydrogen atoms are replaced by a divalent metal cation. When complexed with an iron(II) (ferrous) cation, the molecule is called heme. Hemes are prosthetic groups in some important proteins. These heme-containing proteins include hemoglobin, myoglobin, and cytochrome c. Complexes can also be formed with other metal ions, such as zinc.
Biosynthesis
The compound is synthesized from acyclic precursors via a mono-pyrrole (porphobilinogen) then a tetrapyrrole (a porphyrinogen, specifically uroporphyrinogen III). This precursor is converted to protoporphyrinogen IX, which is oxidized to protoporphyrin IX. The last step is mediated by the enzyme protoporphyrinogen oxidase.
Protoporphyrin IX is an important precursor to biologically essential prosthetic groups such as heme, cytochrome c, and chlorophylls. As a result, a number of organisms are able to synthesize this tetrapyrrole from basic precursors such as glycine and succinyl-CoA, or glutamic acid. Despite the wide range of organisms that synthesize protoporphyrin IX, the process is largely conserved from bacteria to mammals with a few distinct exceptions in higher plants.
In the biosynthesis of those molecules, the metal cation is inserted into protoporphyrin IX by enzymes called chelatases. For example, ferrochelatase converts the compound into heme B (i.e. Fe-protoporphyrin IX or protoheme IX). In chlorophyll biosynthesis, the enzyme magnesium chelatase converts it into Mg-protoporphyrin IX.
Described metalloprotoporphyrin IX derivatives
Protoporphyrin IX reacts with iron salts in air to give the complex FeCl(PPIX). Heme coordinated with chloride is known as hemin. Many metals other than Fe form heme-like complexes when coordinated to PPIX. Of particular interest are cobalt derivatives, because they also function as oxygen carriers. Other metals, such as nickel, tin, and chromium, have been investigated for their therapeutic value.
Palepron is the disodium salt of protoporphyrin IX.
History
Laidlaw may have first isolated PPIX in 1904.
Clinical importance
Protoporphyrin IX fluorescence from 5-ALA administration is used in fluorescent-guided surgery of glioblastoma.
See also
Carbon monoxide-releasing molecules
Heme oxygenase
Biosynthesis of chlorophylls
Biosynthesis of hemes
References
Porphyrins | Protoporphyrin IX | [
"Chemistry"
] | 1,167 | [
"Porphyrins",
"Biomolecules"
] |
8,813,712 | https://en.wikipedia.org/wiki/CXCL3 | Chemokine (C-X-C motif) ligand 3 (CXCL3) is a small cytokine belonging to the CXC chemokine family that is also known as GRO3 oncogene (GRO3), GRO protein gamma (GROg) and macrophage inflammatory protein-2-beta (MIP2b). CXCL3 controls migration and adhesion of monocytes and mediates its effects on its target cell by interacting with a cell surface chemokine receptor called CXCR2. More recently, it has been shown that Cxcl3 cell-autonomously regulates the migration of the precursors of cerebellar granule neurons toward the internal layers of the cerebellum during cerebellar morphogenesis. Moreover, reduced expression of Cxcl3 in cerebellar granule neuron precursors greatly enhances the frequency of medulloblastoma, the tumor of the cerebellum: reduced Cxcl3 expression forces the cerebellar granule neuron precursors to remain at the surface of the cerebellum, where they proliferate extensively under the stimulus of Sonic hedgehog, becoming targets of transforming insults. Remarkably, treatment with CXCL3 completely prevents the growth of medulloblastoma lesions in a Shh-type mouse model of medulloblastoma. Thus, CXCL3 is a target for medulloblastoma therapy. Cxcl3 is directly transcriptionally regulated by BTG2.
The gene for CXCL3 is located on chromosome 4 in a cluster of other CXC chemokines.
References
External links
Cytokines | CXCL3 | [
"Chemistry"
] | 373 | [
"Cytokines",
"Signal transduction"
] |
8,813,868 | https://en.wikipedia.org/wiki/Frosty%20Leo%20Nebula | The Frosty Leo Nebula is a protoplanetary nebula (PPN) located roughly 3000 light-years away from Earth in the direction of the constellation Leo. It is a spectral bipolar nebula. Its central star, by itself called Frosty Leo, is of optical spectral type K7II. The nebula is unusual in that it has an extremely deep absorption feature at 3.1 μm and lies more than 900 pc above the plane of our galaxy. Further, as of 1990, it has the only known PPN circumstellar outflow in which crystalline ice dominates the long-wavelength emission spectrum and is the only known PPN with point-reflection-symmetric deviations from axial symmetry.
Characteristics
The Frosty Leo Nebula has two lobes, separated by 2, between which is an almost edge-on dust ring. It also has two relatively faint but prominent compact nebulosities, or ansae, separated by ~23 along the polar axis of the PPN. The PPN as a whole has an hourglass-like shape. It has an inclination angle of 15° relative to the plane of the sky. Its molecular envelope is expanding at a rate of ~25 km/s.
Observation history
This PPN was first noticed in the IRAS survey due to its exceptionally cold IRAS color temperatures. It also has a uniquely sharp maximum at 60 μm.
Point symmetry
It is the first bipolar PPN known to have point reflection symmetry (all others being axially symmetric). Point symmetry is a fairly common trait of planetary nebulae, as found in NGC 2022, NGC 2371-2, NGC 6309, the Cat's Eye Nebula, NGC 6563, the Dumbbell Nebula, the Saturn Nebula, A24, and Hb5. It has been postulated that point symmetry is due either to the bipolar outflow being directed by a precessing disc or to a precessing common-envelope binary.
Naming
IRAS 09371+1212 was dubbed the "Frosty Leo Nebula" because of the interpretation of the object's extremely unusual far-infrared spectrum, namely that water is largely depleted in its gaseous state by ice condensation onto grains, and because of its location in the Leo constellation. This interpretation was subsequently verified in 1988 by three independent papers. Observations in the band between 35 and 65 μm further showed that very cold (<50 K) silicate dust grains, abundantly coated with crystalline ice, are responsible for the 60-μm excess.
Notes
References
External links
Image of Frosty Leo Nebula.
Protoplanetary nebulae
Leo (constellation) | Frosty Leo Nebula | [
"Astronomy"
] | 521 | [
"Leo (constellation)",
"Constellations"
] |
8,813,885 | https://en.wikipedia.org/wiki/Octahedral%20prism | In geometry, an octahedral prism is a convex uniform 4-polytope. This 4-polytope has 10 polyhedral cells: 2 octahedra connected by 8 triangular prisms.
Alternative names
Octahedral dyadic prism (Norman W. Johnson)
Ope (Jonathan Bowers, for octahedral prism)
Triangular antiprismatic prism
Triangular antiprismatic hyperprism
Coordinates
It is a Hanner polytope whose vertex coordinates are obtained by permuting the first 3 coordinates of:
([±1,0,0]; ±1)
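A minimal Python sketch that enumerates these vertices (the enumeration style is just one convenient choice):

from itertools import permutations

vertices = set()
for w in (1, -1):                      # the two parallel hyperplanes
    for sign in (1, -1):
        for xyz in permutations((sign, 0, 0)):
            vertices.add(xyz + (w,))

# Six octahedron vertices in each of the two hyperplanes w = +1 and w = -1.
assert len(vertices) == 12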
Structure
The octahedral prism consists of two octahedra connected to each other via 8 triangular prisms. The triangular prisms are joined to each other via their square faces.
Projections
The octahedron-first orthographic projection of the octahedral prism into 3D space has an octahedral envelope. The two octahedral cells project onto the entire volume of this envelope, while the 8 triangular prismic cells project onto its 8 triangular faces.
The triangular-prism-first orthographic projection of the octahedral prism into 3D space has a hexagonal prismic envelope. The two octahedral cells project onto the two hexagonal faces. One triangular prismic cell projects onto a triangular prism at the center of the envelope, surrounded by the images of 3 other triangular prismic cells to cover the entire volume of the envelope. The remaining four triangular prismic cells are projected onto the entire volume of the envelope as well, in the same arrangement, except with opposite orientation.
Related polytopes
It is the second in an infinite series of uniform antiprismatic prisms.
It is one of 18 uniform polyhedral prisms created by using uniform prisms to connect pairs of parallel Platonic solids and Archimedean solids.
It is one of four four-dimensional Hanner polytopes; the other three are the tesseract, the 16-cell, and the dual of the octahedral prism (a cubical bipyramid).
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
Norman Johnson Uniform Polytopes, Manuscript (1991)
External links
Uniform 4-polytopes | Octahedral prism | [
"Physics"
] | 479 | [
"Uniform 4-polytopes",
"Uniform polytopes",
"Symmetry"
] |
8,815,086 | https://en.wikipedia.org/wiki/List%20of%20sequenced%20archaeal%20genomes | This list of sequenced archaeal genomes contains all the archaea known to have publicly available complete genome sequences that have been assembled, annotated and deposited in public databases. Methanococcus jannaschii was the first archaeon whose genome was sequenced, in 1996.
Currently in this list there are 39 genomes belonging to Crenarchaeota species, 105 belonging to the Euryarchaeota, 1 each belonging to the Korarchaeota and the Nanoarchaeota, 3 belonging to the Thaumarchaeota, and 1 genome belonging to an unclassified Archaea, totalling 150 archaeal genomes.
Crenarchaeota
Acidilobales
Desulfurococcales
Sulfolobales
Thermoproteales
Euryarchaeota
Archaeoglobi
Halobacteria
Methanobacteria
Methanococci
Methanomicrobia
Methanopyri
Thermococci
Thermoplasmata
Unclassified Euryarchaeota
Korarchaeota
Nanoarchaeota
Thaumarchaeota
Cenarchaeales
Nitrosopumilales
Unclassified Archaea
See also
Genome project
Human microbiome project
Lists of sequenced genomes
References
External links
GOLD:Genomes OnLine Database v 2.0
SUPERFAMILY comparative genomics database Includes genomes of completely sequenced archaea, and sophisticated datamining plus visualisation tools for analysis
Archaea biology
Archaeal
Biology-related lists | List of sequenced archaeal genomes | [
"Engineering",
"Biology"
] | 309 | [
"Archaea",
"Lists of sequenced genomes",
"Genetic engineering",
"Archaea biology",
"DNA sequencing",
"Genome projects"
] |
8,815,337 | https://en.wikipedia.org/wiki/PSI%20Comp%2080 | The PSI Comp 80 was a home computer sold by Powertran starting in 1979. It was sold in the form of a kit of parts for a cased single-board home computer system.
The system was based on a Z80 microprocessor addressing a mixture of 8 KB of system RAM and EPROM, plus 2 KB of video RAM.
It used a National Semiconductor MM57109N as a mathematical co-processor to speed up calculations.
History and specifications
In 1979, the British magazine Wireless World published the technical details for a "Scientific Computer". Shortly afterward the British firm Powertran used this design for their implementation, which they called the PSI Comp 80.
Ahead of its time, it incorporated a number-crunching coprocessor and a novel language embedded in EPROM called Basic Using Reverse Polish (BURP).
The monochrome video display controller could simultaneously display combinations of 32 lines of 64 characters and 128 × 64 resolution graphics, by displaying either a normal character or a "pseudo-graphics" character with pixel blocks in a 2×2 matrix, a technique similar to the one used in the TRS-80. It could later be expanded to a higher resolution, although never to colour.
Add-ons were developed for the system, including memory expansions, floppy and hard disk interfaces, various software packages, and a disk operating system, SCIDOS, which was CP/M-compatible but also included features, such as structured (pathed) disk folders, that are now very familiar to modern-day PC users.
During the mid-1980s, the designer of this system, John Adams, M.Sc., published a new version of the Scientific Computer: the SC84 (Scientific Computer of 1984). It was based upon a backplane with plug-in cards and modules, and featured a Hitachi HD64180 processor, up to 512 kbytes of RAM and a high-resolution colour graphics system.
References
PSI 2020
External links
a picture of the advertisement for the PSI Comp 80 in Wireless World
Links to Wireless World articles
SC84 article
Home computers
Computer-related introductions in 1979 | PSI Comp 80 | [
"Technology"
] | 433 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
8,815,384 | https://en.wikipedia.org/wiki/Star%20war | A star war was a decisive conflict between rival polities of the Maya civilization during the first millennium AD. The term comes from a specific type of glyph used in the Maya script, which depicts a star showering the earth with liquid droplets, or a star over a shell. It represents a verb but its phonemic value and specific meaning have not yet been deciphered. The name "star war" was coined by the epigrapher Linda Schele to refer to the glyph, and by extension to the type of conflict that it indicates.
Examples
Maya inscriptions assign episodes of Maya warfare to four distinct categories, each represented by its own glyph. Those accorded the greatest significance by the Maya were described with the "star war" glyph, representing a major war resulting in the defeat of one polity by another. This represents the installation of a new dynastic line of rulers, complete dominion of one polity over another, or a successful war of independence by a formerly dominated polity.
Losing a star war could be disastrous for the defeated party. The first recorded star war in 562, between Caracol and Tikal, resulted in a 120-year hiatus for the latter city. It saw a decline in Tikal's population, a cessation of monument erection, and the destruction of certain monuments in the Great Plaza. When Calakmul defeated Naranjo in a star war on December 24, 631, it resulted in Naranjo's ruler being tortured to death and possibly eaten. Another star war in February 744 resulted in Tikal sacking Caracol and capturing a personal god effigy of its ruler. An inscription from a monument found at Tortuguero (dating from 669) describes the aftermath of a star war: "the blood was pooled, the skulls were piled".
Astronomical connections
Mayanists have noted that the dates of recorded star wars often coincide with astronomical events involving the planet Venus, either when it was first visible in the morning or night sky or during its absence at inferior conjunction. Venus was known to Mesoamerican civilizations as the bringer of war (in contrast to the equivalent European belief assigning that characteristic to Mars). The Maya called it Chak Ek' or "Great Star" and made it the focus of detailed astronomical observation and calculation. The Dresden Codex, one of only four surviving Maya books, includes astronomical tables for calculating the position of Venus, which the codex depicts as spearing people as it passes overhead.
Seventy percent of recorded star war dates are reported to correspond with Venus's evening phase, and 84 percent of those match the first appearance of the evening star. Star wars also appear to have had a seasonal bias, clustering in the dry season from November through January. Few were recorded to have happened during the planting season and none at all during harvest time, between mid-September and late October. There may also have been a correlation between star wars and solar eclipses; some star wars at Tikal seem to have taken place shortly after eclipses, in one case in July 743 only one day after a solar eclipse.
The precise nature of the proposed link between star wars and astronomical events is unclear. It is possible that events such as eclipses may have stimulated star wars, prompting the Maya to launch star wars in the belief that they had received a favorable omen for military endeavors. They may also have had astronomical connections with planets other than Venus; many star wars appear to be correlated with the retrograde periods of Mars, Jupiter and Saturn.
Recorded star wars
A number of star wars are recorded in Mayan inscriptions dating from between 562 and 781.
See also
Flower war
References
Maya civilization
Ancient warfare
Archaeoastronomy
6th century in the Maya civilization
7th century in the Maya civilization
8th century in the Maya civilization | Star war | [
"Astronomy"
] | 793 | [
"Archaeoastronomy",
"Astronomical sub-disciplines"
] |
8,815,697 | https://en.wikipedia.org/wiki/Jet%20%28fluid%29 | A jet is a stream of fluid that is projected into a surrounding medium, usually from some kind of a nozzle, aperture or orifice. Jets can travel long distances without dissipating.
Jet fluid has higher speed compared to the surrounding fluid medium. In the case that the surrounding medium is assumed to be made up of the same fluid as the jet, and this fluid has viscosity, some of the surrounding fluid is carried along with the jet in a process called entrainment.
Some animals, notably cephalopods, move by jet propulsion, as do rocket engines and jet engines.
Applications
Liquid jets are used in many different areas. In everyday life, they can be found, for instance, coming from the water tap, the showerhead, and spray cans. In agriculture, they play a role in irrigation and in the application of crop protection products. In medicine, liquid jets are found, for example, in injection procedures and inhalers. Industry uses liquid jets for waterjet cutting, for coating materials, and in cooling towers. Atomized liquid jets are essential for the efficiency of internal combustion engines. They also play a crucial role in research, for example in the study of proteins, phase transitions, extreme states of matter, laser plasmas, and high harmonic generation, and in particle physics experiments.
Microscopic liquid jets have been studied for their potential application in noninvasive transdermal drug delivery.
See also
Plane counterflow jets
Bickley jet
Landau–Squire jet
Schlichting jet
Jet nozzle, how a jet is formed
Jet damping, a jet carries away angular momentum from a device emitting it
Jet noise
Jet of blood
Astrophysical jet
Solar jet
Lava fountain
High pressure jet
Water jet
References
Pijush K. Kundu and Ira M. Cohen, "Fluid mechanics, Volume 10", Elsevier, Burlington, MA, USA (2008),
Fluid dynamics | Jet (fluid) | [
"Chemistry",
"Engineering"
] | 414 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
8,815,878 | https://en.wikipedia.org/wiki/CXCL5 | C-X-C motif chemokine 5 (CXCL5 or ENA78) is a protein that in humans is encoded by the CXCL5 gene.
Function
The protein encoded by this gene, CXCL5, is a small cytokine belonging to the CXC chemokine family that is also known as epithelial-derived neutrophil-activating peptide 78 (ENA-78). It is produced following stimulation of cells with the inflammatory cytokines interleukin-1 or tumor necrosis factor-alpha. Expression of CXCL5 has also been observed in eosinophils, and can be inhibited with the type II interferon IFN-γ. This chemokine, which possesses angiogenic properties, stimulates the chemotaxis of neutrophils. It elicits these effects by interacting with the cell surface chemokine receptor CXCR2. The gene for CXCL5 has four exons and is located on human chromosome 4 amongst several other CXC chemokine genes. CXCL5 has been implicated in connective tissue remodelling. CXCL5 has also been described to regulate neutrophil homeostasis.
Clinical significance
CXCL5 plays a role in reducing sensitivity to sunburn pain in some subjects, and is a "potential target which can be utilized to understand more about pain in other inflammatory conditions like arthritis and cystitis". CXCL5 is well known to have chemotactic and activating functions on neutrophils, mainly during acute inflammatory responses. However, CXCL5 expression is also higher in atherosclerosis (a chronic inflammatory condition), where it is not associated with neutrophil infiltration. Instead, CXCL5 has a protective role in atherosclerosis by directly controlling macrophage foam cell formation.
References
External links
Further reading
Cytokines | CXCL5 | [
"Chemistry"
] | 408 | [
"Cytokines",
"Signal transduction"
] |
8,816,788 | https://en.wikipedia.org/wiki/History%20of%20Grandi%27s%20series |
Geometry and infinite zeros
Grandi
Guido Grandi (1671–1742) reportedly provided a simplistic account of the series in 1703. He noticed that inserting parentheses into 1 − 1 + 1 − 1 + · · · produced varying results: either
(1 − 1) + (1 − 1) + · · · = 0
or
1 + (−1 + 1) + (−1 + 1) + · · · = 1.
Grandi's explanation of this phenomenon became well known for its religious overtones: he took the fact that the series could be made to yield either 0 or 1 as an illustration of the world's creation from nothing.
In fact, the series was not an idle subject for Grandi, and he didn't think it summed to either 0 or 1. Rather, like many mathematicians to follow, he thought the true value of the series was 1⁄2 for a variety of reasons.
Grandi's mathematical treatment of 1 − 1 + 1 − 1 + · · · occurs in his 1703 book Quadratura circula et hyperbolae per infinitas hyperbolas geometrice exhibita. Broadly interpreting Grandi's work, he derived 1 − 1 + 1 − 1 + · · · = 1⁄2 through geometric reasoning connected with his investigation of the witch of Agnesi. Eighteenth-century mathematicians immediately translated and summarized his argument in analytical terms: for a generating circle with diameter a, the equation of the witch y = a³/(a² + x²) has the series expansion
y = a − x²⁄a + x⁴⁄a³ − x⁶⁄a⁵ + · · ·
and setting a = x = 1, one has 1 − 1 + 1 − 1 + · · · = 1⁄2.
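The expansion can be recovered from the geometric series; a short sketch in modern notation (not Grandi's own wording):
\[
  \frac{a^{3}}{a^{2}+x^{2}}
  \;=\; a\cdot\frac{1}{1+x^{2}/a^{2}}
  \;=\; a - \frac{x^{2}}{a} + \frac{x^{4}}{a^{3}} - \frac{x^{6}}{a^{5}} + \cdots,
  \qquad |x| < a,
\]
and at a = x = 1 the left-hand side is 1⁄2 while the right-hand side becomes 1 − 1 + 1 − 1 + · · ·.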
According to Morris Kline, Grandi started with the binomial expansion
1⁄(1 + x) = 1 − x + x² − x³ + · · ·
and substituted x = 1 to get 1 − 1 + 1 − 1 + · · · = 1⁄2. Grandi "also argued that since the sum was both 0 and 1⁄2, he had proved that the world could be created out of nothing."
Grandi offered a new explanation that 1 − 1 + 1 − 1 + · · · = 1⁄2 in 1710, both in the second edition of the Quadratura circula and in a new work, De Infinitis infinitorum, et infinite parvorum ordinibus disquisitio geometrica. Two brothers inherit a priceless gem from their father, whose will forbids them to sell it, so they agree that it will reside in each other's museums on alternating years. If this agreement lasts for all eternity between the brothers' descendants, then the two families will each have half possession of the gem, even though it changes hands infinitely often. This argument was later criticized by Leibniz.
The parable of the gem is the first of two additions to the discussion of the corollary that Grandi added to the second edition. The second repeats the link between the series and the creation of the universe by God:
Marchetti
After Grandi published the second edition of the Quadratura, his fellow countryman Alessandro Marchetti became one of his first critics. One historian charges that Marchetti was motivated more by jealousy than any other reason. Marchetti found the claim that an infinite number of zeros could add up to a finite quantity absurd, and he inferred from Grandi's treatment the danger posed by theological reasoning. The two mathematicians began attacking each other in a series of open letters; their debate was ended only by Marchetti's death in 1714.
Leibniz
With the help and encouragement of Antonio Magliabechi, Grandi sent a copy of the 1703 Quadratura to Leibniz, along with a letter expressing compliments and admiration for the master's work. Leibniz received and read this first edition in 1705, and he called it an unoriginal and less-advanced "attempt" at his calculus. Grandi's treatment of 1 − 1 + 1 − 1 + · · · would not catch Leibniz's attention until 1711, near the end of his life, when Christian Wolff sent him a letter on Marchetti's behalf describing the problem and asking for Leibniz's opinion.
Background
As early as 1674, in a minor, lesser-known writing De Triangulo Harmonico on the harmonic triangle, Leibniz mentioned the series 1 − 1 + 1 − 1 + · · · very briefly in an example:
Presumably he arrived at this series by repeated substitution:
And so on.
The series also appears indirectly in a discussion with Tschirnhaus in 1676.
Leibniz had already considered the divergent alternating series as early as 1673. In that case he argued that by subtracting either on the left or on the right, one could produce either positive or negative infinity, and therefore both answers are wrong and the whole should be finite. Two years after that, Leibniz formulated the first convergence test in the history of mathematics, the alternating series test, in which he implicitly applied the modern definition of convergence.
Solutions
In the 1710s, Leibniz described Grandi's series in his correspondence with several other mathematicians. The letter with the most lasting impact was his first reply to Wolff, which he published in the Acta Eruditorum. In this letter, Leibniz attacked the problem from several angles.
In general, Leibniz believed that the algorithms of calculus were a form of "blind reasoning" that ultimately had to be founded upon geometrical interpretations. Therefore, he agreed with Grandi that the relation 1 − 1 + 1 − 1 + · · · = 1⁄2 was well-founded, because there existed a geometric demonstration.
On the other hand, Leibniz sharply criticized Grandi's example of the shared gem, claiming that the series has no relation to the story. He pointed out that for any finite, even number of years, the brothers have equal possession, yet the sum of the corresponding terms of the series is zero.
Leibniz thought that the argument from the expansion of 1⁄(1 + x) was valid; he took it as an example of his law of continuity. Since the relation 1⁄(1 + x) = 1 − x + x² − x³ + · · · holds for all x less than 1, it should hold for x equal to 1 as well. Still, Leibniz thought that one should be able to find the sum of the series directly, without needing to refer back to the expression from which it came. This approach may seem obvious by modern standards, but it is a significant step from the point of view of the history of summing divergent series. In the 18th century, the study of series was dominated by power series, and summing a numerical series by expressing it as f(1) of some function's power series was thought to be the most natural strategy.
Leibniz begins by observing that taking an even number of terms from the series, the last term is −1 and the sum is 0:
1 − 1 = 1 − 1 + 1 − 1 = 1 − 1 + 1 − 1 + 1 − 1 = 0.
Taking an odd number of terms, the last term is +1 and the sum is 1:
1 = 1 − 1 + 1 = 1 − 1 + 1 − 1 + 1 = 1.
Now, the infinite series 1 − 1 + 1 − 1 + · · · has neither an even nor an odd number of terms, so it produces neither 0 nor 1; by taking the series out to infinity, it becomes something between those two options. There is no more reason why the series should take one value than the other, so the theory of "probability" and the "law of justice" dictate that one should take the arithmetic mean of 0 and 1, which is (0 + 1)⁄2 = 1⁄2.
Eli Maor says of this solution, "Such a brazen, careless reasoning indeed seems incredible to us today…" Kline portrays Leibniz as more self-conscious: "Leibniz conceded that his argument was more metaphysical than mathematical, but said that there is more metaphysical truth in mathematics than is generally recognized."
Charles Moore muses that Leibniz would hardly have had such confidence in his metaphysical strategy if it did not give the same result (namely 1⁄2) as other approaches. Mathematically, this was no accident: Leibniz's treatment would be partially justified when the compatibility of averaging techniques and power series was finally proven in 1880.
Reactions
When he had first raised the question of Grandi's series to Leibniz, Wolff was inclined toward skepticism along with Marchetti. Upon reading Leibniz's reply in mid-1712, Wolff was so pleased with the solution that he sought to extend the arithmetic mean method to more divergent series such as 1 − 2 + 4 − 8 + 16 − · · ·. Leibniz's intuition prevented him from straining his solution this far, and he wrote back that Wolff's idea was interesting but invalid for several reasons. For one, the terms of a summable series should decrease to zero; even 1 − 1 + 1 − 1 + · · · could be expressed as a limit of such series.
Leibniz described Grandi's series along with the general problem of convergence and divergence in letters to Nicolaus I Bernoulli in 1712 and early 1713. J. Dutka suggests that this correspondence, along with Nicolaus I Bernoulli's interest in probability, motivated him to formulate the St. Petersburg paradox, another situation involving a divergent series, in September 1713.
According to Pierre-Simon Laplace in his Essai Philosophique sur les Probabilités, Grandi's series was connected with Leibniz seeing "an image of the Creation in his binary arithmetic", and thus Leibniz wrote a letter to the Jesuit missionary Claudio Filippo Grimaldi, court mathematician in China, in the hope that Grimaldi's interest in science and the mathematical "emblem of creation" might combine to convert the nation to Christianity. Laplace remarks, "I record this anecdote only to show how far the prejudices of infancy may mislead the greatest men."
Divergence
Jacob Bernoulli
Jacob Bernoulli (1654–1705) dealt with a similar series in 1696 in the third part of his Positiones arithmeticae de seriebus infinitis. Applying Nicholas Mercator's method for polynomial long division to the ratio m⁄(m + n), he noticed that one always had a remainder. If n < m then this remainder decreases and "finally is less than any given quantity", and one has
m⁄(m + n) = 1 − n⁄m + n²⁄m² − n³⁄m³ + · · ·
If m = n, then this equation becomes
1⁄2 = 1 − 1 + 1 − 1 + · · ·
Bernoulli called this equation a "not inelegant paradox".
Varignon
Pierre Varignon (1654–1722) treated Grandi's series in his report Précautions à prendre dans l'usage des Suites ou Series infinies résultantes…. The first of his purposes for this paper was to point out the divergence of Grandi's series and expand on Jacob Bernoulli's 1696 treatment.
(Varignon's math…)
The final version of Varignon's paper is dated February 16, 1715, and it appeared in a volume of the Mémoires of the French Academy of Sciences that was itself not published until 1718. For such a relatively late treatment of Grandi's series, it is surprising that Varignon's report does not even mention Leibniz's earlier work. But most of the Précautions was written in October 1712, while Varignon was away from Paris. The Abbé Poignard's 1704 book on magic squares, Traité des Quarrés sublimes, had become a popular subject around the Academy, and the second revised and expanded edition weighed in at 336 pages. To make time to read the Traité, Varignon had to escape to the countryside for nearly two months, where he wrote on the topic of Grandi's series in relative isolation. Upon returning to Paris and checking in at the Academy, Varignon soon discovered that the great Leibniz had ruled in favor of Grandi. Having been separated from his sources, Varignon still had to revise his paper by looking up and including the citation to Jacob Bernoulli. Rather than also take Leibniz's work into account, Varignon explains in a postscript to his report that the citation was the only revision he had made in Paris, and that if other research on the topic arose, his thoughts on it would have to wait for a future report.
(Letters between Varignon and Leibniz…)
In the 1751 Encyclopédie, Jean le Rond d'Alembert echoes the view that Grandi's reasoning based on division had been refuted by Varignon in 1715. (Actually, d'Alembert attributes the problem to "Guido Ubaldus", an error that is still occasionally propagated today.)
Riccati and Bougainville
In a 1715 letter to Jacopo Riccati, Leibniz mentioned the question of Grandi's series and advertised his own solution in the Acta Eruditorum. Later, Riccati would criticize Grandi's argument in his 1754 Saggio intorno al sistema dell'universo, saying that it causes contradictions. He argues that one could just as well write but that this series has "the same quantity of zeroes" as Grandi's series. These zeroes lack any evanescent character of n, as Riccati points out that the equality is guaranteed by He concludes that the fundamental mistake is in using a divergent series to begin with:
Another 1754 publication also criticized Grandi's series on the basis of its collapse to 0. Louis Antoine de Bougainville briefly treats the series in his acclaimed 1754 textbook Traité du calcul intégral. He explains that a series is "true" if its sum is equal to the expression from which it is expanded; otherwise it is "false". Thus Grandi's series is false, because it is expanded from 1⁄(1 + 1) = 1⁄2 and yet grouping its terms in pairs collapses it to 0.
Euler
Leonhard Euler treats 1 − 1 + 1 − 1 + · · · along with other divergent series in his De seriebus divergentibus, a 1746 paper that was read to the Academy in 1754 and published in 1760. He identifies the series as being first considered by Leibniz, and he reviews Leibniz's 1713 argument based on the series 1⁄(1 + x), calling it "fairly sound reasoning", and he also mentions the even/odd median argument. Euler writes that the usual objection to the use of 1⁄(1 + a) = 1 − a + a² − a³ + · · · is that the series does not equal this value unless a is less than 1; otherwise all one can say is that
1⁄(1 + a) = 1 − a + a² − · · · ± aⁿ ∓ aⁿ⁺¹⁄(1 + a),
where the last remainder term does not vanish and cannot be disregarded as n is taken to infinity. Still writing in the third person, Euler mentions a possible rebuttal to the objection: essentially, since an infinite series has no last term, there is no place for the remainder and it should be neglected. After reviewing more badly divergent series, where he judges his opponents to have firmer support, Euler seeks to define away the issue:
Euler also used finite differences to attack 1 − 1 + 1 − 1 + · · ·. In modern terminology, he took the Euler transform of the sequence and found that it equalled 1⁄2. As late as 1864, De Morgan claims that "this transformation has always appeared one of the strongest presumptions in favour of 1 − 1 + 1 − 1 + · · · being 1⁄2."
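In modern notation, the step can be sketched with the standard Euler transform identity for alternating series (the application to Grandi's series below is a reconstruction, not Euler's own wording):
\[
  \sum_{n=0}^{\infty} (-1)^{n} a_{n}
  \;=\;
  \sum_{n=0}^{\infty} (-1)^{n}\,\frac{\Delta^{n} a_{0}}{2^{\,n+1}},
  \qquad \Delta a_{k} = a_{k+1} - a_{k}.
\]
For aₙ = 1 every forward difference Δⁿa₀ with n ≥ 1 vanishes, leaving only the first term, 1⁄2.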
Dilution and new values
Despite the confident tone of his papers, Euler expressed doubt over divergent series in his correspondence with Nicolaus I Bernoulli. Euler claimed that his attempted definition had never failed him, but Bernoulli pointed out a clear weakness: it does not specify how one should determine "the" finite expression that generates a given infinite series. Not only is this a practical difficulty, it would be theoretically fatal if a series were generated by expanding two expressions with different values. Euler's treatment of 1 − 1 + 1 − 1 + · · · rests upon his firm belief that 1⁄2 is the only possible value of the series; what if there were another?
In a 1745 letter to Christian Goldbach, Euler claimed that he was not aware of any such counterexample, and in any case Bernoulli had not provided one. Several decades later, when Jean-Charles Callet finally asserted a counterexample, it was aimed at 1 − 1 + 1 − 1 + · · ·. The background of the new idea begins with Daniel Bernoulli in 1771.
Daniel Bernoulli
Daniel Bernoulli, who accepted the probabilistic argument that 1 − 1 + 1 − 1 + · · · = 1⁄2, noticed that by inserting 0s into the series in the right places, it could achieve any value between 0 and 1. In particular, the argument suggested that
1 + 0 − 1 + 1 + 0 − 1 + 1 + 0 − 1 + · · · = 2⁄3.
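A quick way to see where 2⁄3 comes from, sketched here in later Cesàro-style terms rather than Bernoulli's own: the partial sums of the padded series cycle through 1, 1, 0, so
\[
  \frac{s_{1} + s_{2} + \dots + s_{N}}{N}
  \;\longrightarrow\;
  \frac{1 + 1 + 0}{3} = \frac{2}{3},
\]
whereas the partial sums of 1 − 1 + 1 − 1 + · · · cycle through 1, 0 and average 1⁄2.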
Callet and Lagrange
In a memorandum sent to Joseph Louis Lagrange toward the end of the century, Callet pointed out that 1 − 1 + 1 − 1 + · · · could also be obtained from the series
(1 + x)⁄(1 + x + x²) = 1 − x² + x³ − x⁵ + x⁶ − · · ·;
substituting x = 1 now suggests a value of 2⁄3, not 1⁄2.
Lagrange approved Callet's submission for publication in the Mémoires of the French Academy of Sciences, but it was never directly published. Instead, Lagrange (along with Charles Bossut) summarized Callet's work and responded to it in the Mémoires of 1799. He defended Euler by suggesting that Callet's series actually should be written with the 0 terms left in:
1 + 0x − x² + x³ + 0x⁴ − x⁵ + x⁶ + · · ·
which reduces to
1 + 0 − 1 + 1 + 0 − 1 + 1 + 0 − 1 + · · ·
instead.
19th century
The 19th century is remembered as the approximate period of Cauchy's and Abel's largely successful ban on the use of divergent series, but Grandi's series continued to make occasional appearances. Some mathematicians did not follow Abel's lead, mostly outside France, and British mathematicians especially took "a long time" to understand the analysis coming from the continent.
In 1803, Robert Woodhouse proposed that 1 − 1 + 1 − 1 + · · · summed to something called 1⁄(1 + 1),
which could be distinguished from 1⁄2. Ivor Grattan-Guinness remarks on this proposal, "… R. Woodhouse … wrote with admirable honesty on the problems which he failed to understand. … Of course, there is no harm in defining new symbols such as 1⁄1+1; but the idea is 'formalist' in the unflattering sense, and it does not bear on the problem of the convergence of series."
Algebraic reasoning
In 1830, a mathematician identified only as "M. R. S." wrote in the Annales de Gergonne on a technique to numerically find fixed points of functions of one variable. If one can transform a problem into the form of an equation x = A + f(x), where A can be chosen at will, then
x = A + f(A + f(A + f(A + · · ·)))
should be a solution, and truncating this infinite expression results in a sequence of approximations. Conversely, given the series a − a + a − a + · · ·, the author recovers the equation
x = a − x,
to which the solution is (1⁄2)a.
M. R. S. notes that the approximations in this case are a, 0, a, 0, …, but there is no need for Leibniz's "subtle reasoning". Moreover, the argument for averaging the approximations is problematic in a wider context. For equations not of the form x = A + f(x), M. R. S.'s solutions are continued fractions, continued radicals, and other infinite expressions. In particular, the expression a⁄(a⁄(a⁄· · ·)) should be a solution of the equation x = a⁄x. Here, M. R. S. writes that based on Leibniz's reasoning, one is tempted to conclude that x is the average of the truncations a, 1, a, 1, …. This average is (1 + a)⁄2, but the solution to the equation is the square root of a.
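M. R. S.'s caveat is easy to check numerically; the following Python sketch (illustrative only, with a = 4 chosen arbitrarily) shows the iterates of x = a⁄x cycling while their average misses the true fixed point:

# Iterating x = a/x cycles between a and 1; averaging the iterates gives
# (a + 1)/2, which differs from the true fixed point sqrt(a) unless a = 1.
a = 4.0
x, iterates = a, []
for _ in range(6):
    iterates.append(x)
    x = a / x
print(iterates)        # [4.0, 1.0, 4.0, 1.0, 4.0, 1.0]
print((a + 1) / 2)     # 2.5 -- the "Leibniz-style" average of the iterates
print(a ** 0.5)        # 2.0 -- the actual solution of x = a/x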
Bernard Bolzano criticized M. R. S.' algebraic solution of the series. In reference to the step
Bolzano charged,
This comment exemplifies Bolzano's intuitively appealing but deeply problematic views on infinity. In his defense, Cantor himself pointed out that Bolzano worked in a time when the concept of the cardinality of a set was absent.
De Morgan and company
As late as 1844, Augustus De Morgan commented that if a single instance where 1 − 1 + 1 − 1 + · · · did not equal 1⁄2 could be given, he would be willing to reject the entire theory of trigonometric series.
The same volume contains papers by Samuel Earnshaw and J. R. Young dealing in part with 1 − 1 + 1 − 1 + · · ·. G. H. Hardy dismisses both of these as "little more than nonsense", in contrast to De Morgan's "remarkable mixture of acuteness and confusion"; in any case, Earnshaw got De Morgan's attention with the following remarks:
De Morgan fired back in 1864 in the same journal:
Frobenius and modern mathematics
The last scholarly article to be motivated by 1 − 1 + 1 − 1 + · · · might be identified as the first article in the modern history of divergent series. Georg Frobenius published an article titled "Ueber die Leibnitzsche Reihe" (On Leibniz's series) in 1880. He had found Leibniz's old letter to Wolff, citing it along with an 1836 article by Joseph Ludwig Raabe, who in turn drew on ideas by Leibniz and Daniel Bernoulli.
Frobenius' short paper, barely two pages, begins by quoting from Leibniz's treatment of 1 − 1 + 1 − 1 + · · ·. He infers that Leibniz was actually stating a generalization of Abel's Theorem. The result, now known as Frobenius' theorem, has a simple statement in modern terms: any series that is Cesàro summable is also Abel summable to the same sum. Historian Giovanni Ferraro emphasizes that Frobenius did not actually state the theorem in such terms, and Leibniz did not state it at all. Leibniz was defending the association of the divergent series with the value 1⁄2, while Frobenius' theorem is stated in terms of convergent sequences and the epsilon-delta formulation of the limit of a function.
Frobenius' theorem was soon followed with further generalizations by Otto Hölder and Thomas Joannes Stieltjes in 1882. Again, to a modern reader their work strongly suggests new definitions of the sum of a divergent series, but those authors did not yet make that step. Ernesto Cesàro proposed a systematic definition for the first time in 1890. Since then, mathematicians have explored many different summability methods for divergent series. Most of these, especially the simpler ones with historical parallels, sum Grandi's series to 1⁄2. Others, motivated by Daniel Bernoulli's work, sum the series to another value, and a few do not sum it at all.
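For concreteness, Cesàro's definition applied to Grandi's series is a standard computation: the partial sums are sₙ = 1 for odd n and sₙ = 0 for even n, so their arithmetic means converge,
\[
  \sigma_{N} = \frac{s_{1} + s_{2} + \dots + s_{N}}{N}
  \;\longrightarrow\; \frac{1}{2},
\]
and the series is Cesàro summable to 1⁄2, in agreement with the value defended by Leibniz and Euler.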
Notes
References
Cited primary sources
The full texts of many of the following references are publicly available on the Internet from Google Books; the Euler archive at Dartmouth College; DigiZeitschriften, a service of Deutsche Forschungsgemeinschaft; or Gallica, a service of the Bibliothèque nationale de France.
Written November 9, 1844.
Written in October 1879.
Cited secondary sources
Volume 1, Parte III, Cap. 1, "La questione della serie di Grandi (1696 - 1715)", pp. 296–345.
Reiff's German-language work "History of infinite Series" is frequently cited by other sources when they deal with the history of Grandi's series. Hardy (p. 21) calls it "useful but uninspiring and not always accurate."
D. J. Struik, editor, A source book in mathematics, 1200-1800 (Princeton University Press, Princeton, New Jersey, 1986). , (pbk). See in particular pp. 178–180 in regard to the versiera (i.e. witch of Agnesi) and Maria Gaetana Agnesi (1718–1799) of Milan, sister of the composer Maria Teresa Agnesi, the first important woman mathematician since Hypatia (fifth century A.D.).
Further reading
Grandi's series
Grandi's Series
Grandi's series, History of
Parity (mathematics) | History of Grandi's series | [
"Mathematics"
] | 4,801 | [
"Grandi's series",
"Mathematical problems",
"Mathematical paradoxes"
] |
8,817,193 | https://en.wikipedia.org/wiki/Glass%20databases | Glass databases are a collection of glass compositions, glass properties, glass models, associated trademark names, patents etc. These data were collected from publications in scientific papers and patents, from personal communication with scientists and engineers, and other relevant sources.
History
Since the beginning of scientific glass research in the 19th century, thousands of glass property-composition datasets were published. The first attempt to summarize all those data systematically was the monograph "Glastechnische Tabellen". World War II and the Cold War prevented similar efforts for many years afterwards.
In 1956, "Phase Diagrams for Ceramists" was published the first time, containing a collection of phase diagrams. This database is known today as "Phase Equilibria Diagrams".
in 1983, the "Handbook of Glass Data" was published, followed by the creation of the Japanese database Interglad in 1991. The "Handbook of Glass Data" was later digitalized and substantially expanded under the name SciGlass. Currently, SciGlass contains properties of about 400,000 glass compositions, INTERGLAD about 380,000, and "Phase Equilibria Diagrams" includes about 31,000 diagrams.
In 2019, the SciGlass data was made publicly available on GitHub under the ODC Open Database License (ODbL).
In 2023, the SciGlass database re-emerged as SciGlass Sage, offering "AI" assistance, a property predictor powered by random forest regression models, and a generator using predictive models in conjunction with genetic algorithms.
In 2024, SciGlass Next was created as an open-access web database utilizing the SciGlass data available on GitHub. The database is hosted in the public domain of Friedrich Schiller University Jena.
The website provides comprehensive documentation, including step-by-step instructions and glossaries of properties and symbols used.
Most features are covered, including:
Glasses: 422,000+ glasses and melts. Sourced from 40,000+ literature sources, including 19,700+ patents.
Data Tables: Search data and export tables for post-processing.
Data Visualization: Interactive data visualization with scatter plots, histograms, ternary plots, and curve fitting.
Authentication: Secured Single Sign-On (SSO) authentication of users.
ML Predictions (Future): Python-backed ML predictions for glass properties.
Sidebar Quick Lookup: Categories of patent index, trademark index, author index, subject index, spectral index and glass formation.
Glass database contents
The following list of glass database contents is not complete, and it may not be up to date. For full features see the references section below. All databases contain citations to the original data sources and the chemical composition of the glasses or ceramics.
SciGlass: Viscosity, density, mechanical properties, optical properties (including optical spectra), thermal expansion and other thermal properties, electrical properties, chemical durability, liquidus temperatures, crystallization characteristics, ternary diagrams of glass formation, glass property calculation methods, patent and trademark index, subject index etc.
Interglad: Viscosity, density, mechanical properties, optical properties, electrical properties, statistical analysis, liquidus temperatures, ternary property diagrams
Phase Equilibria Diagrams: Phase diagrams, including liquidus and solidus temperatures, eutectic points, crystalline phases, primary crystalline phases
Application
Experimental planning: expected properties and appropriate glass compositions can be estimated from similar data.
Calculation of glass properties based on many independent data sources.
Scientific understanding of glass composition-property relations.
Design of glass compositions that are not patented by competitors.
System design and optimization including design for purpose and design for cost.
References
Glass engineering and science
Glass chemistry
Chemical databases
Ceramic materials | Glass databases | [
"Chemistry",
"Materials_science",
"Engineering"
] | 753 | [
"Glass engineering and science",
"Glass chemistry",
"Chemical databases",
"Materials science",
"Ceramic materials",
"Ceramic engineering"
] |
8,818,619 | https://en.wikipedia.org/wiki/Sari-sari%20store | A sari-sari store, anglicized as neighborhood sundry store, is a convenience store found in the Philippines. The word sari-sari is Tagalog meaning "variety" or "sundry". Such stores occupy an important economic and social location in a Filipino community and are ubiquitous in neighborhoods and along streets. Sari-sari stores tend to be family-run and privately owned operating within the shopkeeper's residence.
Commodities are displayed in a large screen-covered or metal-barred window in front of the shop. Candies in recycled jars, canned goods, and cigarettes are displayed while cooking oil, salt, and sugar are stored at the back of the shop. Prepaid mobile phone credits are provided. The sari-sari store operates with a small revolving fund, and it generally does not offer perishable goods requiring refrigeration. The few that do have refrigerators carry soft drinks, beers, and bottled water.
Economic value
Sari-sari stores play a vital role in the Philippine economy, particularly at the grassroots level. These micro-enterprises contribute significantly to the country's domestic retail market and gross domestic product (GDP).
According to the Magna Kultura Foundation, sari-sari stores account for approximately 70% of sales of manufactured consumer food products nationwide. With an estimated 800,000 stores across the country, they hold a substantial portion of the domestic retail market. In 2011, the retail sector, consisting largely of micro, small, and medium-sized enterprises (MSMEs) like sari-sari stores, contributed 13% (₱1.3 trillion) to the Philippines' GDP of ₱9.7 trillion.
Sari-sari stores typically operate with a low markup, averaging 10%, compared to 20% for convenience stores like 7-Eleven. This makes them a popular choice for Filipinos. Although prices may be higher than those in supermarkets, sari-sari stores offer convenient access to basic commodities, especially in rural areas where larger markets are scarce.
In the Philippines, following the concept of tingi or retail, customers can buy 'units' of a product rather than a whole package, making it affordable to those with limited budgets. For example, one can buy a single cigarette rather than a whole pack.
The sari-sari store saves customers from paying extra transportation costs, especially in rural areas, since some towns can be very far from the nearest market or grocery. The store may also allow purchases on credit from its "suki" (repeat customers known to the store owners). They usually keep analogue records of their customers' outstanding balances in school notebooks or the like and demand payment on paydays. In rural areas, the stores act as trading centers where farmers and fishermen trade their products for basic articles, fuel, and other supplies.
The owners can buy grocery commodities in bulk, and then sell them in-store at a mark-up. Trucks deliver LPG tanks and soft drinks directly to the store. The store requires minimal investment, using a portion of its home as storage and display space.
The lifespan of sari-sari stores is highly variable, with many closing after a few weeks due to insufficient income or management mishandling by owners who have limited formal schooling.
Some stores operate without necessary permits, leading to legal issues. In response to both the economic impact of the COVID-19 pandemic and regulatory challenges, some local government units (LGUs) have enacted ordinances waiving taxes and certain permits for sari-sari store and carinderia owners, especially those operating illicitly without regulatory compliance, to help legitimize their businesses and further integrate them into the mainstream economy.
Since June 2024, a nationwide transition to digitalized accounting for sari-sari stores has been underway through the Department of Trade and Industry's Sari-Sari Store Advancement Program.
Social value
The Magna Kultura Foundation notes that the sari-sari store is part of Philippine culture, and it has become an integral part of every Filipino's life. It is a constant feature of residential neighborhoods in the Philippines, both in rural and urban areas, proliferating even in the poorest communities. About ninety-three percent (93%) of all sari-sari stores nationwide are located in residential communities. The neighborhood sari-sari store (a variety or general store) is part and parcel of daily life for the average Filipino.
Any essential household good that might be missing from one's pantry, from basic food items like sugar, coffee, and cooking condiments to other necessities like soap or shampoo, is most conveniently purchased from the nearby sari-sari store in economically sized quantities that are affordable to common citizens. The sari-sari store offers a place where people can meet. The benches provided in front of the store are usually occupied by local people; some men spend time drinking there while women discuss the latest local news, youths use the place to hang out, and children rest there in the afternoons after playing and buy soft drinks and snacks.
In popular culture
Pinoy rock band Eraserheads' song "Tindahan ni Aling Nena" ("Aling Nena's Store"; from the album UltraElectroMagneticPop!) tells the story of a man buying food at a sari-sari store and his attempts to court the eponymous store owner's daughter. It is described as a song about the love between young people with limited economic means.
See also
Vulcanizing shop
Toko
Warung
Bodega
Kopi tiam
Mamak stall
References
External links
Sari-sari
Culture of the Philippines
Convenience stores of the Philippines
Architecture in the Philippines
Retailing in the Philippines
Infrastructure
Building types
Buildings and structures by type
Urban studies and planning terminology | Sari-sari store | [
"Engineering"
] | 1,207 | [
"Construction",
"Buildings and structures by type",
"Infrastructure",
"Architecture"
] |
8,818,787 | https://en.wikipedia.org/wiki/Chroogomphus%20rutilus | Chroogomphus rutilus, commonly known as the brown slimecap or the copper spike, is a species of fungus in the Gomphidiaceae family. First described scientifically as Agaricus rutilus by Jacob Christian Schäffer in 1774, it was transferred to the genus Chroogomphus in 1964 by Orson K. Miller, Jr. The fungus lives ectomycorrhizally with Pinus species, and is found in Europe and North America. The fruit bodies are edible but not highly regarded.
Gomphidius viscidus is an old synonym of this mushroom.
References
Edible fungi
Fungi of Europe
Fungi of North America
Fungi described in 1774
Fungus species | Chroogomphus rutilus | [
"Biology"
] | 143 | [
"Fungi",
"Fungus species"
] |
8,818,803 | https://en.wikipedia.org/wiki/Nqthm | Nqthm is a theorem prover sometimes referred to as the Boyer–Moore theorem prover. It was a precursor to ACL2.
History
The system was developed by Robert S. Boyer and J Strother Moore, professors of computer science at the University of Texas, Austin. They began work on the system in 1971 in Edinburgh, Scotland. Their goal was to make a fully automatic, logic-based theorem prover. They used a variant of Pure LISP as the working logic.
Definitions
Definitions are formed as total recursive functions; the system makes extensive use of rewriting, together with an induction heuristic that is used when rewriting and something that they called symbolic evaluation fail.
The system was built on top of Lisp and had some very basic knowledge in what was called "Ground-zero", the state of the machine after bootstrapping it onto a Common Lisp implementation.
This is an example of the proof of a simple arithmetic theorem. The multiplication function TIMES is part of the system's stock of definitions (such an addition to the base logic is called a "satellite") and is defined to be
(DEFN TIMES (X Y)
(IF (ZEROP X)
0
(PLUS Y (TIMES (SUB1 X) Y))))
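For readers unfamiliar with the Lisp notation, the following Python transliteration (illustrative only, not part of Nqthm) shows the same total recursive definition; ZEROP tests for zero and SUB1 subtracts one:

def times(x: int, y: int) -> int:
    # (IF (ZEROP X) 0 (PLUS Y (TIMES (SUB1 X) Y)))
    if x == 0:
        return 0
    return y + times(x - 1, y)

assert times(3, 4) == 12
# The recursion terminates because the measure (COUNT X) decreases at every
# call -- exactly the fact the prover's induction machinery verifies below.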
Theorem formulation
The formulation of the theorem is also given in a Lisp-like syntax:
(prove-lemma commutativity-of-times (rewrite)
(equal (times x z) (times z x)))
Should the theorem prove to be true, it will be added to the knowledge basis of the system and can be used as a rewrite rule for future proofs.
The proof itself is given in a quasi-natural-language manner. The authors chose typical mathematical phrases for embedding the steps of the mathematical proof, which actually makes the proofs quite readable. There are macros for LaTeX that can transform the Lisp structure into more or less readable mathematical language.
The proof of the commutativity of times continues:
Give the conjecture the name *1.
We will appeal to induction. Two inductions are suggested by terms in the conjecture,
both of which are flawed. We limit our consideration to the two suggested by the
largest number of nonprimitive recursive functions in the conjecture. Since both of
these are equally likely, we will choose arbitrarily. We will induct according to
the following scheme:
(AND (IMPLIES (ZEROP X) (p X Z))
(IMPLIES (AND (NOT (ZEROP X)) (p (SUB1 X) Z))
(p X Z))).
Linear arithmetic, the lemma COUNT-NUMBERP, and the definition of ZEROP inform
us that the measure (COUNT X) decreases according to the well-founded relation
LESSP in each induction step of the scheme. The above induction scheme
produces the following two new conjectures:
Case 2. (IMPLIES (ZEROP X)
(EQUAL (TIMES X Z) (TIMES Z X))).
and after winding itself through a number of induction proofs, finally concludes that
Case 1. (IMPLIES (AND (NOT (ZEROP Z))
(EQUAL 0 (TIMES (SUB1 Z) 0)))
(EQUAL 0 (TIMES Z 0))).
This simplifies, expanding the definitions of ZEROP, TIMES, PLUS, and EQUAL, to:
T.
That finishes the proof of *1.1, which also finishes the proof of *1.
Q.E.D.
[ 0.0 1.2 0.5 ]
COMMUTATIVITY-OF-TIMES
Proofs
Many proofs have been done or confirmed with the system, particularly
(1971) list concatenation
(1973) insertion sort
(1974) a binary adder
(1976) an expression compiler for a stack machine
(1978) uniqueness of prime factorizations
(1983) invertibility of the RSA encryption algorithm
(1984) unsolvability of the halting problem for Pure Lisp
(1985) FM8501 microprocessor (Warren Hunt)
(1986) Gödel's incompleteness theorem (Shankar)
(1988) CLI Stack (Bill Bevier, Warren Hunt, Matt Kaufmann, J Moore, Bill Young)
(1990) Gauss' law of quadratic reciprocity (David Russinoff)
(1992) Byzantine Generals and Clock Synchronization (Bevier and Young)
(1992) A compiler for a subset of the Nqthm language (Arthur Flatau)
(1993) bi-phase mark asynchronous communications protocol
(1993) Motorola MC68020 and Berkeley C String Library (Yuan Yu)
(1994) Paris–Harrington Ramsey theorem (Kenneth Kunen)
(1996) The equivalence of NFSA and DFSA (Debora Weber-Wulff)
PC-Nqthm
A more powerful version, called PC-Nqthm (Proof-checker Nqthm), was developed by Matt Kaufmann. This gave the user access to the proof tools that the system uses automatically, so that more guidance can be given to the proof. This is a great help, as the system otherwise has an unproductive tendency to wander down infinite chains of inductive proofs.
Literature
A Computational Logic Handbook, R.S. Boyer and J S. Moore, Academic Press (2nd Edition), 1997.
The Boyer-Moore Theorem Prover and Its Interactive Enhancement, with M. Kaufmann and R. S. Boyer, Computers and Mathematics with Applications, 29(2), 1995, pp. 27–62.
Awards
In 2005 Robert S. Boyer, Matt Kaufmann, and J Strother Moore received the ACM Software System Award for their work on the Nqthm theorem prover.
References
External links
The Automated Reasoning System, Nqthm
The Boyer-Moore Theorem Prover (NQTHM)
Even though the system is no longer being supported, it is still available at
Runnable version on GitHub:
Theorem proving software systems
Common Lisp (programming language) software | Nqthm | [
"Mathematics"
] | 1,246 | [
"Theorem proving software systems",
"Automated theorem proving",
"Mathematical software"
] |
8,818,888 | https://en.wikipedia.org/wiki/Lin%E2%80%93Kernighan%20heuristic | In combinatorial optimization, Lin–Kernighan is one of the best heuristics for solving the symmetric travelling salesman problem. It belongs to the class of local search algorithms, which take a tour (Hamiltonian cycle) as part of the input and attempt to improve it by searching in the neighbourhood of the given tour for one that is shorter, and upon finding one repeats the process from that new one, until encountering a local minimum. As in the case of the related 2-opt and 3-opt algorithms, the relevant measure of "distance" between two tours is the number of edges which are in one but not the other; new tours are built by reassembling pieces of the old tour in a different order, sometimes changing the direction in which a sub-tour is traversed. Lin–Kernighan is adaptive and has no fixed number of edges to replace at a step, but favours small numbers such as 2 or 3.
Derivation
For a given instance of the travelling salesman problem, tours are uniquely determined by their sets of edges, so we may as well encode them as such. In the main loop of the local search, we have a current tour T and are looking for a new tour T′ such that the symmetric difference F = T △ T′ is not too large and the length of the new tour is less than the length of the current tour. Since F is typically much smaller than T and T′, it is convenient to consider the quantity
g(F) = (total length of the edges of F that lie in T) − (total length of the edges of F that lie in T′)
— the gain of using F when switching from T —
since g(F) equals the length of T minus the length of T′: how much longer the current tour T is than the new tour T′. Naively, k-opt can be regarded as examining all F with exactly 2k elements (k in T but not in T′, and another k in T′ but not in T) such that T △ F is again a tour, looking for such a set which has g(F) > 0. It is however easier to do those tests in the opposite order: first search for plausible F with positive gain, and only second check if T △ F is in fact a tour.
Define a trail in G to be alternating (with respect to T) if its edges are alternatingly in T and not in T, respectively. Because the subgraphs (V, T) and (V, T′) are 2-regular, the subgraph (V, T △ T′) will have vertices of degree 0, 2, and 4 only, and at each vertex there are as many incident edges from T as there are from T′. Hence (essentially by Hierholzer's algorithm for finding Eulerian circuits) the graph T △ T′ decomposes into closed alternating trails. Sets F that may arise as T △ T′ for some tour T′ may thus be found by enumerating closed alternating trails in G, even if not every closed alternating trail F makes T △ F into a tour; it could alternatively turn out to be a disconnected 2-regular subgraph.
Key idea
Alternating trails (closed or open) are built by extending a shorter alternating trail, so when exploring the neighbourhood of the current tour T, one is exploring a search tree of alternating trails. The key idea of the Lin–Kernighan algorithm is to remove from this tree all alternating trails which have gain ≤ 0. This does not prevent finding every closed trail with positive gain, thanks to the following lemma.
Lemma. If g_1, …, g_n are numbers such that g_1 + ⋯ + g_n > 0, then there is a cyclic permutation of these numbers such that all partial sums are positive as well, i.e., there is some k such that
g_k + g_(k+1) + ⋯ + g_(k+i) > 0 (indices taken modulo n) for all i = 0, …, n − 1.
For a closed alternating trail, one may take g_i to be the length of its i-th edge when that edge lies in T, and minus that length when it does not; the sum of these g_i is then the gain of the trail. Here the lemma implies that for every closed alternating trail with positive gain there exists at least one starting vertex for which all the gains of the partial trails are positive as well, so the trail will be found when the search explores the branch of alternating trails starting at that vertex. (Prior to that, the search may have considered other subtrails of it starting at other vertices but backed out because some subtrail failed the positive gain constraint.) Reducing the number of branches to explore translates directly to a reduction in runtime, and the sooner a branch can be pruned, the better.
This yields the following algorithm for finding all closed, positive gain alternating trails in the graph.
State: a stack of triples (u, i, g), where u is a vertex, i is the current number of edges in the trail, and g is the current trail gain.
For every vertex u, push (u, 0, 0) onto the stack.
While the stack is nonempty:
Pop a triple (u, i, g) off the stack and let u become the new endpoint of the trail. The current alternating trail is now v0 v1 ⋯ vi with vi = u.
If i is even then:
For each vertex v such that the edge uv lies in T and has not been used in the trail (there are at most two of these), push (v, i + 1, g + c(uv)) onto the stack.
If instead i is odd then:
If i ≥ 3, the closing edge uv0 lies neither in T nor in the trail, and g − c(uv0) > 0, then report the closed-up trail as a closed alternating trail with gain g − c(uv0).
For each vertex v such that g − c(uv) > 0 and the edge uv lies neither in T nor in the trail (there may be many of these, or there could be none), push (v, i + 1, g − c(uv)) onto the stack.
Stop
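The following Python sketch is one possible reading of the enumeration above; the tuple layout, the edge-reuse bookkeeping, and the closing condition (i ≥ 3, closing edge outside the tour) are interpretive choices rather than a canonical implementation. Runtime is exponential, so it is only usable on very small instances.

def closed_positive_trails(vertices, tour, cost):
    # tour: set of frozenset({u, v}) edges of the current tour T
    # cost: dict mapping frozenset({u, v}) -> edge length (complete graph assumed)
    results = []
    stack = [((v,), frozenset(), 0.0) for v in vertices]  # (path, used edges, gain)
    while stack:
        path, used, g = stack.pop()
        u, v0, i = path[-1], path[0], len(path) - 1
        if i % 2 == 0:
            # next edge is removed from the tour: at most two candidates per vertex
            for e in tour:
                if u in e and e not in used:
                    (w,) = e - {u}
                    stack.append((path + (w,), used | {e}, g + cost[e]))
        else:
            # possibly close the trail with a non-tour edge back to the start
            close = frozenset({u, v0})
            if (i >= 3 and u != v0 and close not in tour
                    and close not in used and g - cost[close] > 0):
                results.append((path + (v0,), g - cost[close]))
            # extend with a non-tour edge, pruning non-positive partial gains
            for w in vertices:
                e = frozenset({u, w})
                if w != u and e not in tour and e not in used and g - cost[e] > 0:
                    stack.append((path + (w,), used | {e}, g - cost[e]))
    return results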
As an enumeration algorithm this is slightly flawed, because it may report the same trail multiple times, with different starting points, but Lin–Kernighan does not care because it mostly aborts the enumeration after finding the first hit. It should however be remarked that:
Lin–Kernighan is not satisfied with just having found a closed alternating trail F of positive gain, it additionally requires that T △ F is a tour.
Lin–Kernighan also restricts the search in various ways, most obviously regarding the search depth (but not only in that way). The above unrestricted search still terminates because no edge may be used twice in a trail, so eventually there is no unpicked edge remaining in T, but that is far beyond what is practical to explore.
In most iterations one wishes to quickly find a better tour , so rather than actually listing all siblings in the search tree before exploring the first of them, one may wish to generate these siblings lazily.
Basic Lin–Kernighan algorithm
The basic form of the Lin–Kernighan algorithm not only does a local search counterpart of the above enumeration, but it also introduces two parameters that narrow the search.
The backtracking depth p₁ is an upper bound on the length of the alternating trail after backtracking; beyond this depth, the algorithm explores at most one way of extending the alternating trail. Standard value is that p₁ = 5.
The infeasibility depth p₂ is an alternating path length beyond which it begins to be required that closing up the current trail (regardless of the gain of doing so) yields an exchange to a new tour. Standard value is that p₂ = 2.
Because the number of alternating trails of bounded length grows as a power of n, and the final round of the algorithm may have to check all of them before concluding that the current tour is locally optimal, that power is a lower bound on the algorithm complexity. Lin & Kernighan report 2.2 as an empirical exponent of n in the average overall running time for their algorithm, but other implementors have had trouble reproducing that result. It appears unlikely that the worst-case running time is polynomial.
In terms of a stack as above, the algorithm is:
Input: an instance of the travelling salesman problem, and a tour T
Output: a locally optimal tour
Variables:
a stack of triples (u, i, g), where u is a vertex, i is the current number of edges in the trail, and g is the current trail gain,
the sequence v0, v1, …, vi of vertices in the current alternating trail,
the best set F* of exchange edges found for the current tour, and its corresponding gain g*.
Initialise the stack to being empty.
Repeat
Set F* := ∅ and g* := 0.
For every vertex u, push (u, 0, 0) onto the stack.
While the stack is nonempty:
Pop a triple (u, i, g) off the stack and let u become the new endpoint vi of the trail.
If i is even then
for each vertex v such that the edge uv lies in T and has not been used in the trail,
push (v, i + 1, g + c(uv)) onto the stack if: i + 1 ≤ p₂, or the trail extended by uv and closed up at v yields a tour (Hamiltonicity check)
else (i is odd):
If g − c(uv0) > g*, the closing edge uv0 lies neither in T nor in the trail, and the closed-up trail F satisfies that T △ F is a tour (Hamiltonicity check), then let F* := F and g* := g − c(uv0).
For each vertex v such that g − c(uv) > 0 and the edge uv lies neither in T nor in the trail, push (v, i + 1, g − c(uv)) onto the stack.
End if.
Let (u′, i′, g′) be the top element on the stack (peek, not pop). If i′ ≤ i (the search is about to backtrack) then
if g* > 0 then
set T := T △ F* (update current tour) and clear the stack.
else if i′ > p₁ then
pop all elements off the stack that have depth i′ except the topmost one (so that beyond the backtracking depth only one extension is explored)
end if
end if
end while
until g* = 0.
Return T
The lengths of the alternating trails considered are thus not explicitly bounded, but beyond the backtracking depth no more than one way of extending the current trail is considered, which in principle stops those explorations from raising the exponent in the runtime complexity.
Limitations
The closed alternating trails found by the above method are all connected, but the symmetric difference of two tours need not be, so in general this method of alternating trails cannot explore the full neighbourhood of a trail . The literature on the Lin–Kernighan heuristic uses the term sequential exchanges for those that are described by a single alternating trail. The smallest non-sequential exchange would however replace 4 edges and consist of two cycles of 4 edges each (2 edges added, 2 removed), so it is long compared to the typical Lin–Kernighan exchange, and there are few of these compared to the full set of 4-edge exchanges.
In at least one implementation by Lin & Kernighan there was an extra final step considering such non-sequential exchanges of 4 edges before declaring a tour locally optimal, which would mean the tours produced are 4-opt unless one introduces further constraints on the search (which Lin and Kernighan in fact did). The literature is vague on exactly what is included in the Lin–Kernighan heuristic proper, and what constitutes further refinements.
For the asymmetric TSP, the idea of using positive gain alternating trails to find favourable exchanges is less useful, because there are fewer ways in which pieces of a tour can be rearranged to yield new tours when one may not reverse the orientation of a piece. Two pieces can only be patched together to reproduce the original tour. Three pieces can be patched together to form a different tour in one way only, and the corresponding alternating trail does not extend to a closed trail for rearranging four pieces into a new tour. To rearrange four pieces, one needs a non-sequential exchange.
Checking Hamiltonicity
The Lin–Kernighan heuristic checks the validity of tour candidates at two points: obviously when deciding whether a better tour has been found, but also as a constraint to descending in the search tree, as controlled via the infeasibility depth p₂. Concretely, at larger depths in the search a vertex v is only appended to the alternating trail if the trail closed up at v yields an edge set F with T △ F a tour. By design the edge set T △ F constitutes a 2-factor in G, so what needs to be determined is whether that 2-factor consists of a single Hamiltonian cycle, or instead is made up of several cycles.
If naively posing this subproblem as giving a subroutine the whole edge set T △ F as input, one ends up with Θ(n) as the time complexity for this check, since it is necessary to walk around the full tour before being able to determine that it is in fact a Hamiltonian cycle. That is too slow for the second usage of this test, which gets carried out for every alternating trail with more than p₂ edges from T. If keeping track of more information, the test can instead be carried out in constant time.
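A minimal version of the naive linear-time check described above, assuming the candidate edge set is already known to be 2-regular (every vertex of degree exactly 2):

def is_single_cycle(n, edges):
    # edges: iterable of (u, v) pairs on vertices 0..n-1, each vertex of degree 2
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    prev, cur, steps = None, 0, 0
    while True:
        a, b = adj[cur]
        nxt = b if a == prev else a   # keep walking around the cycle, never backtrack
        prev, cur, steps = cur, nxt, steps + 1
        if cur == 0:
            return steps == n         # one n-cycle iff the walk closes after n steps

Walking the whole cycle is what makes this check linear; the bookkeeping scheme sketched in the next paragraph avoids the walk entirely.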
A useful degree of freedom here is that one may choose the order in which step 2.3.2 iterates over all vertices; in particular, one may follow the known tour T. After picking k edges from T, the remaining subgraph T ∖ F consists of k paths. The outcome of the Hamiltonicity test done when considering the next tour edge depends only on in which of these paths its endpoints reside and whether each endpoint comes before or after the other endpoint of its path. Hence it would be sufficient to examine a bounded number of different cases as part of performing step 2.3.2; as far as later edges are concerned, the outcome of this test can be inherited information rather than something that has to be computed fresh.
References
External links
LKH implementation
Concorde TSP implementation
LK Heuristic in Python
Combinatorial optimization
Combinatorial algorithms
Heuristic algorithms
Travelling salesman problem | Lin–Kernighan heuristic | [
"Mathematics"
] | 2,362 | [
"Combinatorial algorithms",
"Computational mathematics",
"Combinatorics"
] |
3,070,794 | https://en.wikipedia.org/wiki/Ternary%20Golay%20code | In coding theory, the ternary Golay codes are two closely related error-correcting codes.
The code generally known simply as the ternary Golay code is an [11, 6, 5]₃-code, that is, a linear code of length 11, dimension 6, and minimum distance 5 over a ternary alphabet; the relative distance of the code is as large as it possibly can be for a ternary code, and hence the ternary Golay code is a perfect code.
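The perfection claim can be checked against the sphere-packing (Hamming) bound, a standard computation shown here for convenience: with q = 3, n = 11, and error radius t = 2, the balls around the 3⁶ codewords tile the whole space exactly,
\[
  3^{6}\left[\binom{11}{0} + 2\binom{11}{1} + 2^{2}\binom{11}{2}\right]
  = 3^{6}\,(1 + 22 + 220)
  = 3^{6}\cdot 3^{5}
  = 3^{11}.
\]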
The extended ternary Golay code is a [12, 6, 6] linear code obtained by adding a zero-sum check digit to the [11, 6, 5] code.
In finite group theory, the extended ternary Golay code is sometimes referred to as the ternary Golay code.
Properties
Ternary Golay code
The ternary Golay code consists of 3⁶ = 729 codewords.
Its parity check matrix is
Any two different codewords differ in at least 5 positions.
Every ternary word of length 11 has a Hamming distance of at most 2 from exactly one codeword.
The code can also be constructed as the quadratic residue code of length 11 over the finite field F3 (i.e., the Galois Field GF(3) ).
Used in a football pool with 11 games, the ternary Golay code corresponds to 729 bets and guarantees exactly one bet with at most 2 wrong outcomes.
The set of codewords with Hamming weight 5 is a 3-(11,5,4) design.
The generator matrix given by Golay (1949, Table 1.) is
The automorphism group of the (original) ternary Golay code is the Mathieu group M11, which is the smallest of the sporadic simple groups.
Extended ternary Golay code
The complete weight enumerator of the extended ternary Golay code is
The automorphism group of the extended ternary Golay code is 2.M12, where M12 is the Mathieu group M12.
The extended ternary Golay code can be constructed as the span of the rows of a Hadamard matrix of order 12 over the field F3.
Consider all codewords of the extended code which have just six nonzero digits. The sets of positions at which these nonzero digits occur form the Steiner system S(5, 6, 12).
A generator matrix for the extended ternary Golay code is
The corresponding parity check matrix for this generator matrix is , where denotes the transpose of the matrix.
An alternative generator matrix for this code is
And its parity check matrix is .
The three elements of the underlying finite field are represented here by 0, 1, and −1, rather than by 0, 1, and 2.
It is also understood that −1 = 2 (i.e., −1 is the additive inverse of 1) and, likewise, 1 = −2. Products of these finite field elements are identical to those of the integers. Row and column sums are evaluated modulo 3.
Linear combinations, or vector additions, of the rows of the generator matrix
produce all possible words contained in the code. This is referred to as the span of the rows. The inner product of any two rows of the generator matrix will always sum to zero. These rows, or vectors, are said to be orthogonal.
The matrix product of the generator and parity-check matrices,
G·Hᵀ, is the matrix of all zeroes, and this is by design.
Indeed, this is an example of the very definition of any parity check matrix with respect to its generator matrix.
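A small self-contained Python check of the construction, built on one commonly quoted systematic generator matrix [I₆ | A] for the extended [12, 6, 6] code; the block A below is an assumption of this sketch (if it were mis-stated, the minimum-weight check would fail, while the codeword count of 3⁶ = 729 holds for any full-rank choice):

import itertools

# Assumed right-hand block A of a systematic generator matrix [I6 | A].
A = [[0, 1, 1, 1, 1, 1],
     [1, 0, 1, 2, 2, 1],
     [1, 1, 0, 1, 2, 2],
     [1, 2, 1, 0, 1, 2],
     [1, 2, 2, 1, 0, 1],
     [1, 1, 2, 2, 1, 0]]
G = [[1 if i == j else 0 for j in range(6)] + A[i] for i in range(6)]

codewords = set()
for msg in itertools.product(range(3), repeat=6):
    # Encode: codeword = msg * G over GF(3), column by column.
    word = tuple(sum(m * g for m, g in zip(msg, col)) % 3 for col in zip(*G))
    codewords.add(word)

assert len(codewords) == 3**6 == 729
min_wt = min(sum(x != 0 for x in w) for w in codewords if any(w))
print("minimum nonzero weight:", min_wt)   # expect 6 if A above is correct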
History and Applications
The ternary Golay code was published by Marcel J. E. Golay in 1949. It was independently discovered two years earlier by the Finnish football pool enthusiast Juhani Virtakallio, who published it in 1947 in issues 27, 28 and 33 of the football magazine Veikkaaja.
The ternary Golay code has been shown to be useful for an approach to fault-tolerant quantum computing known as magic state distillation.
See also
Berlekamp–van Lint–Seidel graph
Binary Golay code
References
Further reading
Coding theory
Finite fields
ja:3元ゴレイ符号 | Ternary Golay code | [
"Mathematics"
] | 811 | [
"Discrete mathematics",
"Coding theory"
] |
3,071,186 | https://en.wikipedia.org/wiki/Gravitational%20acceleration | In physics, gravitational acceleration is the acceleration of an object in free fall within a vacuum (and thus without experiencing drag). This is the steady gain in speed caused exclusively by gravitational attraction. All bodies accelerate in vacuum at the same rate, regardless of the masses or compositions of the bodies; the measurement and analysis of these rates is known as gravimetry.
At a fixed point on the surface, the magnitude of Earth's gravity results from the combined effect of gravitation and the centrifugal force from Earth's rotation. At different points on Earth's surface, the free fall acceleration ranges from about 9.764 m/s² to 9.834 m/s², depending on altitude, latitude, and longitude. A conventional standard value is defined exactly as 9.80665 m/s² (about 32.1740 ft/s²). Locations of significant variation from this value are known as gravity anomalies. This does not take into account other effects, such as buoyancy or drag.
Relation to the Universal Law
Newton's law of universal gravitation states that there is a gravitational force between any two masses that is equal in magnitude for each mass, and is aligned to draw the two masses toward each other. The formula is:
F = G·m₁·m₂/r²
where m₁ and m₂ are any two masses, G is the gravitational constant, and r is the distance between the two point-like masses.
Using the integral form of Gauss's Law, this formula can be extended to any pair of objects of which one is far more massive than the other — like a planet relative to any man-scale artifact. The distances between planets and between the planets and the Sun are (by many orders of magnitude) larger than the sizes of the sun and the planets. In consequence both the sun and the planets can be considered as point masses and the same formula applied to planetary motions. (As planets and natural satellites form pairs of comparable mass, the distance 'r' is measured from the common centers of mass of each pair rather than the direct total distance between planet centers.)
If one mass is much larger than the other, it is convenient to take it as observational reference and define it as the source of a gravitational field of magnitude and orientation given by:
g = −(G·M/r²) r̂
where M is the mass of the field source (larger), and r̂ is a unit vector directed from the field source to the sample (smaller) mass. The negative sign indicates that the force is attractive (points backward, toward the source).
Then the attraction force vector onto a sample mass m can be expressed as:
F = m·g
Here g is the frictionless, free-fall acceleration sustained by the sampling mass m under the attraction of the gravitational source.
It is a vector oriented toward the field source, of magnitude measured in acceleration units. The gravitational acceleration vector depends only on how massive the field source M is and on the distance r to the sample mass m. It does not depend on the magnitude of the small sample mass.
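As a numerical sanity check of the formula above (rounded standard values for the constants, nothing more precise intended):

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
r = 6.371e6          # mean radius of the Earth, m

g = G * M_earth / r**2
print(round(g, 3))   # ~9.82 m/s^2, close to the conventional 9.80665 m/s^2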
This model represents the "far-field" gravitational acceleration associated with a massive body. When the dimensions of a body are not trivial compared to the distances of interest, the principle of superposition can be used for differential masses for an assumed density distribution throughout the body in order to get a more detailed model of the "near-field" gravitational acceleration. For satellites in orbit, the far-field model is sufficient for rough calculations of altitude versus period, but not for precision estimation of future location after multiple orbits.
The more detailed models include (among other things) the bulging at the equator for the Earth, and irregular mass concentrations (due to meteor impacts) for the Moon. The Gravity Recovery and Climate Experiment (GRACE) mission launched in 2002 consists of two probes, nicknamed "Tom" and "Jerry", in polar orbit around the Earth measuring differences in the distance between the two probes in order to more precisely determine the gravitational field around the Earth, and to track changes that occur over time. Similarly, the Gravity Recovery and Interior Laboratory mission from 2011 to 2012 consisted of two probes ("Ebb" and "Flow") in polar orbit around the Moon to more precisely determine the gravitational field for future navigational purposes, and to infer information about the Moon's physical makeup.
Comparative gravities of the Earth, Sun, Moon, and planets
The table below shows comparative gravitational accelerations at the surface of the Sun, the Earth's moon, each of the planets in the Solar System and their major moons, Ceres, Pluto, and Eris. For gaseous bodies, the "surface" is taken to mean visible surface: the cloud tops of the giant planets (Jupiter, Saturn, Uranus, and Neptune), and the Sun's photosphere. The values in the table have not been de-rated for the centrifugal force effect of planet rotation (and cloud-top wind speeds for the giant planets) and therefore, generally speaking, are similar to the actual gravity that would be experienced near the poles. For reference, the time it would take an object to fall , the height of a skyscraper, is shown, along with the maximum speed reached. Air resistance is neglected.
General relativity
In Einstein's theory of general relativity, gravitation is an attribute of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, masses distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. The gravitational force is a fictitious force. There is no gravitational acceleration, in that the proper acceleration and hence four-acceleration of objects in free fall are zero. Rather than undergoing an acceleration, objects in free fall travel along straight lines (geodesics) on the curved spacetime.
Gravitational field
See also
Air track
Gravimetry
Gravity of Earth
Gravitation of the Moon
Gravity of Mars
Newton's law of universal gravitation
Standard gravity
Notes
References
Gravimetry
Gravity
Acceleration
Temporal rates | Gravitational acceleration | [
"Physics",
"Mathematics"
] | 1,195 | [
"Temporal quantities",
"Physical quantities",
"Acceleration",
"Quantity",
"Temporal rates",
"Wikipedia categories named after physical quantities"
] |
3,071,326 | https://en.wikipedia.org/wiki/Giant%20cell | A giant cell (also known as a multinucleated giant cell, or multinucleate giant cell) is a mass formed by the union of several distinct cells (usually histiocytes), often forming a granuloma.
Although there is typically a focus on the pathological aspects of multinucleate giant cells (MGCs), they also play many important physiological roles. Osteoclasts are a type of MGC that are critical for the maintenance, repair, and remodeling of bone and are present normally in a healthy human body. Osteoclasts are frequently classified and discussed separately from other MGCs which are more closely linked with disease.
Non-osteoclast MGCs can arise in response to an infection, such as tuberculosis, herpes, or HIV, or as part of a foreign body reaction. These MGCs are cells of monocyte or macrophage lineage fused together. Similar to their monocyte precursors, they can phagocytose foreign materials. However, their large size and extensive membrane ruffling make them better equipped to clear up larger particles. They utilize activated CR3s to ingest complement-opsonized targets. Non-osteoclast MGCs are also responsible for the clearance of cell debris, which is necessary for tissue remodeling after injuries.
Types include foreign-body giant cells, Langhans giant cells, and Touton giant cells; multinucleated giant cells are also the hallmark of giant-cell arteritis.
History
Osteoclasts were discovered in 1873. However, it was not until the development of the organ culture in the 1970s that their origin and function could be deduced. Although there was a consensus early on about the physiological function of osteoclasts, theories on their origins were heavily debated. Many believed osteoclasts and osteoblasts came from the same progenitor cell. Because of this, osteoclasts were thought to be derived from cells in connective tissue. Studies that observed that bone resorption could be restored by bone marrow and spleen transplants helped prove osteoclasts' hematopoietic origin.
Other multinucleated giant cell formations can arise from numerous types of bacteria, diseases, and cell formations. Giant cells are also known to develop when infections are present. They were first observed as early as the middle of the last century, but it is not fully understood why these reactions occur. In the process of giant cell formation, monocytes or macrophages fuse together, which could cause multiple problems for the immune system.
Osteoclast
Osteoclasts are the most prominent examples of MGCs and are responsible for the resorption of bones in the body. Like other MGCs, they are formed from the fusion of monocyte/macrophage precursors. However, unlike other MGCs, the fusion pathway they originate from is well elucidated. They also do not ingest foreign materials and instead absorb bone matrix and minerals.
Osteoclasts are typically associated more with healthy physiological functions than they are with pathological states. They function alongside osteoblasts to remodel and maintain the integrity of bones in the body. They also contribute to the creation of the niche necessary for hematopoiesis and negatively regulate T cells. However, while the primary functions of osteoclasts are integral to maintaining a healthy physiological state, they have also been linked to osteoporosis and the formation of bone tumors.
Giant cell arteritis
Giant cell arteritis, also known as temporal arteritis or cranial arteritis, is the most common MGC-linked disease. This type of arteritis causes the arteries in the head, neck, and arm area to swell to abnormal sizes. Although the cause of this disease is not currently known, it appears to be related to polymyalgia rheumatica.
Giant cell arteritis is most prevalent in older individuals, with the rate of disease being seen to increase from age 50. Women are 2–3 times more likely to develop the disease than men.
Northern Europeans have been observed to have a higher incidence of giant cell arteritis compared to southern European, Hispanic, and Asian populations. It has been suggested that this difference may lie in the criteria used to diagnose giant cell arteritis rather than actual disease incidence, in addition to genetic and geographic factors.
Symptoms
Symptoms may include a mild fever, loss of appetite, fatigue, vision loss, and severe headaches. These symptoms are often misinterpreted leading to a delay in treatment. If left untreated, this disease can result in permanent blindness.
Diagnosis
The current highest standard for diagnosis is a temporal artery biopsy. The skin on the patient's face is anesthetized, and an incision is made in the face around the area of the temples to obtain a sample of the temporal artery. The incision is then sutured. A histopathologist examines the sample under a microscope and issues a pathology report (pending extra tests that may be requested by the pathologist).
The management regime consists primarily of systemic corticosteroids (e.g. prednisolone), commencing at a high dose.
Langhans giant cell
Langhans giant cells are named for the pathologist who discovered them, Theodor Langhans. Like many of the other kinds of giant cell formations, epithelioid macrophages fuse together and form a multinucleated giant cell. The nuclei form a circle or semicircle similar to the shape of a horseshoe away from the center of the cell. Langhans giant cell was typically associated with tuberculosis but has been found to occur in many types of granulomatous diseases.
Langhans giant cell could be closely related to tuberculosis, syphilis, sarcoidosis, and deep fungal infections. Langhans giant cell occurs frequently in delayed hypersensitivity.
Symptoms
Symptoms may include fever, weight loss, fatigue and loss of appetite.
Diagnosis
This type of giant cell can be caused by bacteria that spread from person to person through the air. Tuberculosis is closely associated with HIV, as people with HIV have a harder time fighting off infections. Many tests may be performed to rule out other related diseases and obtain the correct diagnosis for Langhans giant cell.
Touton giant cell
Also known as xanthelasmatic giant cells, Touton giant cells consist of fused epithelioid macrophages and have multiple nuclei. They are characterized by the ring-shaped arrangement of their nuclei and the presence of foamy cytoplasm surrounding the nucleus. Touton giant cells have been observed in lipid-laden lesions such as fat necrosis.
Demographics
The formation of Touton giant cell is most common in men and women aged 37–78.
Symptoms
Touton giant cells typically cause similar symptoms to other forms of giant cell, such as fever, weight loss, fatigue and loss of appetite.
Foreign-body giant cell
Foreign-body giant cells form when a subject is exposed to a foreign substance. Exogenous substances can include talc or sutures. As with other types of giant cells, epithelioid macrophages fusing together causes these giant cells to form and grow. In this form of giant cell, the nuclei are arranged in an overlapping manner. This giant cell is often found in tissue because of medical devices, prostheses, and biomaterials.
Reed-Sternberg cell
Reed-Sternberg cells are generally thought to originate from B-lymphocytes. They are hard to study due to their rarity, and there are other theories about the origins of these cells. Some less popular theories speculate that they may arise from the fusion between reticulum cells, lymphocytes, and virus-infected cells.
Similar to other MGCs, Reed-Sternberg cells are large and are either multinucleated or have a bilobed nucleus. Their nuclei are irregularly shaped, contain clear chromatin, and possess an eosinophilic nucleolus.
Role in tumour formation
Some researchers have conjectured that giant cells may be instrumental in the formation of tumours, and that their origin may be in the stress-induced genomic reorganization proposed by Nobel laureate Barbara McClintock. It had previously been suggested that such genomic stress could be aggravated by some genotoxic agents used in cancer therapy.
Poly-aneuploid cancer cells (PACCs) may serve as efficient sources of heritable variation that allows cancer cells to evolve rapidly.
Endogenous causative agents
Endogenous substances such as keratin, fat, and cholesterol crystals (cholesteatoma) can induce giant cell formation.
Multinucleated giant cells in COVID-19 patients
Coronavirus disease 2019 (COVID-19) is caused by a novel coronavirus called SARS-CoV-2. Multinucleated giant cells have been detected in biopsy specimens from patients with COVID-19. This type of giant cell was first found in the pulmonary pathology of early-phase COVID-19 pneumonia in two patients with lung cancer who underwent biopsy. Specifically, they were located in inflammatory fibrin clusters, sometimes together with mononuclear inflammatory cells. Another pathological study also detected this type of giant cell in COVID-19 and described it as a "multinucleated syncytial cell". The morphological analysis showed that multinucleated syncytial cells and atypical enlarged pneumocytes demonstrating cytomorphological changes consistent with viral infection were found in the intra-alveolar spaces. Viral antigen was detected in the cytoplasm of the multinucleated syncytial cells, indicating the presence of the SARS-CoV-2 virus. However, a later post-mortem study described these cells as 'giant cell-like' rather than true giant cells derived from histiocytes; instead, they are derived from type II pneumocyte clusters with cytopathic changes, as confirmed by cytokeratin staining. The infection and pathogenesis of SARS-CoV-2 in human patients remain largely unknown.
Multinucleate giant cells have also been described in MERS-CoV, a closely related coronavirus.
A further study to characterize the role of multinucleated giant cells in human immune defense against COVID-19 may lead to more effective therapies.
See also
Idiopathic giant cell myocarditis
Large cell
Reed–Sternberg cell
Subependymal giant cell astrocytoma
Syncytium
References
External links
Macrophage fusion: the making of osteoclasts and giant cells
Cell biology | Giant cell | [
"Biology"
] | 2,237 | [
"Cell biology"
] |
3,071,594 | https://en.wikipedia.org/wiki/Theta%20wave | Theta waves generate the theta rhythm, a neural oscillation in the brain that underlies various aspects of cognition and behavior, including learning, memory, and spatial navigation in many animals. It can be recorded using various electrophysiological methods, such as electroencephalogram (EEG), recorded either from inside the brain or from electrodes attached to the scalp.
At least two types of theta rhythm have been described. The hippocampal theta rhythm is a strong oscillation that can be observed in the hippocampus and other brain structures in numerous species of mammals including rodents, rabbits, dogs, cats, and marsupials. "Cortical theta rhythms" are low-frequency components of scalp EEG, usually recorded from humans. Theta rhythms can be quantified using quantitative electroencephalography (qEEG) with freely available toolboxes such as EEGLAB or the Neurophysiological Biomarker Toolbox (NBT).
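As a sketch of how such a quantification might look in practice (this example uses SciPy on a synthetic signal rather than EEGLAB or NBT, and the sampling rate and amplitudes are arbitrary assumptions), theta power can be estimated by integrating a Welch power spectral density over the 4–7 Hz band:

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                 # 30 s of synthetic "EEG"
rng = np.random.default_rng(0)
eeg = 10e-6 * np.sin(2 * np.pi * 6 * t) + 5e-6 * rng.standard_normal(t.size)

# Welch power spectral density, then integrate over the theta band (4-7 Hz).
freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
band = (freqs >= 4) & (freqs <= 7)
theta_power = np.trapz(psd[band], freqs[band])
print(f"theta-band power: {theta_power:.2e} V^2")
```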
In rats, theta wave rhythmicity is easily observed in the hippocampus, but can also be detected in numerous other cortical and subcortical brain structures. Hippocampal theta waves, with a frequency range of 6–10 Hz, appear when a rat is engaged in active motor behavior such as walking or exploratory sniffing, and also during REM sleep. Theta waves with a lower frequency range, usually around 6–7 Hz, are sometimes observed when a rat is motionless but alert. When a rat is eating, grooming, or sleeping, the hippocampal EEG usually shows a non-rhythmic pattern known as large irregular activity or LIA. The hippocampal theta rhythm depends critically on projections from the medial septal area, which in turn receives input from the hypothalamus and several brainstem areas. Hippocampal theta rhythms in other species differ in some respects from those in rats. In cats and rabbits, the frequency range is lower (around 4–6 Hz), and theta is less strongly associated with movement than in rats. In bats, theta appears in short bursts associated with echolocation.
In humans, hippocampal theta rhythm has been observed and linked to memory formation and navigation. As with rats, humans exhibit hippocampal theta wave activity during REM sleep. Humans also exhibit predominantly cortical theta wave activity during REM sleep. Increased sleepiness is associated with decreased alpha wave power and increased theta wave power. Meditation has been shown to increase theta power.
The function of the hippocampal theta rhythm is not clearly understood. Green and Arduini, in the first major study of this phenomenon, noted that hippocampal theta usually occurs together with desynchronized EEG in the neocortex, and proposed that it is related to arousal. Vanderwolf and his colleagues, noting the strong relationship between theta and motor behavior, have argued that it is related to sensorimotor processing. Another school, led by John O'Keefe, has suggested that theta is part of the mechanism animals use to keep track of their location within the environment. Another theory links the theta rhythm to mechanisms of learning and memory (Hasselmo, 2005). This theory states that theta waves may act as a switch between encoding and recall mechanisms, and experimental data on rodents and humans support this idea. Another study on humans showed that theta oscillations determine memory function (encoding or recall) when interacting with high-frequency gamma activity in the hippocampus. These findings support the idea that theta oscillations support memory formation and retrieval in interaction with other oscillatory rhythms. These different theories have since been combined, as it has been shown that the firing patterns can support both navigation and memory.
In human EEG studies, the term theta refers to frequency components in the 4–7 Hz range, regardless of their source. Cortical theta is observed frequently in young children. In older children and adults, it tends to appear during meditative, drowsy, hypnotic or sleeping states, but not during the deepest stages of sleep. Theta from the midfrontal cortex is specifically related to cognitive control and alterations in these theta signals are found in multiple psychiatric and neurodevelopmental disorders.
History
Although there were a few earlier hints, the first clear description of regular slow oscillations in the hippocampal EEG came from a paper written in German by Jung and Kornmüller (1938). They were not able to follow up on these initial observations, and it was not until 1954 that further information became available, in a very thorough study by John D. Green and Arnaldo Arduini that mapped out the basic properties of hippocampal oscillations in cats, rabbits, and monkeys (Green and Arduini, 1954). Their findings provoked widespread interest, in part because they related hippocampal activity to arousal, which was at that time the hottest topic in neuroscience. Green and Arduini described an inverse relationship between hippocampal and cortical activity patterns, with hippocampal rhythmicity occurring alongside desynchronized activity in the cortex, whereas an irregular hippocampal activity pattern was correlated with the appearance of large slow waves in the cortical EEG.
Over the following decade came an outpouring of experiments examining the pharmacology and physiology of theta. By 1965, Charles Stumpf was able to write a lengthy review of "Drug action on the electrical activity of the hippocampus" citing hundreds of publications (Stumpf, 1965), and in 1964 John Green, who served as the leader of the field during this period, was able to write an extensive and detailed review of hippocampal electrophysiology (Green, 1964). A major contribution came from a group of investigators working in Vienna, including Stumpf and Wolfgang Petsche, who established the critical role of the medial septum in controlling hippocampal electrical activity, and worked out some of the pathways by which it exerts its influence.
Terminology
Because of a historical accident, the term "theta rhythm" is used to refer to two different phenomena, "hippocampal theta" and "human cortical theta". Both of these are oscillatory EEG patterns, but they may have little in common beyond the name "theta".
In the oldest EEG literature dating back to the 1920s, Greek letters such as alpha, beta, theta, and gamma were used to classify EEG waves falling into specific frequency ranges, with "theta" generally meaning a range of about 4–7 cycles per second (Hz). In the 1930s–1950s, a very strong rhythmic oscillation pattern was discovered in the hippocampus of cats and rabbits (Green & Arduini, 1954). In these species, the hippocampal oscillations fell mostly into the 4–6 Hz frequency range, so they were referred to as "theta" oscillations. Later, hippocampal oscillations of the same type were observed in rats; however, the frequency of rat hippocampal EEG oscillations averaged about 8 Hz and rarely fell below 6 Hz. Thus the rat hippocampal EEG oscillation should not, strictly speaking, have been called a "theta rhythm". However the term "theta" had already become so strongly associated with hippocampal oscillations that it continued to be used even for rats. Over the years this association has come to be stronger than the original association with a specific frequency range, but the original meaning also persists.
Thus, "theta" can mean either of two things:
A specific type of regular oscillation seen in the hippocampus and several other brain regions connected to it.
EEG oscillations in the 4–7 Hz frequency range, regardless of where in the brain they occur or what their functional significance is.
The first meaning is usually intended in literature that deals with rats or mice, while the second meaning is usually intended in studies of human EEG recorded using electrodes glued to the scalp. In general, it is not safe to assume that observations of "theta" in the human EEG have any relationship to the "hippocampal theta rhythm". Scalp EEG is generated almost entirely by the cerebral cortex, and even if it falls into a certain frequency range, this cannot be taken to indicate that it has any functional dependence on the hippocampus.
Hippocampal
Due to the density of its neural layers, the hippocampus generates some of the largest EEG signals of any brain structure. In some situations the EEG is dominated by regular waves at 4–10 Hz, often continuing for many seconds. This EEG pattern is known as the hippocampal theta rhythm. It has also been called Rhythmic Slow Activity (RSA), to contrast it with the large irregular activity (LIA) that usually dominates the hippocampal EEG when theta is not present.
In rats, hippocampal theta is seen mainly in two conditions: first, when an animal is running, walking, or in some other way actively interacting with its surroundings; second, during REM sleep. The frequency of the theta waves increases as a function of running speed, starting at about 6.5 Hz on the low end, and increasing to about 9 Hz at the fastest running speeds, although higher frequencies are sometimes seen for brief high-velocity movements such as jumps across wide gaps. In larger species of animals, theta frequencies are generally lower. The behavioral dependency also seems to vary by species: in cats and rabbits, theta is often observed during states of motionless alertness. This has been reported for rats as well, but only when they are fearful (Sainsbury et al., 1987).
Theta is not just confined to the hippocampus. In rats, it can be observed in many parts of the brain, including nearly all that interact strongly with the hippocampus. The generation of the rhythm is dependent on the medial septal area: this area projects to all of the regions that show theta rhythmicity, and destruction of it eliminates theta throughout the brain (Stewart & Fox, 1990).
Type 1 and type 2
In 1975 Kramis, Bland, and Vanderwolf proposed that in rats there are two distinct types of hippocampal theta rhythm, with different behavioral and pharmacological properties (Kramis et al., 1975). Type 1 ("atropine resistant") theta, according to them, appears during locomotion and other types of "voluntary" behavior and during REM sleep, has a frequency usually around 8 Hz, and is unaffected by the anticholinergic drug atropine. Type 2 ("atropine sensitive") theta appears during immobility and during anesthesia induced by urethane, has a frequency in the 4–7 Hz range, and is eliminated by administration of atropine. Many later investigations have supported the general concept that hippocampal theta can be divided into two types, although there has been dispute about the precise properties of each type. Type 2 theta is comparatively rare in unanesthetized rats: it may be seen briefly when an animal is preparing to make a movement but hasn't yet executed it, but has only been reported for extended periods in animals that are in a state of frozen immobility because of the nearby presence of a predator such as a cat or ferret (Sainsbury et al., 1987).
Relationship with behavior
Vanderwolf (1969) made a strong argument that the presence of theta in the hippocampal EEG can be predicted on the basis of what an animal is doing, rather than why the animal is doing it. Active movements such as running, jumping, bar-pressing, or exploratory sniffing are reliably associated with theta; inactive states such as eating or grooming are associated with LIA. Later studies showed that theta frequently begins several hundred milliseconds before the onset of movement, and that it is associated with the intention to move rather than with feedback produced by movement (Whishaw & Vanderwolf, 1973). The faster an animal runs, the higher the theta frequency. In rats, the slowest movements give rise to frequencies around 6.5 Hz, the fastest to frequencies around 9 Hz, although faster oscillations can be observed briefly during very vigorous movements such as large jumps.
Mechanisms
Numerous studies have shown that the medial septal area plays a central role in generating hippocampal theta (Stewart & Fox, 1990). Lesioning the medial septal area, or inactivating it with drugs, eliminates both type 1 and type 2 theta. Under certain conditions, theta-like oscillations can be induced in hippocampal or entorhinal cells in the absence of septal input, but this does not occur in intact, undrugged adult rats. The critical septal region includes the medial septal nucleus and the vertical limb of the diagonal band of Broca. The lateral septal nucleus, a major recipient of hippocampal output, probably does not play an essential role in generating theta.
The medial septal area projects to a large number of brain regions that show theta modulation, including all parts of the hippocampus as well as the entorhinal cortex, perirhinal cortex, retrosplenial cortex, medial mamillary and supramammillary nuclei of the hypothalamus, anterior nuclei of the thalamus, amygdala, inferior colliculus, and several brainstem nuclei (Buzsáki, 2002). Some of the projections from the medial septal area are cholinergic; the rest are GABAergic or glutamatergic. It is commonly argued that cholinergic receptors do not respond rapidly enough to be involved in generating theta waves, and therefore that GABAergic and/or glutamatergic signals (Ujfalussy and Kiss, 2006) must play the central role.
A major research problem has been to discover the "pacemaker" for the theta rhythm, that is, the mechanism that determines the oscillation frequency. The answer is not yet entirely clear, but there is some evidence that type 1 and type 2 theta depend on different pacemakers. For type 2 theta, the supramammillary nucleus of the hypothalamus appears to exert control (Kirk, 1998). For type 1 theta, the picture is still unclear, but the most widely accepted hypothesis proposes that the frequency is determined by a feedback loop involving the medial septal area and hippocampus (Wang, 2002).
Several types of hippocampal and entorhinal neurons are capable of generating theta-frequency membrane potential oscillations when stimulated. Typically these are sodium-dependent voltage-sensitive oscillations in membrane potential at near-action potential voltages (Alonso & Llinás, 1989). Specifically, it appears that in neurons of the CA1 and dentate gyrus, these oscillations result from an interplay of dendritic excitation via a persistent sodium current (INaP) with perisomatic inhibition (Buzsáki, 2002).
Generators
As a rule, EEG signals are generated by synchronized synaptic input to the dendrites of neurons arranged in a layer. The hippocampus contains multiple layers of very densely packed neurons—the dentate gyrus and the CA3/CA1/subicular layer—and therefore has the potential to generate strong EEG signals. Basic EEG theory says that when a layer of neurons generates an EEG signal, the signal always phase-reverses at some level. Thus, theta waves recorded from sites above and below a generating layer have opposite signs. There are other complications as well: the hippocampal layers are strongly curved, and theta-modulated inputs impinge on them from multiple pathways, with varying phase relationships. The outcome of all these factors is that the phase and amplitude of theta oscillations change in a very complex way as a function of position within the hippocampus. The largest theta waves, however, are generally recorded from the vicinity of the fissure that separates the CA1 molecular layer from the dentate gyrus molecular layer. In rats, these signals frequently exceed 1 millivolt in amplitude. Theta waves recorded from above the hippocampus are smaller, and polarity-reversed with respect to the fissure signals.
The strongest theta waves are generated by the CA1 layer, and the most significant input driving them comes from the entorhinal cortex, via the direct EC→CA1 pathway. Another important driving force comes from the CA3→CA1 projection, which is out of phase with the entorhinal input, leading to a gradual phase shift as a function of depth within CA1 (Brankack, et al. 1993). The dentate gyrus also generates theta waves, which are difficult to separate from the CA1 waves because they are considerably smaller in amplitude, but there is some evidence that dentate gyrus theta is usually about 90 degrees out of phase from CA1 theta. Direct projections from the septal area to hippocampal interneurons also play a role in generating theta waves, but their influence is much smaller than that of the entorhinal inputs (which are, however, themselves controlled by the septum).
Research findings
Theta-frequency activity arising from the hippocampus is manifested during some short-term memory tasks (Vertes, 2005). Studies suggest that these rhythms reflect the "on-line" state of the hippocampus; one of readiness to process incoming signals (Buzsáki, 2002). Conversely, theta oscillations have been correlated to various voluntary behaviors (exploration, spatial navigation, etc.) and alert states (goose bumps, etc.) in rats (Vanderwolf, 1969), suggesting that it may reflect the integration of sensory information with motor output (for review, see Bland & Oddie, 2001). A large body of evidence indicates that theta rhythm is likely involved in spatial learning and navigation (Buzsáki, 2005).
Theta rhythms are very strong in rodent hippocampi and entorhinal cortex during learning and memory retrieval, and are believed to be vital to the induction of long-term potentiation, a potential cellular mechanism of learning and memory. Phase precession along the theta wave in the hippocampus permits neural signals representing events that are only expected or those from the recent past to be placed next to the actually ongoing ones along a single theta cycle, and to be repeated over several theta cycles. This mechanism is supposed to allow long term potentiation (LTP) to reinforce the connections between neurons of the hippocampus representing subsequent elements of a memory sequence. Indeed, it has been suggested that stimulation at the theta frequency is optimal for the induction of hippocampal LTP. Based on evidence from electrophysiological studies showing that both synaptic plasticity and strength of inputs to hippocampal region CA1 vary systematically with ongoing theta oscillations (Hyman et al., 2003; Brankack et al., 1993), it has been suggested that the theta rhythm functions to separate periods of encoding of current sensory stimuli and retrieval of episodic memory cued by current stimuli so as to avoid interference that would occur if encoding and retrieval were simultaneous.
Humans and other primates
In non-human animals, EEG signals are usually recorded using electrodes implanted in the brain; the majority of theta studies have involved electrodes implanted in the hippocampus. In humans, because invasive studies are not ethically permissible except in some neurological patients, the largest number of EEG studies have been conducted using electrodes glued to the scalp. The signals picked up by scalp electrodes are comparatively small and diffuse and arise almost entirely from the cerebral cortex, as the hippocampus is too small and too deeply buried to generate recognizable scalp EEG signals. Human EEG recordings show clear theta rhythmicity in some situations, but because of the technical difficulties, it has been difficult to tell whether these signals have any relationship with the hippocampal theta signals recorded from other species.
In contrast to the situation in rats, where long periods of theta oscillations are easily observed using electrodes implanted at many sites, theta has been difficult to pin down in primates, even when intracortical electrodes have been available. Green and Arduini (1954), in their pioneering study of theta rhythms, reported only brief bursts of irregular theta in monkeys. Other investigators have reported similar results, although Stewart and Fox (1991) described a clear 7–9 Hz theta rhythm in the hippocampus of urethane-anesthetized macaques and squirrel monkeys, resembling the type 2 theta observed in urethane-anesthetized rats.
Most of the available information on human hippocampal theta comes from a few small studies of epileptic patients with intracranially implanted electrodes used as part of a treatment plan. In the largest and most systematic of these studies, Cantero et al. (2003) found that oscillations in the 4–7 Hz frequency range could be recorded from both the hippocampus and neocortex. The hippocampal oscillations were associated with REM sleep and the transition from sleep to waking, and came in brief bursts, usually less than a second long. Cortical theta oscillations were observed during the transition from sleep and during quiet wakefulness; however, the authors were unable to find any correlation between hippocampal and cortical theta waves, and concluded that the two processes are probably controlled by independent mechanisms.
Studies have shown an association of hypnosis with stronger theta-frequency activity as well as with changes to the gamma-frequency activity (Jensen et al., 2015). Also, increased theta waves have been seen in humans in 'no thought' meditation.
Theta oscillations in learning and cognitive control
Theta oscillations, typically defined within the frequency range of 4–7 Hz, play a significant role in various cognitive processes, including learning and cognitive control. Research has shown that these oscillations are closely associated with memory encoding and retrieval, emotional regulation, and the maintenance of cognitive tasks.
Learning and memory
Theta activity has been linked to the encoding of new information and the retrieval of memories. In particular, studies have indicated that increased theta oscillations are present during tasks that require active memory engagement. For instance, theta-phase locking of single neurons has been observed during spatial memory tasks, suggesting that the timing of neuronal firing in relation to theta waves is crucial for effective memory processing.
This phenomenon underscores the importance of theta rhythms in organizing neural activity to facilitate learning.
Cognitive control
In the context of cognitive control, theta oscillations are believed to reflect the maintenance of task rules and stimulus-action associations. Research indicates that during cognitive control tasks, increased theta amplitude is observed in frontal brain regions, which are critical for decision-making and behavioral regulation. Specifically, theta activity is thought to support mechanisms that allow individuals to adapt their behavior based on changing environmental demands and internal goals. This dynamic interplay between theta oscillations and cognitive control processes highlights how these brain rhythms contribute to efficient task performance. Moreover, studies using non-invasive brain stimulation have demonstrated the causal role of theta oscillations in the prefrontal cortex during the anticipation of cognitive control demands.
In summary, theta oscillations serve as a fundamental neural mechanism underlying both learning and cognitive control, facilitating the organization of information processing and adaptive behavior in response to environmental challenges.
See also
Epilepsy
Brain waves
Delta wave — (0.1–3 Hz)
Theta wave — (4–8 Hz)
Alpha wave — (8–15 Hz)
Mu wave — (7.5–12.5 Hz)
SMR wave — (12.5–15.5 Hz)
Beta wave — (16–31 Hz)
Gamma wave — (32–100 Hz)
References
External links
Brain slice models of theta EEG activity
Electroencephalography
Neurophysiology
Electrophysiology
Waves | Theta wave | [
"Physics"
] | 5,056 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
3,071,612 | https://en.wikipedia.org/wiki/Hydrogen%20spectral%20series | The emission spectrum of atomic hydrogen has been divided into a number of spectral series, with wavelengths given by the Rydberg formula. These observed spectral lines are due to the electron making transitions between two energy levels in an atom. The classification of the series by the Rydberg formula was important in the development of quantum mechanics. The spectral series are important in astronomical spectroscopy for detecting the presence of hydrogen and calculating red shifts.
Physics
A hydrogen atom consists of an electron orbiting its nucleus. The electromagnetic force between the electron and the nuclear proton leads to a set of quantum states for the electron, each with its own energy. These states were visualized by the Bohr model of the hydrogen atom as being distinct orbits around the nucleus. Each energy level, or electron shell, or orbit, is designated by an integer, as shown in the figure. The Bohr model was later replaced by quantum mechanics in which the electron occupies an atomic orbital rather than an orbit, but the allowed energy levels of the hydrogen atom remained the same as in the earlier theory.
Spectral emission occurs when an electron transitions, or jumps, from a higher energy state to a lower energy state. To distinguish the two states, the lower energy state is commonly designated as n′, and the higher energy state is designated as n. The energy of an emitted photon corresponds to the energy difference between the two states. Because the energy of each state is fixed, the energy difference between them is fixed, and the transition will always produce a photon with the same energy.
The spectral lines are grouped into series according to n′. Lines are named sequentially starting from the longest wavelength/lowest frequency of the series, using Greek letters within each series. For example, the 2 → 1 line is called "Lyman-alpha" (Ly-α), while the 7 → 3 line is called "Paschen-delta" (Pa-δ).
There are emission lines from hydrogen that fall outside of these series, such as the 21 cm line. These emission lines correspond to much rarer atomic events such as hyperfine transitions. The fine structure also results in single spectral lines appearing as two or more closely grouped thinner lines, due to relativistic corrections.
In quantum mechanical theory, the discrete spectrum of atomic emission is based on the Schrödinger equation, which is mainly devoted to the study of energy spectra of hydrogen-like atoms, whereas the time-dependent equivalent Heisenberg equation is convenient when studying an atom driven by an external electromagnetic wave.
In the processes of absorption or emission of photons by an atom, the conservation laws hold for the whole isolated system, such as an atom plus a photon. Therefore the motion of the electron in the process of photon absorption or emission is always accompanied by motion of the nucleus, and, because the mass of the nucleus is always finite, the energy spectra of hydrogen-like atoms must depend on the nuclear mass.
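A standard way to capture this dependence is to correct the Rydberg constant for the finite nuclear mass, replacing the electron mass by the reduced mass of the electron–nucleus pair. With m_e the electron mass, M the nuclear mass, and R∞ the constant for an infinitely heavy nucleus:

$$R_M = \frac{R_\infty}{1 + m_e/M}$$

so the constant appropriate to hydrogen, R_H, is slightly smaller than R∞, and isotopes such as deuterium have slightly shifted spectral lines.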
Rydberg formula
The energy differences between levels in the Bohr model, and hence the wavelengths of emitted or absorbed photons, are given by the Rydberg formula:

$$\frac{1}{\lambda} = Z^2 R\left(\frac{1}{{n'}^{2}} - \frac{1}{n^{2}}\right)$$

where
 λ is the wavelength of the emitted or absorbed light in vacuum,
 Z is the atomic number,
 n′ is the principal quantum number of the lower energy level,
 n is the principal quantum number of the upper energy level, and
 R is the Rydberg constant (approximately 1.09678 × 10⁷ m⁻¹ for hydrogen).
The wavelength will always be positive because n′ is defined as the lower level and so is less than n. This equation is valid for all hydrogen-like species, i.e. atoms having only a single electron, and the particular case of hydrogen spectral lines is given by Z = 1.
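The formula translates directly into code. A minimal Python sketch (using the CODATA value of R∞ for an infinitely heavy nucleus, so the printed wavelengths differ from hydrogen's by a fraction of a nanometre) reproduces the Balmer wavelengths and the Z² scaling for hydrogen-like ions:

```python
R_INF = 1.0973731568e7  # Rydberg constant for an infinitely heavy nucleus, m^-1

def wavelength_nm(n_lower, n_upper, Z=1):
    """Vacuum wavelength of the photon emitted in the n_upper -> n_lower
    transition of a hydrogen-like species with atomic number Z."""
    inv_lam = Z**2 * R_INF * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_lam  # metres -> nanometres

# Balmer series (n' = 2): the first four lines are the visible H-alpha
# through H-delta, roughly 656, 486, 434 and 410 nm.
for n in range(3, 7):
    print(f"{n} -> 2: {wavelength_nm(2, n):.1f} nm")

# He+ (Z = 2): the 6 -> 4 Pickering-Fowler line nearly coincides with H-alpha.
print(f"He+ 6 -> 4: {wavelength_nm(4, 6, Z=2):.1f} nm")
```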
Series
Lyman series (n′ = 1)
In the Bohr model, the Lyman series includes the lines emitted by transitions of the electron from an outer orbit of quantum number n > 1 to the 1st orbit of quantum number n' = 1.
The series is named after its discoverer, Theodore Lyman, who discovered the spectral lines from 1906–1914. All the wavelengths in the Lyman series are in the ultraviolet band.
Balmer series (n′ = 2)
The Balmer series includes the lines due to transitions from an outer orbit n > 2 to the orbit n' = 2.
Named after Johann Balmer, who discovered the Balmer formula, an empirical equation to predict the Balmer series, in 1885. Balmer lines are historically referred to as "H-alpha", "H-beta", "H-gamma" and so on, where H is the element hydrogen. Four of the Balmer lines are in the technically "visible" part of the spectrum, with wavelengths longer than 400 nm and shorter than 700 nm. Parts of the Balmer series can be seen in the solar spectrum. H-alpha is an important line used in astronomy to detect the presence of hydrogen.
Paschen series (Bohr series, n′ = 3)
Named after the German physicist Friedrich Paschen who first observed them in 1908. The Paschen lines all lie in the infrared band. This series overlaps with the next (Brackett) series, i.e. the shortest line in the Brackett series has a wavelength that falls among the Paschen series. All subsequent series overlap.
Brackett series (n′ = 4)
Named after the American physicist Frederick Sumner Brackett, who first observed the spectral lines in 1922. The spectral lines of the Brackett series lie in the far infrared band.
Pfund series (n′ = 5)
Experimentally discovered in 1924 by August Herman Pfund.
Humphreys series (n′ = 6)
Discovered in 1953 by American physicist Curtis J. Humphreys.
Further series (n′ > 6)
Further series are unnamed, but follow the same pattern and equation as dictated by the Rydberg equation. Series are increasingly spread out and occur at increasing wavelengths. The lines are also increasingly faint, corresponding to increasingly rare atomic events. The seventh series of atomic hydrogen was first demonstrated experimentally at infrared wavelengths in 1972 by Peter Hansen and John Strong at the University of Massachusetts Amherst.
Extension to other systems
The concepts of the Rydberg formula can be applied to any system with a single particle orbiting a nucleus, for example a He+ ion or a muonium exotic atom. The equation must be modified based on the system's Bohr radius; emissions will be of a similar character but at a different range of energies. The Pickering–Fowler series was originally attributed to an unknown form of hydrogen with half-integer transition levels by both Pickering and Fowler, but Bohr correctly recognised them as spectral lines arising from the He+ ion.
All other atoms have at least two electrons in their neutral form and the interactions between these electrons makes analysis of the spectrum by such simple methods as described here impractical. The deduction of the Rydberg formula was a major step in physics, but it was long before an extension to the spectra of other elements could be accomplished.
See also
Astronomical spectroscopy
The hydrogen line (21 cm)
Lamb shift
Moseley's law
Quantum optics
References
External links
Hydrogen physics
Emission spectroscopy
Hydrogen | Hydrogen spectral series | [
"Physics",
"Chemistry"
] | 1,344 | [
"Emission spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
3,071,743 | https://en.wikipedia.org/wiki/Star%20Trek%20%282009%20film%29 | Star Trek is a 2009 American science fiction action film directed by J. J. Abrams and written by Roberto Orci and Alex Kurtzman. It is the 11th film in the Star Trek franchise, and is also a reboot that features the main characters of the original Star Trek television series portrayed by a new cast, as the first in the rebooted film series. The film follows James T. Kirk (Chris Pine) and Spock (Zachary Quinto) aboard the USS Enterprise as they combat Nero (Eric Bana), a Romulan from their future who threatens the United Federation of Planets. The story takes place in an alternate reality that features both an alternate birth location for James T. Kirk and further alterations in history stemming from the time travel of both Nero and the original series Spock (Leonard Nimoy). The alternate reality was created in an attempt to free the film and the franchise from established continuity constraints while simultaneously preserving original story elements.
The idea for a prequel film which would follow the Star Trek characters during their time in Starfleet Academy was first discussed by series creator Gene Roddenberry in 1968. The concept resurfaced in the late 1980s, when it was postulated by Harve Bennett as a possible plotline for what would become Star Trek VI: The Undiscovered Country, but it was rejected in favor of other projects by Roddenberry. Following the critical and commercial failure of Star Trek: Nemesis and the cancellation of Star Trek: Enterprise, the franchise's executive producer Rick Berman and screenwriter Erik Jendresen wrote an unproduced film titled Star Trek: The Beginning, which would take place after Enterprise. After the separation of Viacom and CBS Corporation in 2005, former Paramount Pictures president Gail Berman convinced CBS to allow Paramount to produce a new film in the franchise. Orci and Kurtzman were soon approached to write the film, and Abrams was approached to direct it. Kurtzman and Orci used inspiration from novels and graduate school dissertations, as well as the series itself. Principal photography occurred between November 2007 and March 2008, in locations around California and Utah. Abrams wanted to avoid using greenscreen, and preferred sets and locations instead. Heavy secrecy surrounded the film's production, which proceeded under the fake working title Corporate Headquarters. Industrial Light & Magic used digital ships for the film, as opposed to miniatures used in most of the previous films in the franchise. Production for the film concluded by the end of 2008.
Star Trek was heavily promoted in the months preceding its release; pre-release screenings for the film premiered in select cities around the world, including Austin, Sydney, and Calgary. It was released in the United States on May 8, 2009, to critical acclaim. The film was a box office success, grossing over $385.7 million worldwide against its $150 million production budget. It was nominated for several awards, including four at the 82nd Academy Awards, winning Best Makeup—the only Academy Award a Star Trek film has won. It was followed by the sequels Star Trek Into Darkness (2013) and Star Trek Beyond (2016).
Plot
In 2233, the Federation starship USS Kelvin investigates a "lightning storm" in space. A Romulan ship, Narada, emerges from the storm and attacks the Kelvin, then demands that the Kelvin's Captain Robau come aboard to negotiate a truce. Robau is questioned about the current stardate and an "Ambassador Spock", whom he does not recognize. Narada's commander, Nero, kills him, and resumes attacking the Kelvin. George Kirk, the Kelvin's first officer, orders the ship's personnel, including his pregnant wife Winona, to abandon ship while he pilots the Kelvin on a collision course with Narada, since the Kelvin's autopilot is disabled. While Kirk sacrifices his life, Winona gives birth to James Tiberius Kirk.
Twenty-two years later on the planet Vulcan, a young Spock is admitted to the Vulcan Science Academy. Realizing that the Academy views his human mother, Amanda, as a "disadvantage", he joins Starfleet instead. On Earth, following a bar fight with Starfleet cadets accompanying Nyota Uhura, an adult Kirk meets Captain Christopher Pike, who encourages him to enlist in Starfleet Academy. There, Kirk meets and befriends doctor Leonard McCoy. Three years later, Commander Spock accuses Kirk of cheating during the Kobayashi Maru simulation. Kirk argues that cheating was acceptable because the simulation was designed to be unbeatable. The disciplinary hearing is interrupted by a distress signal from Vulcan. With the primary fleet out of range, the cadets are mobilized. McCoy and Kirk board Pike's ship, the Enterprise.
Realizing that the "lightning storm" observed near Vulcan is similar to the one that occurred when he was born, Kirk convinces Pike that the signal is a trap. Arriving, the Enterprise finds the fleet destroyed and Narada drilling into Vulcan's core. Narada attacks Enterprise and Pike surrenders, delegating command of the ship to Spock and promoting Kirk to first officer. Kirk, Hikaru Sulu, and Chief Engineer Olson perform a space jump onto the drilling platform. While Olson is killed mid-jump, Kirk and Sulu disable the drill, but are unable to stop Nero launching "red matter" into Vulcan's core, forming an artificial black hole that destroys Vulcan. The Enterprise rescues Spock's father, Sarek, and the high council before Vulcan's destruction, but Amanda falls to her death before the transporter can lock onto her. As Narada approaches Earth, Nero tortures Pike to gain access to Earth's defense codes.
Spock maroons Kirk on Delta Vega after he attempts mutiny. There, Kirk encounters an older Spock from an alternate timeline, who explains that he and Nero are from 2387. In the future, Romulus was threatened by a supernova, which Spock attempted to stop with red matter. His plan failed, resulting in Nero's family perishing along with Romulus, while the Narada and Spock's vessel were caught in the black hole and sent back in time. They were sent back 25 years apart, during which time Nero attacked the Kelvin, changing history and creating a parallel universe. After Spock's arrival, Nero stranded him on Delta Vega to watch Vulcan's destruction. Reaching a Starfleet outpost, Kirk and the elder Spock meet Montgomery "Scotty" Scott, who devises a trans-warp transporter system, allowing him and Kirk to beam onto Enterprise.
Following the elder Spock's advice, Kirk provokes younger Spock into attacking him, forcing Spock to recognize himself as emotionally compromised and relinquish command to Kirk. After talking with Sarek, Spock decides to help Kirk. While Enterprise hides within the gas clouds of Titan, Kirk and Spock beam aboard Narada. Kirk fights Nero and rescues Pike, while Spock uses the elder Spock's ship to destroy the drill. Spock leads Narada away from Earth and sets his ship to collide with Narada. Enterprise beams Kirk, Pike, and Spock aboard. The older Spock's ship and Narada collide, igniting the red matter. Narada is consumed in a black hole that Enterprise escapes.
Kirk is promoted to captain and given command of Enterprise, while Pike is promoted to rear admiral. Spock encounters his older self, who persuades him to continue serving in Starfleet, encouraging him to do what feels right rather than what is logical. Spock becomes first officer under Kirk's command.
Cast
Chris Pine as James T. Kirk: Pine described his first audition as awful, because he could not take himself seriously as a leader. Chris Pratt, Timothy Olyphant and Sebastian Stan were either screen-tested or auditioned for the role. Abrams did not see Pine's first audition, and it was only after Pine's agent met Abrams' wife that the director decided to give him another audition opposite Quinto. Quinto was supportive of Pine's casting because they knew each other as they worked out at the same gym. After getting the part, Pine sent William Shatner a letter and received a reply containing Shatner's approval. Pine watched classic episodes and read encyclopedias about the Star Trek universe, but stopped as he felt weighed down by the feeling he had to copy Shatner. Pine felt he had to show Kirk's "humor, arrogance and decisiveness," but not Shatner's speech pattern, which would have bordered on imitation. Pine said when watching the original series, he was also struck by how Shatner's performance was characterized by humor. Instead, Pine chose to incorporate elements of Tom Cruise from Top Gun and Harrison Ford's portrayals of Indiana Jones and Han Solo.
Jimmy Bennett as Young Kirk.
Zachary Quinto as Spock: Quinto expressed interest in the role because of the duality of Spock's half-human, half Vulcan heritage, and how "he is constantly exploring that notion of how to evolve in a responsible way and how to evolve in a respectful way. I think those are all things that we as a society, and certainly the world, could implement." He mentioned he heard about the new film and revealed his interest in the role in a December 2006 interview with the Pittsburgh Post-Gazette: the article was widely circulated and he attracted Abrams' interest. For the audition, Quinto wore a blue shirt and flattened his hair down to feel more like Spock. He bound his fingers to practice the Vulcan salute, shaved his eyebrows and grew and dyed his hair for the role. He conveyed many of Spock's attributes, such as his stillness and the way Nimoy would hold his hands behind his back. Quinto commented the physical transformation aided in portraying an alien, joking "I just felt like a nerd. I felt like I was 12 again. You look back at those pictures and you see the bowl cut. There's no question I was born to play the Spock role. I was sporting that look for a good four or five years." Adrien Brody had discussed playing the role with the director before Quinto was cast.
Jacob Kogan as Young Spock.
Leonard Nimoy as Spock Prime: Nimoy reprises the role of the older Spock from the original Star Trek timeline. He was a longtime friend of Abrams' parents, but became better acquainted with Abrams during filming. Although Quinto watched some episodes of the show during breaks in filming, Nimoy was his main resource in playing Spock. Abrams and the writers met Nimoy at his house; writer Roberto Orci recalled that the actor gave a "Who are you guys and what are you up to?" vibe before being told how important he was to them. He was silent, and Nimoy's wife Susan Bay told the creative team he had remained in his chair after their conversation, emotionally overwhelmed by his decision after turning down many opportunities to revisit the role. Had Nimoy disliked the script, production would have been delayed for it to be rewritten. Nimoy later said, "This is the first and only time I ever had a filmmaker say, 'We cannot make this film without you and we won't make it without you'". He was "genuinely excited" by the script's scope and its detailing of the characters' backstories, saying, "We have dealt with [Spock's being half-human, half-Vulcan], but never with quite the overview that this script has of the entire history of the character, the growth of the character, the beginnings of the character and the arrival of the character into the Enterprise crew." Abrams commented, "It was surreal to direct him as Spock, because what the hell am I doing there? This guy has been doing it for forty years. It's like, 'I think Spock would...'" Leonard Nimoy voices the "Space, the final frontier..." lines at the end of the film, lines which were voiced by William Shatner in the original TV series and original cast films.
Karl Urban as Dr. Leonard "Bones" McCoy. Like Pine, Urban said of taking on the role that "it is a case of not doing some sort of facsimile or carbon copy, but really taking the very essence of what DeForest Kelley has done and honoring that and bringing something new to the table". Urban has been a fan of the show since he was seven years old and actively pursued the role after rediscovering the series on DVD with his son. Urban was cast at his first audition, which was two months after his initial meeting with Abrams. He said he was happy to play a comedy-heavy role, something he had not done since The Price of Milk, because he was tired of action-oriented roles. When asked why McCoy is so cantankerous, Urban joked the character might be a "little bipolar actually!" Orci and Kurtzman had collaborated with Urban on Xena: Warrior Princess, in which he played Cupid and Caesar.
Zoe Saldaña as Nyota Uhura: Abrams had liked her work and requested that she play the role. Saldaña never saw the original series, though she had played a Trekkie in The Terminal (2004), but agreed to play the role after Abrams had complimented her. "For an actor, that's all you need, that's all you want. To get the acknowledgment and respect from your peers," she said. She met with Nichelle Nichols, who explained to her how she had created Uhura's background, and also named the character. Saldaña's mother was a Star Trek fan and sent her voice mails during filming, giving advice on the part. Sydney Tamiia Poitier also auditioned for the part. The film officially establishes the character's first name, which had never been previously uttered on TV or in film.
Simon Pegg as Montgomery "Scotty" Scott: Abrams contacted Pegg by e-mail, offering him the part. To perform Scotty's accent, Pegg was assisted by his wife Maureen, who is from Glasgow, although Pegg said Scotty was from Linlithgow and wanted to bring a more East Coast sound to his accent, so his resulting performance is a mix of both accents that leans towards the West sound. He was also aided by Tommy Gormley, the film's Glaswegian first assistant director. Pegg described Scotty as a positive Scottish stereotype, noting "Scots are the first people to laugh at the fact that they drink and fight a bit", and that Scotty comes from a long line of Scots with technical expertise, such as John Logie Baird and Alexander Graham Bell. Years before, Pegg's character in Spaced joked that every odd-numbered Star Trek film being "shit" was a fact of life. Pegg noted "Fate put me in the movie to show me I was talking out of my ass."
John Cho as Hikaru Sulu: Abrams was concerned about casting a Korean-American as a Japanese character, but George Takei explained to the director that Sulu was meant to represent all of Asia on the Enterprise, so Abrams went ahead with Cho. Cho acknowledged being an Asian-American, "there are certain acting roles that you are never going to get, and one of them is playing a cowboy. [Playing Sulu] is a realization of that dream — going into space." He cited the masculinity of the character as being important to him, and spent two weeks fight training. Cho suffered an injury to his wrist during filming, although a representative assured it was "no big deal". James Kyson Lee was interested in the part, but because Quinto was cast as Spock, the producers of the TV show Heroes did not want to lose another cast member for three months.
Anton Yelchin as Pavel Chekov: As with the rest of the cast, Yelchin was allowed to choose which elements to adopt from his predecessor's performances. Yelchin decided to carry on Walter Koenig's speech pattern of replacing "v"s with "w"s, although he and Abrams felt this was a trait more typical of Polish accents than Russian ones. He described Chekov as an odd character, being a Russian who was brought on to the show "in the middle of the Cold War". He recalled a "scene where they're talking to Apollo [who says], 'I am Apollo.' And Chekov is like, 'And I am the czar of all Russias.' [...] They gave him these lines. I mean he really is the weirdest, weirdest character."
Eric Bana as Captain Nero: The film's time-traveling Romulan villain. Bana shot his scenes toward the end of filming. He was "a huge Trekkie when [he] was a kid", but had not seen the films. Even if he were "crazy about the original series", he would not have accepted the role unless he liked the script, which he deemed "awesome" once he read it. Bana knew Abrams because they coincidentally shared the same agent. Bana improvised the character's speech patterns.
Bruce Greenwood as Christopher Pike: The captain of the Enterprise.
Ben Cross as Sarek: Spock's father.
Winona Ryder as Amanda Grayson: Spock's mother.
Clifton Collins Jr. as Ayel: Nero's first officer.
Chris Hemsworth as George Kirk: Kirk's father, who died aboard the USS Kelvin while battling the Romulans. Before Hemsworth was cast, Abrams met with Matt Damon about playing the role.
Jennifer Morrison as Winona Kirk: Kirk's mother.
Rachel Nichols as Gaila: An Orion Starfleet cadet.
Faran Tahir as Richard Robau: Captain of the USS Kelvin.
Deep Roy as Keenser: Scotty's alien assistant on Delta Vega.
Greg Ellis as Chief Engineer Olson: The redshirt who is killed during the space jump.
Tyler Perry as Admiral Richard Barnett: The head of Starfleet Academy.
Amanda Foreman as Hannity, a Starfleet officer on the Enterprise bridge.
Spencer Daniels as Johnny, a childhood friend of Kirk. Daniels was set to play his older brother, George Samuel "Sam" Kirk, Jr., but the majority of his scenes were cut and James Kirk's callout was overdubbed.
Victor Garber as Klingon Interrogator, the officer who tortures Nero during his time on Rura Penthe. His scene was cut from the film and was featured on the DVD.
Chris Doohan, the son of the original Scotty, James Doohan, makes a cameo appearance in the transporter room. Pegg e-mailed Doohan about the role of Scotty and promised him that his performance "would be a complete tribute to his father". Chris Doohan previously cameoed in Star Trek: The Motion Picture. Greg Grunberg has a vocal cameo as Kirk's alcoholic stepfather. Grunberg was up for the role of Olson but dropped out due to a scheduling conflict. Grunberg was also interested in playing Harry Mudd, who was in an early draft of the script. Brad William Henke filmed scenes in the role which were cut out. Diora Baird appears in a deleted scene as an Orion cadet whom Kirk mistakes for Gaila. Star Trek: Enterprise star Dominic Keating also auditioned for the role. Paul McGillion auditioned for Scotty, and he impressed producers enough that he was given another role as a 'Barracks Leader'. Abrams offered Ricky Gervais a role in the film, but he turned it down due to being unfamiliar with the series. James Cawley, producer and star of the webseries Star Trek: New Voyages, appears as a Starfleet officer, while Pavel Lychnikoff and Lucia Rijker play Romulans, Lychnikoff a Commander and Rijker a CO. W. Morgan Sheppard, who played a Klingon in Star Trek VI: The Undiscovered Country, appears in this film as the head of the Vulcan Science Council. Wil Wheaton, known for portraying Wesley Crusher on Star Trek: The Next Generation, was brought in at the urging of Greg Grunberg to voice several of the other Romulans in the film. Star Trek fan and Carnegie Mellon University professor Randy Pausch (who died on July 25, 2008) cameoed as a Kelvin crew member and has a line of dialogue. Majel Barrett, the widow of Star Trek creator Gene Roddenberry, reprised her role as the voice of the Enterprise's computer, which she completed two weeks before her death on December 18, 2008. The film was dedicated to her as well as to Gene, to whom it was always going to be dedicated as a sign of respect.
Orci and Kurtzman wrote a scene for William Shatner, in which old Spock gives his younger self a recorded message by Kirk from the previous timeline. "It was basically a Happy Birthday wish knowing that Spock was going to go off to Romulus, and Kirk would probably be dead by that time," and it would have transitioned into Shatner reciting "where no man has gone before". But Shatner wanted to share Nimoy's major role, and did not want a cameo, despite his character's death in Star Trek Generations. He suggested the film canonize his novels in which Kirk is resurrected, but Abrams decided that if his character accompanied Nimoy's, it would have become a film about the resurrection of Kirk rather than about introducing the new versions of the characters. Nimoy disliked the character's death in Generations, but also felt resurrecting Kirk would be detrimental to this film.
Nichelle Nichols suggested playing Uhura's grandmother, but Abrams could not write this in due to the Writers Guild strike. Abrams was also interested in casting Keri Russell, but they deemed the role he had in mind for her too similar to her other roles.
Production
Development
As early as the 1968 World Science Fiction Convention, Star Trek creator Roddenberry had said he was going to make a film prequel to the television series. But the prequel concept did not resurface until the late 1980s, when Ralph Winter and Harve Bennett submitted a proposal for a prequel during development of the fourth film. Roddenberry rejected Bennett's prequel proposal in 1991, after the completion of Star Trek V: The Final Frontier. Then David Loughery wrote a script entitled The Academy Years, but it was shelved in light of objections from Roddenberry and the fanbase. The film that was commissioned instead ended up being Star Trek VI: The Undiscovered Country. In February 2005, after the financial failure of the tenth film, Star Trek: Nemesis (2002), and the cancellation of the television series Star Trek: Enterprise, the franchise's executive producer Rick Berman and screenwriter Erik Jendresen began developing a new film entitled Star Trek: The Beginning. It was to revolve around a new set of characters, led by Kirk's ancestor Tiberius Chase, and be set during the Earth-Romulan War—after the events of Enterprise but before the events of the original series.
In 2005, Viacom, which owned Paramount Pictures, separated from CBS Corporation, which retained Paramount's television properties, including ownership of the Star Trek brand. Gail Berman (no relation to executive producer Rick Berman), then president of Paramount, convinced CBS' chief executive, Leslie Moonves, to allow them eighteen months to develop a new Star Trek film, otherwise Paramount would lose the film rights. Berman approached Mission: Impossible III writers Roberto Orci and Alex Kurtzman for ideas on the new film, and after that film had completed shooting she asked its director, Abrams, to produce it. Abrams, Orci, and Kurtzman, plus producers Damon Lindelof and Bryan Burk, felt the franchise had explored enough of what took place after the original series. Orci and Lindelof considered themselves Trekkies and felt some of the Star Trek novels had canonical value, although Roddenberry never considered the novels to be canon; Kurtzman was a casual fan, while Burk was not one. Abrams' company Bad Robot produced the film with Paramount, marking the first time another company had financed a Star Trek film. Bill Todman, Jr.'s Level 1 Entertainment also co-produced the film, but, during 2008, Spyglass Entertainment replaced them as financial partner.
In an interview, Abrams said that he had never seen Star Trek: Nemesis because he felt the franchise had "disconnected" from the original series. For him, he said, Star Trek was about Kirk and Spock, and the other series were like "separate space adventure[s] with the name Star Trek". He also acknowledged that as a child he had actually preferred the Star Wars movies. He noted that his general knowledge of Star Trek made him well suited to introduce the franchise to newcomers, and that, being an optimistic person, he would make Star Trek an optimistic film, which would be a refreshing contrast to the likes of The Dark Knight. He added that he loved the focus on exploration in Star Trek and the idea of the Prime Directive, which forbids Starfleet from interfering in the development of primitive worlds; but that, because of the budgetary limitations of the original series, it had "never had the resources to actually show the adventure". He noted that he initially became involved with the project as producer only because he wanted to help Orci, Kurtzman, and Lindelof.
On February 23, 2007, Abrams accepted Paramount's offer to direct the film, after having initially been attached to it solely as a producer. He explained that he had decided to direct the film because, after reading the script, he realized that he "would be so agonizingly envious of whoever stepped in and directed the movie". Orci and Kurtzman said that their aim had been to impress a casual fan like Abrams with their story. Abrams noted that, during filming, he had been nervous "with all these tattooed faces and pointy ears, bizarre weaponry and Romulan linguists, with dialogue about 'Neutral Zones' and 'Starfleet' [but] I knew this would work, because the script Alex and Bob wrote was so emotional and so relatable. I didn't love Kirk and Spock when I began this journey – but I love them now."
Writing
Orci said getting Leonard Nimoy in the film was important. "Having him sitting around a campfire sharing his memories was never gonna cut it" though, and time travel was going to be included in the film from the beginning. Kurtzman added that the time travel creates jeopardy, unlike other prequels where viewers "know how they all died". The writers acknowledged time travel had been overused in the other series, but it served a good purpose in creating a new set of adventures for the original characters before they could completely do away with it in other films. Abrams selected the Romulans as the villains because they had been featured less than the Klingons in the series and thought it would be "fun" to have them meet Kirk before they do in the series. Orci and Kurtzman noted it would feel backward to demonize the Klingons again after they had become heroes in later Star Trek series, and the Romulan presence continues Spock's story from his last chronological appearance in "Unification", an episode of Star Trek: The Next Generation set in 2368. The episode of the original continuity in which Kirk becomes one of the first humans ever to see a Romulan, "Balance of Terror", served as one of the influences for the film. Orci said it was difficult giving a good explanation for the time travel without being gimmicky, like having Nero specifically seek to assassinate Kirk.
Orci noted that while the time travel story allowed them to alter some backstory elements, such as Kirk's first encounter with the Romulans, they could not use it as a crutch to change everything, and they tried to approach the film as a prequel as much as possible. Kirk's service on the Farragut, a major backstory point of the original episode "Obsession", was left out because it was deemed irrelevant to the story of Kirk meeting Spock, although Orci felt nothing in his script precluded it from being part of the new film's backstory. There was a scene involving Kirk meeting Carol Marcus (who is revealed as the mother of his son in Star Trek II: The Wrath of Khan) as a child, but it was dropped because the film needed more time to introduce the core characters. Figuring out ways to get the crew together required some contrivances, which Orci and Kurtzman wanted to have old Spock explain as the timeline mending itself, highlighting the theme of destiny. The line was difficult to write and was ultimately cut.
The filmmakers sought inspiration from novels such as Prime Directive, Spock's World and Best Destiny to fill in gaps unexplained by canon; Best Destiny particularly explores Kirk's childhood and names his parents. One idea that was justified through information from the novels was having Enterprise built on Earth, which was inspired by a piece of fan art of Enterprise being built in a shipyard. Orci had sent the fan art to Abrams to show how realistic the film could be. Orci explained parts of the ship would have to be constructed on Earth because of the artificial gravity employed on the ship and its requirement for sustaining warp speed, and therefore the calibration of the ship's machinery would be best done in the exact gravity well which is to be simulated. They felt free to have the ship built in Iowa because canon is ambiguous as to whether it was built in San Francisco, but this is a result of the time travel rather than something intended to overlap with the original timeline. Abrams noted the continuity of the original series itself was inconsistent at times.
Orci and Kurtzman said they wanted the general audience to like the film as much as the fans, by stripping away "Treknobabble," making it action-packed and giving it the simple title of Star Trek to indicate to newcomers they would not need to watch any of the other films. Abrams saw humor and sex appeal as two integral and popular elements of the show that needed to be maintained. Orci stated being realistic and being serious were not the same thing. Abrams, Burk, Lindelof, Orci and Kurtzman were fans of The Wrath of Khan, and also cited The Next Generation episode "Yesterday's Enterprise" as an influence. Abrams' wife Katie was regularly consulted on the script, as were Orci, Kurtzman and Lindelof's wives, to make the female characters as strong as possible. Katie Abrams' approval of the strong female characters was partly why Abrams signed on to direct.
Orci and Kurtzman read graduate school dissertations on the series for inspiration; they noted comparisons of Kirk, Spock and McCoy to Shakespearean archetypes, and of Kirk and Spock's friendship to that of John Lennon and Paul McCartney. They also noted that, in the creation of this film, they were influenced by Star Wars, particularly in pacing. "I want to feel the space, I want to feel speed and I want to feel all the things that can become a little bit lost when Star Trek becomes very stately," said Orci. Star Wars' influence permeated the way they wrote the action sequences, while Burk noted Kirk and Spock's initially cold relationship mirrors how "Han Solo wasn't friends with anyone when they started on their journey." Spock and Uhura were put in an actual relationship as a nod to early episodes highlighting her interest in him. Orci wanted to introduce strong Starfleet captains, concurring with an interviewer that most captains in other films were "patsies" included to make Kirk look greater by comparison.
USS Kelvin, the ship on which Kirk's father serves, is named after J. J. Abrams' grandfather, as well as the physicist and engineer Lord Kelvin (William Thomson). The Kelvin's captain, Richard Robau (Faran Tahir), is named after Orci's Cuban uncle; Orci theorized the fictional character was born in Cuba and grew up in the Middle East. Another reference to Abrams' previous works is Slusho, which Uhura orders at the bar where she meets Kirk. Abrams created the fictitious drink for Alias, and it reappeared in viral marketing for Cloverfield. Its owner, Tagruato, is also from Cloverfield and appears on a building in San Francisco. The red matter in the film takes the shape of a red ball, an Abrams motif dating back to the pilot of Alias.
Design
The film's production designer was Scott Chambliss, a longtime collaborator with Abrams. Chambliss worked with a large group of concept illustrators, including James Clyne, Ryan Church, creature designer Neville Page, and Star Trek veteran John Eaves. Abrams stated the difficulty of depicting the future was that much of modern technology was inspired by the original show, and made it seem outdated. Thus the production design had to be consistent with the television series but also feel more advanced than the real world technology developed after it. "We all have the iPhone that does more than the communicator," said Abrams. "I feel like there's a certain thing that you can't really hold onto, which is kind of the kitschy quality. That must go if it's going to be something that you believe is real." Prop master Russell Bobbitt collaborated with Nokia on recreating the original communicator, creating a $50,000 prototype. Another prop recreated for the film was the tricorder. Bobbitt brought the original prop to the set, but the actors found it too large to carry when filming action scenes, so technical advisor Doug Brody redesigned it to be smaller. The phaser props were designed as spring-triggered barrels that revolve and glow as the setting switches from "stun" to "kill". An Aptera Typ-1 prototype car was used on location.
Production designer Scott Chambliss maintained the layout of the original bridge, but aesthetically altered it with brighter colors to reflect the optimism of Star Trek. The viewscreen was made into a window onto which images could be projected, to make the space environment palpable. Abrams compared the redesign to the sleek modernist work of Pierre Cardin and the sets from 2001: A Space Odyssey, both from the 1960s. He joked the redesigned bridge made the Apple Store look "uncool". At the director's behest, more railings were added to the bridge to make it look safer, and the set was built on gimbals so its rocking motions when the ship accelerates or is attacked would be more realistic. To emphasize the size of the ship, Abrams chose to give the engine room a highly industrial appearance: he explained to Pegg that he was inspired by Titanic, a sleek ship in which there was an "incredible gut".
Abrams selected Michael Kaplan to design the costumes because he had not seen any of the films, meaning he would approach the costumes from a new angle. For the Starfleet uniforms, Kaplan followed the show's original color-coding, with dark gray (almost black) undershirts and pants, and colored overshirts showing each crew member's position. Command officers wear gold shirts, science and medical officers wear blue, and operations personnel (technicians, engineers, and security) wear red. Kaplan wanted the shirts to be more sophisticated than the originals and chose to have the Starfleet symbol patterned on them. Kirk wears only the undershirt because he is a cadet. Kaplan modelled the Kelvin's uniforms on science fiction films of the 1940s and 1950s, to contrast with the Enterprise-era uniforms based on the ones created in the 1960s. For Abrams, "The costumes were a microcosm of the entire project, which was how to take something that's kind of silly and make it feel real. But how do you make legitimate those near-primary color costumes?"
Lindelof compared the film's Romulan faction to pirates with their bald, tattooed heads and disorganized costuming. Their ship, the Narada, is purely practical, with visible mechanics, as it is a "working ship", unlike the Enterprise, whose crew give a respectable presentation on behalf of the Federation. For the Narada, Chambliss was heavily influenced by the architecture of Antoni Gaudí, who created buildings that appeared to be inside out: by making the ship's exposed wires look like bones or ligaments, it would create a foreboding atmosphere. The ship's interior was made of six pieces that could be rearranged to create different rooms. The Romulan actors had three prosthetics applied to their ears and foreheads, while Bana had a fourth prosthetic for the bitemark on his ear that extends to the back of his character's head. The film's Romulans lack the V-shaped ridges on their foreheads that had been present in all of their depictions outside the original series. Neville Page wanted to honor that history by having Nero's crew ritually scar themselves, forming keloids reminiscent of the 'V'-ridges, but the idea was not developed far enough and was abandoned. Kaplan wanted aged, worn, and rugged clothes for the Romulans because of their mining background and found some greasy-looking fabrics at a flea market. Kaplan tracked down the makers of those clothes, who turned out to be based in Bali, and commissioned them to create his designs.
Barney Burman supervised the makeup for the other aliens: his team had to rush the creation of many of the aliens because, originally, the majority of them were to feature in one scene towards the end of filming. Abrams deemed the scene too similar to the cantina sequence in Star Wars and decided to dot the designs around the film. A tribble was placed in the background of Scotty's introduction. Both digital and physical makeup were used for the aliens.
Filming
Principal photography for the film began on November 7, 2007, and concluded on March 27, 2008; however, second-unit filming occurred in Bakersfield, California, in April 2008, which stood in for Kirk's childhood home in Iowa. Filming was also done at the City Hall of Long Beach, California; the San Rafael Swell in Utah; and California State University, Northridge in Los Angeles (which was used for establishing shots of students at Starfleet Academy). A parking lot outside Dodger Stadium was used for the ice planet of Delta Vega and the Romulan drilling rig on Vulcan. The filmmakers expressed an interest in Iceland for scenes on Delta Vega, but decided against it: Chambliss enjoyed the challenge of filming scenes with snow in southern California. Other Vulcan exteriors were shot at Vasquez Rocks, a location that was used in various episodes of the original series. A Budweiser plant in Van Nuys was used for the Enterprise's engine room, while a Long Beach power plant was used for the Kelvin's engine room.
Following the initiation of the 2007–2008 Writers Guild of America strike on November 5, 2007, Abrams, himself a WGA member, told Variety that while he would not render writing services for the film and intended to walk the picket line, he did not expect the strike to impact his directing of the production. In the final few weeks before the strike and start of production, Abrams and Lindelof polished the script for a final time. Abrams was frustrated that he was unable to alter lines during the strike, whereas normally they would have been able to improvise new ideas during rehearsal, although Lindelof acknowledged they could dub some lines in post-production. Orci and Kurtzman were able to stay on set without strikebreaking because they were also executive producers on the film; they could "make funny eyes and faces at the actors whenever they had a problem with the line and sort of nod when they had something better". Abrams was able to alter a scene where Spock combats six Romulans from a fistfight to a gunfight, having decided there were too many physical brawls in the film.
The production team maintained heavily enforced security around the film. Karl Urban revealed, "[There is a] level of security and secrecy that we have all been forced to adopt. I mean, it's really kind of paranoid crazy, but sort of justified. We're not allowed to walk around in public in our costumes and we have to be herded around everywhere in these golf carts that are completely concealed and covered in black canvas. The security of it is immense. You feel your freedom is a big challenge." Actors like Jennifer Morrison were only given the scripts of their own scenes. The film's shooting script was fiercely protected even among the main cast. Simon Pegg said, "I read [the script] with a security guard near me – it's that secretive." The film used the fake working title of Corporate Headquarters. Some of the few people outside the production allowed to visit the set included Rod Roddenberry, Ronald D. Moore, Jonathan Frakes, Walter Koenig, Nichelle Nichols, Ben Stiller, Tom Cruise and Steven Spielberg (who had partially convinced Abrams to direct because he liked the script, and who even advised on the action scenes during his visit).
When the shoot ended, Abrams gave the cast small boxes containing little telescopes, which allowed them to read the name of each constellation they were pointed at. "I think he just wanted each of us to look at the stars a little differently," said John Cho. After the shoot, Abrams cut some scenes of Kirk and Spock as children, including one showing the latter as a baby, as well as a subplot involving Nero being imprisoned by and escaping from the Klingons. This explanation for Nero's absence during Kirk's life confused many of those to whom Abrams screened the film. Other cut scenes explained that the teenage Kirk stole his stepfather's antique car because the stepfather had forced him to clean it before an auction, and that the Orion cadet Kirk seduced at the Academy worked in the operations division; in a further cut scene she agrees to open the e-mail containing the patch that allows him to pass the Kobayashi Maru test.
Abrams chose to shoot the film in the anamorphic format on 35mm film after discussions about whether it should be shot in high-definition digital video. Cinematographer Dan Mindel and Abrams agreed the choice gave the film a big-screen feel and the realistic, organic look they wanted for the film's setting. Abrams and Mindel used lens flares throughout filming to create an optimistic atmosphere and a feeling that activity was taking place off-camera, making the Star Trek universe feel more real. "There's something about those flares, especially in a movie that potentially could be incredibly sterile and CG and overly controlled. There's just something incredibly unpredictable and gorgeous about them." Mindel would create more flares by shining a flashlight or pointing a mirror at the camera lens, or by using two cameras simultaneously and therefore two lighting set-ups. Editor Mary Jo Markey later said in an interview that Abrams had not told her (or fellow editor Maryann Brandon) about the flares, and that they initially contacted the film developers asking why the footage seemed overexposed.
Visual effects
Industrial Light & Magic and Digital Domain were among several companies that created over 1,000 special effect shots. The visual effects supervisors were Roger Guyett, who collaborated with Abrams on Mission: Impossible III and also served as second unit director, and Russell Earl. Abrams avoided shooting only against bluescreen and greenscreen, because it "makes me insane", using them instead to extend the scale of sets and locations. The Delta Vega sequence required the mixing of digital snow with real snow.
Star Trek was the first film ILM worked on using entirely digital ships. Enterprise was intended by Abrams to be a merging of its design in the series and the refitted version from the original film. Abrams had fond memories of the revelation of the Enterprise's refit in Star Trek: The Motion Picture, because it was the first time the ship felt tangible and real to him. The iridescent pattern on the ship from The Motion Picture was maintained to give the ship depth, while model maker Roger Goodson also applied the "Aztec" pattern from The Next Generation. Goodson recalled Abrams also wanted to bring a "hot rod" aesthetic to the ship. Effects supervisor Roger Guyett wanted the ship to have more moving parts, which stemmed from his childhood dissatisfaction with the ship's design: the new Enterprise's dish can expand and move, while the fins on its engines split slightly when they begin warping. Enterprise was originally redesigned by Ryan Church using features of the original, but was doubled in length to make it seem "grander", while the Romulan Narada is five miles long and several miles wide. The filmmakers had to simulate lens flares on the ships in keeping with the film's cinematography.
Carolyn Porco of NASA was consulted on the planetary science and imagery. The animators realistically recreated what an explosion would look like in space: short blasts, which suck inward and leave debris from a ship floating. For shots of an imploding planet, the same explosion program was used to simulate it breaking up, while the animators could manually composite multiple layers of rocks and wind sucking into the planet. Unlike other Star Trek films and series, the transporter beam effects swirl rather than speckle. Abrams conceived the redesign to emphasize the notion of transporters as beams that can pick up and move people, rather than a signal composed of scrambled atoms.
Lola Visual Effects worked on 48 shots, including some animation applied to Bana and Nimoy. Bana's character required extensive damage to his teeth, significant enough that his mouth was completely replaced in some shots. Nimoy's mouth was reanimated in his first scene with Kirk following a rerecording session. The filmmakers had filmed Nimoy when he rerecorded his lines so they could rotoscope his mouth into the film, even recreating the lighting conditions, but they realized they had to digitally recreate his lips because of the bouncing light created by the campfire.
Sound effects
The sound effects were designed by Star Wars veteran Ben Burtt. Whereas the phaser blast noises from the television series were derived from The War of the Worlds (1953), Burtt made his phaser sounds more like his blasters from Star Wars, because Abrams' depiction of phasers was closer to the blasters' bullet-like fire than to the steady beams of energy in previous Star Trek films. Burtt reproduced the classic photon torpedo and warp drive sounds: he tapped a long spring against a contact microphone and combined that with cannon fire. Burtt used a 1960s oscillator to give a musical and emotional hum to the warping and transporting sounds.
Music
Michael Giacchino, Abrams' most frequent collaborator, composed the music for Star Trek. He kept the original theme by Alexander Courage for the end credits, which Abrams said symbolized the momentum of the crew coming together. Giacchino admitted personal pressure in scoring the film, as "I grew up listening to all of that great [Trek] music, and that's part of what inspired me to do what I'm doing [...] You just go in scared. You just hope you do your best. It's one of those things where the film will tell me what to do." Scoring took place at the Sony Scoring Stage with a 107-piece orchestra and 40-person choir. An erhu, performed by Karen Han, was used for the Vulcan themes, while a distorted recording was used for the Romulans. Varèse Sarabande, the record label responsible for releasing albums of Giacchino's previous scores for Alias, Lost, Mission: Impossible III, and Speed Racer, released the soundtrack for the film on May 5. The music for the theatrical trailers was composed by Two Steps from Hell.
Release
Marketing
The first teaser trailer debuted in theaters with Cloverfield on January 18, 2008, and showed Enterprise under construction. Abrams himself directed the first part of the trailer, in which a welder removes his goggles; professional welders were hired for the teaser. The voices of the 1960s played over the trailer were intended to link the film to the present day; John F. Kennedy in particular was chosen because of similarities with the character of James T. Kirk and because he is seen to have "kicked off" the Space Race. Orci explained: "If we do indeed have a Federation, I think Kennedy's words will be inscribed in there someplace." Star Trek's later trailers would win four awards, including Best in Show, at the tenth annual Golden Trailer Awards.
Paramount faced two obstacles in promoting the film: the unfamiliarity of the "MySpace generation" with the franchise and the relatively weak international performance of the previous films. Six months before the film's release, Abrams toured Europe and North America with 25 minutes of footage. Abrams noted the large-scale campaign started unusually early, but this was because the release delay allowed him to show more completed scenes than normal. The director preferred promoting his projects quietly, but agreed Paramount needed to remove Star Trek's stigma. Abrams would exaggerate to the press his childhood preference for other shows over Star Trek, with statements like "I'm not a Star Trek fan" and "this movie is not made for Star Trek fans necessarily". Orci compared Abrams' approach to The Next Generation episode "A Matter of Honor", in which William Riker is stationed aboard a Klingon vessel. "On that ship when someone talks back to you, you would have to beat them down or you lose the respect of your crew, which is protocol, whereas on a Federation ship that would be a crime. So we have to give [J. J. Abrams] a little bit of leeway, when he is traveling the 'galaxy' over there where they don't know Trek, to say the things that need to be said in order to get people onto our side."
Promotional partners on the film included Nokia, Verizon Wireless, Esurance, Kellogg's, Burger King and Intel Corporation, as well as various companies specializing in home decorating, apparel, jewelry, gift items and "Tiberius", "Pon Farr" and "Red Shirt" fragrances. Playmates Toys, which had owned the Star Trek toy license until 2000, also held the merchandise rights for the new film. The first wave of toys was released in March and April 2009, and Playmates hoped to continue the line into 2010. The first wave consisted of 3.75", 6" and 12" action figures, an Enterprise replica, prop toys and play sets. To recreate the whole bridge, one would have to buy more 3.75" figures, which come with chairs and consoles to add to the main set consisting of Kirk's chair, the floor, the main console and the viewscreen. Master Replicas, Mattel, Hasbro and Fundex Games promoted the film via playing cards, Monopoly, UNO, Scrabble, Magic 8 Ball, Hot Wheels, Tyco R/C, 20Q, Scene It? and Barbie lines, some of them based on previous Star Trek iterations rather than the film. CBS also created a merchandising line based around Star Trek caricatures named "Quogs".
Theatrical
In February 2008, Paramount announced it would move Star Trek from its December 25, 2008, release date to May 8, 2009, as the studio felt more people would see the film during summer than winter. The film was practically finished by the end of 2008. Paramount's decision came about after executives visited the set and watched dailies, realizing the film could appeal to a much broader audience. Even though the filmmakers liked the Christmas release date, Damon Lindelof acknowledged the delay would allow more time to perfect the visual effects. The months-long gap between the completion of production and the release meant Alan Dean Foster was allowed to watch the whole film before writing the novelization, although the novel contains scenes absent from the final edit. Quinto narrated the audiobook.
A surprise public screening was held on April 6, 2009, at the Alamo Drafthouse theater in Austin, Texas, hosted by writers Roberto Orci and Alex Kurtzman and producer Damon Lindelof. The showing was publicized as a screening of Star Trek II: The Wrath of Khan followed by a ten-minute preview of the new Star Trek film. A few minutes into Khan, the film appeared to melt, and Nimoy appeared on stage with Orci, Kurtzman and Lindelof, asking the audience, "Wouldn't you rather see the new movie?" Following the surprise screening in Texas, the first of many premieres across the world was held at the Sydney Opera House on April 7, 2009. For almost two years, the town of Vulcan, Alberta, had campaigned to host the film's premiere, but because it had no theater, Paramount instead arranged a lottery in which 300 winning residents would be taken to a prerelease screening in Calgary.
Home media
The film was released on DVD and Blu-ray on November 17, 2009, in North America, November 16 in the United Kingdom and October 26 in Australia and New Zealand. In Sweden and Germany, it was released on November 4. First-week sales stood at 5.7 million DVDs along with 1.1 million Blu-ray Discs, giving Paramount Pictures its third chart-topping release in five weeks, following Transformers: Revenge of the Fallen and G.I. Joe: The Rise of Cobra.
Reception
Box office
Official screenings in the United States started at 7 pm on May 7, 2009, grossing $4 million on opening day. By the end of the weekend, Star Trek had opened with $79,204,300, as well as $35,500,000 from other countries. Both adjusted and unadjusted for inflation, it beat Star Trek: First Contact for the largest American opening of a Star Trek film. The film made US$8.5 million from its IMAX screenings, breaking The Dark Knight's $6.3 million IMAX opening record. The film is the highest-grossing of the entire Star Trek film franchise in the United States and Canada, eclipsing The Voyage Home and Star Trek: The Motion Picture. Its opening-weekend numbers alone outgrossed the entire individual runs of The Undiscovered Country, The Final Frontier, Insurrection and Nemesis. Star Trek ended its United States theatrical run on October 1, 2009, with a box office total of $257,730,019, placing it as the seventh-highest-grossing film of 2009, behind The Hangover. The film grossed $127,764,536 in international markets, for a total worldwide gross of $385,494,555. While foreign grosses represent only 31% of the total box office receipts, Paramount executives were happy with the international sales, as the Star Trek film franchise had historically never been a big draw overseas.
Critical response
Audiences polled by CinemaScore gave the film an average grade of "A" on an A+ to F scale.
Ty Burr of the Boston Globe gave the film a perfect four-star rating, describing it as "ridiculously satisfying" and the "best prequel ever". Burr praised the character development in the film, opining that "emotionally, Star Trek hits every one of its marks, functioning as a family reunion that extends across decades, entertainment mediums, even blurring the line between audience and show." He continued: "Trading on affections sustained over 40 years of popular culture, Star Trek does what a franchise reboot rarely does. It reminds us why we loved these characters in the first place." Owen Gleiberman from Entertainment Weekly gave the film an 'A−' grade, commenting that director Abrams "crafts an origin story that avoids any hint of the origin doldrums". Similar sentiments were expressed by Rolling Stone journalist Peter Travers, who gave the film 3.5 out of 4 stars. He felt that the acting was the highlight of the film, asserting that Pine's performance radiated star quality. Likewise, Travers called Quinto's performance "sharp" and "intuitive", and felt that Quinto "gave the film a soul". Manohla Dargis of the New York Times wrote, "Star Trek [...] isn't just a pleasurable rethink of your geek uncle's favorite science-fiction series. It's also a testament to television's power as mythmaker, as a source for some of the fundamental stories we tell about ourselves, who we are and where we came from." Slate's Dana Stevens felt that the film was "a gift to those of us who loved the original series, that brainy, wonky, idealistic body of work that aired to almost no commercial success between 1966–69 and has since become a science fiction archetype and object of cult adoration". Time Out London's Tom Huddleston praised the aesthetic qualities of the film, such as the design of Enterprise, and the performances of the cast. He wrote, "The cast are equally strong: Quinto brings wry charm to an otherwise calculating character, while Pine powers through his performance in bullish, if not quite Shatner-esque, fashion."
The chemistry between Pine and Quinto was well received by critics. Gleiberman felt that as the film progressed to its conclusion, Pine and Quinto emulated the same connection as Kirk and Spock. Tim Robey of The Telegraph echoed similar attitudes: "The movie charts their relationship [...] in a nicely oblique way." Robey continued: "It's the main event, dramatically speaking, but there's always something more thumpingly urgent to command their attention, whether it's a Vulcan distress signal or the continuing rampages of those pesky Romulans." Burr opined that Abrams had an accurate understanding of the relationship between Kirk and Spock, and wrote, "Pine makes a fine, brash boy Kirk, but Quinto's Spock is something special – an eerily calm figure freighted with a heavier sadness than Roddenberry's original. The two ground each other and point toward all the stories yet to come." Similarly, The Guardian writer Peter Bradshaw expressed: "The story of Kirk and Spock is brought thrillingly back to life by a new first generation: Chris Pine and Zachary Quinto, who give inspired, utterly unselfconscious and lovable performances, with power, passion and some cracking comic timing."
Star Trek polarized some film critics. Keith Phipps of The A.V. Club gave the film a 'B+' grade and asserted that it was "a reconsideration of what constitutes Star Trek, one that deemphasizes heady concepts and plainly stated humanist virtues in favor of breathless action punctuated by bursts of emotion. It might not even be immediately recognizable to veteran fans." In concurrence, Roger Ebert of the Chicago Sun-Times stated that "the Gene Roddenberry years, when stories might play with questions of science, ideals or philosophy, have been replaced by stories reduced to loud and colorful action." Ebert ultimately gave the film 2.5 out of 4 stars. Similarly, Marc Bain of Newsweek opined: "The latest film version of Star Trek [...] is more brawn than brain, and it largely jettisons complicated ethical conundrums in favor of action sequences and special effects." Slate journalist Juliet Lapidos argued that the new film, with its "standard Hollywood torture scene", failed to live up to the intellectual standard set by the 1992 Next Generation episode "Chain of Command", whose treatment of the issue she found both more sophisticated and more pertinent to the ongoing debate over the United States' use of enhanced interrogation techniques.
A 2018 article by Io9/Gizmodo ranked all 11 versions of the USS Enterprise seen in the Star Trek franchise up to that point. The version seen in the film placed in the second lowest position.
Accolades
The film garnered numerous accolades after its release. In 2010, it was nominated for four Academy Awards at the 82nd Academy Awards: Best Sound Editing, Best Sound, Best Visual Effects, and Best Makeup. Star Trek won the category for Best Makeup, making it the first Star Trek film to receive an Academy Award. The film was nominated for three Empire Awards, winning Best Sci-Fi/Fantasy. In October 2009, Star Trek won the Hollywood Award for Best Movie, and it won six Scream Awards at the 2009 Scream Awards ceremony. The film also won a Screen Actors Guild Award for Outstanding Performance by a Stunt Ensemble in a Motion Picture at the 16th Screen Actors Guild Awards.
Star Trek received several further nominations. The film was nominated for a Grammy Award for Best Score Soundtrack Album for a Motion Picture, Television or Other Visual Media, but was beaten by Up, also composed by Michael Giacchino. At the 36th People's Choice Awards, the film received four nominations: it was a contender for Favorite Movie, Zoe Saldaña was nominated for Favorite Breakout Movie Actress, and both Pine and Quinto were nominated for Favorite Breakout Movie Actor. On June 15, 2009, the film was nominated for five Teen Choice Awards. In addition, Star Trek was nominated for five Broadcast Film Critics Association Awards and was named one of the top-ten films of 2009 by the National Board of Review of Motion Pictures.
Sequels and prequel
The film's major cast members signed on for two sequels as part of their original deals. Abrams and Bryan Burk signed to produce and Abrams signed to direct the first sequel. The sequel, Star Trek Into Darkness, starring Benedict Cumberbatch as Khan Noonien Singh, was released on May 15, 2013.
A third film, Star Trek Beyond, directed by Justin Lin and starring Idris Elba as the main antagonist, was released on July 22, 2016, to positive reviews. In July 2016, Abrams confirmed plans for a fourth film and stated that Chris Hemsworth would return as Kirk's father. Most of the cast and producers of Beyond had also agreed to return; however, Abrams stated Anton Yelchin's role would not be recast following his death.
A prequel to the 2009 film has been announced on more than one occasion, only to be dropped. At one point, Lindsey Anderson Beer was attached to write a script before the project faded away. In November 2023, actor Chris Pine told an interviewer that he was unaware of any further updates. In January 2024, Paramount released a new announcement regarding a possible prequel, saying that director Toby Haynes and scriptwriter Seth Grahame-Smith had been attached to the project.
See also
Star Trek film series
References
External links
2009 films
2009 science fiction action films
2000s American films
2000s English-language films
2000s films about time travel
2000s science fiction adventure films
American science fiction action films
American science fiction adventure films
American films about revenge
Bad Robot Productions films
English-language science fiction action films
English-language science fiction adventure films
Fiction about black holes
Fiction about supernovae
Films about interracial romance
Films about parallel universes
Films directed by J. J. Abrams
Films produced by Damon Lindelof
Films produced by J. J. Abrams
Films scored by Michael Giacchino
Films set in Iowa
Films set in San Francisco
Films set in the 23rd century
Films set in the 24th century
Films shot in California
Films shot in Los Angeles
Films shot in Utah
Films that won the Academy Award for Best Makeup
Films with screenplays by Alex Kurtzman and Roberto Orci
IMAX films
Paramount Pictures films
Reboot films
Satellite Award–winning films
Saturn Award–winning films
Spyglass Entertainment films
Star Trek (film franchise)
Titan (moon) in film | Star Trek (2009 film) | [
"Physics"
] | 13,180 | [
"Black holes",
"Unsolved problems in physics",
"Fiction about black holes"
] |
3,071,999 | https://en.wikipedia.org/wiki/Sunroom | A sunroom, also frequently called a solarium (and sometimes a "Florida room", "garden conservatory", "garden room", "patio room", "sun parlor", "sun porch", "three season room" or "winter garden"), is a room that permits abundant daylight and views of the landscape while sheltering from adverse weather. Sunroom and solarium have the same denotation: solarium is Latin for "place of sun[light]". Solaria of various forms have been erected throughout European history. Currently, the sunroom or solarium is popular in Europe, Canada, the United States, Australia, and New Zealand. Sunrooms may feature passive solar building design to heat and illuminate them.
In Great Britain, which has a long history of formal conservatories, a small conservatory is sometimes denominated a "sunroom". In gardening, a garden room is a secluded and partly enclosed outside space within a garden that creates a room-like effect.
Design
Attached sunrooms typically are constructed of transparent tempered glazing atop a brick or wood "knee wall" or framed entirely of wood, aluminum, or PVC, and glazed on all sides. Frosted glass or glass block may be used to add privacy. Screens are a fundamental aspect of a "Florida room", and jalousie windows are often featured. An integrated sunroom is specifically designed with many windows and climate controls.
A solarium is typically distinguished from a sunroom in that the former is specifically and primarily designed to collect sunlight for warmth and light, rather than primarily to feature scenic views, and in that all of its walls save one, as well as its roof, are composed entirely of framed glass. Solaria typically are erected in higher-latitude (low angle of sunlight) or cold (higher-altitude) locations. In contrast, a sunroom sensu stricto has an opaque roof.
Technologies
During the 1960s, professional remodelling companies developed affordable systems to enclose a patio or deck, offering design, installation, and full-service warranties. Patio rooms featured lightweight, engineered roof panels, single-pane glass, and aluminum construction.
As technology advanced, insulated glass, vinyl, and vinyl-wood composite framework appeared. More recently, specialized blinds and curtains have been developed, many electrically operated by remote control. Specialized flooring, including radiant heat, may be adapted to both attached and integrated sunrooms.
See also
Arizona room
Conservatory (greenhouse)
Observation car
Porch
Smart glass
Notes
References
External links
Glass architecture
Rooms
Room | Sunroom | [
"Materials_science",
"Engineering"
] | 518 | [
"Glass engineering and science",
"Solar design",
"Energy engineering",
"Rooms",
"Glass architecture",
"Architecture"
] |
3,072,173 | https://en.wikipedia.org/wiki/Fossorial | A fossorial animal () is one that is adapted to digging and which lives primarily (but not solely) underground. Examples of fossorial vertebrates are badgers, naked mole-rats, meerkats, armadillos, wombats, and mole salamanders. Among invertebrates, many molluscs (e.g., clams), insects (e.g., beetles, wasps, bees), and arachnids (e.g. spiders) are fossorial.
Prehistoric evidence
The physical adaptation of fossoriality is widely accepted as being widespread among many prehistoric phyla and taxa, such as bacteria and early eukaryotes. Furthermore, fossoriality has evolved independently multiple times, even within a single family. Fossorial animals appeared simultaneously with the colonization of land by arthropods in the late Ordovician period (over 440 million years ago). Other notable early burrowers include Eocaecilia and possibly Dinilysia. The oldest example of burrowing in synapsids, the lineage which includes modern mammals and their ancestors, is a cynodont, Thrinaxodon liorhinus, found in the Karoo of South Africa, estimated to be 251 million years old. Evidence shows that this adaptation occurred due to dramatic mass extinctions in the Permian period.
Physical adaptations in vertebrates
There are six major external modifications, as described by H. W. Shimer in 1903, that are shared in all mammalian burrowing species:
Fusiform, a spindle-shaped body tapering at both ends, adapted for the dense subsurface environment.
Lesser developed or missing eyesight, considering subsurface darkness.
Small or missing external ears, to reduce naturally occurring friction during burrowing.
Short and stout limbs, since swiftness or speed of movement is less important than the strength to dig.
Broad and stout forelimbs (manus), including long claws, designed to loosen the burrowing material, which the hind feet then disperse behind the animal. This trait is disputed by Jorge Cubo, who states that the skull is the main tool during excavation, with the forelimbs the most active parts for digging and the hind limbs used for stability.
Short or missing tail, which has little to no locomotor activity or burrowing use to most fossorial mammals.
Other important physical features include a skeleton adjusted to the subsurface environment: a triangularly shaped skull, a prenasal ossicle, chisel-shaped teeth, effectively fused and short lumbar vertebrae, a well-developed sternum, and strong forelimb but weaker hind-limb bones. Because of the lack of light, among the most important features of fossorial animals are the sensory traits that allow them to communicate and navigate in the dark subsurface environment. Considering that sound travels more slowly in air and faster through solid earth, the use of seismic (percussive) waves on a small scale is more advantageous in these environments. Several different uses are well documented. The Cape mole rat (Georychus capensis) uses drumming behavior to send messages to its kin through conspecific signaling. The Namib Desert golden mole (Eremitalpa granti namibensis) can detect termite colonies and similar prey underground thanks to the development of a hypertrophied malleus, an adaptation that allows for better detection of low-frequency signals. The most likely explanation for the actual transmission of these seismic inputs, captured by the auditory system, is the use of bone conduction: whenever vibrations are applied to the skull, the signals travel through many routes to the inner ear.
For animals that burrow by compressing soil, the work required increases exponentially with body diameter. In amphisbaenians, an ancient group of burrowing lizard-like squamates, specializations include the pennation of the longissimus dorsi, the main muscle associated with burrowing, to increase muscle cross-sectional area. Constrained to small body diameters by the soil, amphisbaenians can increase muscle mass by increasing body length, not body diameter. In most amphisbaenians, limbs were lost as part of the fossorial lifestyle. However, the mole lizard Bipes, unlike other amphisbaenians, retains robust digging forelimbs comparable to those of moles and mole crickets.
Physiological modifications
Many fossorial and sub-fossorial mammals that live in temperate zones with partially frozen grounds tend to hibernate due to the seasonal lack of soft, succulent herbage and other sources of nutrition.
H. W. Shimer concluded that, in general, species that adopted fossorial lifestyles likely did so because they failed, aboveground, to find food and protection from predators. Additionally, some researchers, such as E. Nevo, propose that fossorial lifestyles could have arisen because aboveground climates were harsh. Shifts towards an underground lifestyle also entail changes in metabolism and energetics, often in a weight-dependent manner. Sub-fossorial species weighing more than have comparably lower basal rates than those weighing less than . The average fossorial animal has a basal rate between 60% and 90%. Further observations conclude that larger burrowing animals, such as hedgehogs or armadillos, have lower thermal conductance than smaller animals, most likely to reduce heat storage in their burrows.
Geological and ecological implications
One important impact fossorial animals have on the environment is bioturbation, defined by Marshall Wilkinson as the alteration of fundamental properties of the soil, including surface geomorphic processes. Small fossorial animals such as ants, termites, and earthworms have been measured to displace massive amounts of soil; the total global rate of soil displacement by these animals is equivalent to the total global rate of tectonic uplift. The presence of burrowing animals also has a direct impact on the soil's composition, structure, and growing vegetation. Their impacts range from feeding, harvesting, and caching to soil disturbances, and can differ widely given the large diversity of fossorial species – especially herbivorous ones. The net effect is usually an alteration of the composition of plant species and increased plant diversity, which can cause issues with standing crops, as the homogeneity of the crops is affected. Burrowing also impacts the nitrogen cycle in the affected soil. Mounds and bare soils that contain burrowing animals have considerably higher amounts of ammonium and nitrate, as well as greater nitrification potential and microbial consumption, than vegetated soils; the primary mechanism for this is the removal of the covering grassland.
Burrowing snakes may be more vulnerable to changing environments than non-burrowing snakes, although this may not be the case for other fossorial groups such as lizards. This may form an evolutionary dead end for snakes.
See also
Arboreal
Burrow
Cursorial
Fossa
References
Habitats
Cave animals
Animal physiology
Animal locomotion | Fossorial | [
"Physics",
"Biology"
] | 1,462 | [
"Animal locomotion",
"Physical phenomena",
"Animals",
"Animal physiology",
"Behavior",
"Motion (physics)",
"Ethology"
] |
3,072,290 | https://en.wikipedia.org/wiki/Photon%20sphere | A photon sphere or photon circle arises in a neighbourhood of the event horizon of a black hole where gravity is so strong that emitted photons will not just bend around the black hole but also return to the point where they were emitted from and consequently display boomerang-like properties. As the source emitting photons falls into the gravitational field towards the event horizon the shape of the trajectory of each boomerang photon changes, tending to a more circular form. At a critical value of the radial distance from the singularity the trajectory of a boomerang photon will take the form of a non-stable circular orbit, thus forming a photon circle and hence in aggregation a photon sphere. The circular photon orbit is said to be the last photon orbit. The radius of the photon sphere, which is also the lower bound for any stable orbit, is, for a Schwarzschild black hole,
r = 3GM/c² = (3/2) rs,

where G is the gravitational constant, M is the mass of the black hole, c is the speed of light in vacuum, and rs is the Schwarzschild radius (the radius of the event horizon); see below for a derivation of this result.
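As a rough numerical illustration, the formula is easy to evaluate; the following minimal Python sketch (the constants and the function name are our own illustrative choices) prints the Schwarzschild radius and the photon sphere radius for a one-solar-mass black hole:

```python
# Evaluate r_ph = 3GM/c^2 = 1.5 * r_s for a given mass.
G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light in vacuum, m/s
M_SUN = 1.98892e30     # one solar mass, kg

def photon_sphere_radius(mass_kg: float) -> float:
    """Photon sphere radius of a Schwarzschild black hole, in metres."""
    return 3.0 * G * mass_kg / c**2

r_s = 2.0 * G * M_SUN / c**2           # Schwarzschild radius, ~2.95 km
r_ph = photon_sphere_radius(M_SUN)     # ~4.43 km, i.e. exactly 1.5 * r_s
print(f"r_s  = {r_s / 1e3:.2f} km")
print(f"r_ph = {r_ph / 1e3:.2f} km")
```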
This equation entails that photon spheres can only exist in the space surrounding an extremely compact object (a black hole or possibly an "ultracompact" neutron star).
The photon sphere is located farther from the center of a black hole than the event horizon. Within a photon sphere, it is possible to imagine a photon that is emitted (or reflected) from the back of one's head and, following an orbit of the black hole, is then intercepted by the person's eye, allowing one to see the back of the head.
For non-rotating black holes, the photon sphere is a sphere of radius 3/2 rs. There are no stable free-fall orbits that exist within or cross the photon sphere. Any free-fall orbit that crosses it from the outside spirals into the black hole. Any orbit that crosses it from the inside escapes to infinity or falls back in and spirals into the black hole. No unaccelerated orbit with a semi-major axis less than this distance is possible, but within the photon sphere, a constant acceleration will allow a spacecraft or probe to hover above the event horizon.
Another property of the photon sphere is the reversal of centrifugal force (note: not centripetal). Outside the photon sphere, the faster one orbits, the greater the outward force one feels. At the photon sphere the centrifugal force falls to zero for orbits at any speed, including non-freefall orbits, so an object weighs the same no matter how fast it orbits; inside the photon sphere the force becomes negative, and faster orbiting leads to greater weight or inward force. This has serious ramifications for the fluid dynamics of inward fluid flow.
A rotating black hole has two photon spheres. As a black hole rotates, it drags space with it. The photon sphere that is closer to the black hole is moving in the same direction as the rotation, whereas the photon sphere further away is moving against it. The greater the angular velocity of the rotation of a black hole, the greater the distance between the two photon spheres. Since the black hole has an axis of rotation, this only holds true if approaching the black hole in the direction of the equator. In a polar orbit, there is only one photon sphere. This is because when approaching at this angle, the possibility of traveling with or against the rotation does not exist. The rotation will instead cause the orbit to precess.
Derivation for a Schwarzschild black hole
Since a Schwarzschild black hole has spherical symmetry, all possible axes for a circular photon orbit are equivalent, and all circular orbits have the same radius.
This derivation involves using the Schwarzschild metric, given by

ds² = (1 − rs/r) c² dt² − (1 − rs/r)−1 dr² − r² (dθ² + sin²θ dφ²).
For a photon traveling at a constant radius r (i.e. in the φ-coordinate direction), dr = 0. Since it is a photon, ds = 0 (a "light-like interval"). We can always rotate the coordinate system such that θ is constant, dθ = 0 (e.g., θ = π/2).
Setting ds, dr and dθ to zero, we have

(1 − rs/r) c² dt² = r² sin²θ dφ².

Re-arranging gives

dφ/dt = (c/(r sin θ)) √(1 − rs/r).
To proceed, we need the relation dφ/dt. To find it, we use the radial geodesic equation

d²r/ds² + Γʳμν (dxμ/ds)(dxν/ds) = 0.
The non-vanishing r-connection coefficients are

Γʳtt = (c²/2) B B′,  Γʳrr = −B′/(2B),  Γʳθθ = −r B,  Γʳφφ = −B r sin²θ,

where B = 1 − rs/r and B′ = dB/dr.
We treat photon radial geodesics with constant r and θ, therefore

dr/ds = 0, dθ/ds = 0 and d²r/ds² = 0.

Substituting it all into the radial geodesic equation (the geodesic equation with the radial coordinate as the dependent variable), we obtain

(c²/2) B B′ (dt/ds)² = B r sin²θ (dφ/ds)², i.e. (dφ/dt)² = (c² B′)/(2 r sin²θ).
Comparing it with what was obtained previously, we have

(c² B′)/(2r) = (c²/r²)(1 − rs/r),

where we have inserted θ = π/2 radians (imagine that the central mass, about which the photon is orbiting, is located at the centre of the coordinate axes. Then, as the photon is travelling along the φ-coordinate line, for the mass to be located directly in the centre of the photon's orbit, we must have θ = π/2 radians).
Substituting B′ = rs/r² gives rs/(2r) = 1 − rs/r. Hence, rearranging this final expression gives

r = (3/2) rs,
which is the result we set out to prove.
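The algebra in the last step can also be checked symbolically; the short SymPy sketch below (variable names are our own) equates the two expressions for (dφ/dt)² at θ = π/2 and solves for r:

```python
# Verify that c^2*B'/(2r) = (c^2/r^2)*(1 - r_s/r) has the solution r = 3*r_s/2.
import sympy as sp

r, rs, c = sp.symbols("r r_s c", positive=True)
B = 1 - rs / r              # metric function B(r) = 1 - r_s/r
Bp = sp.diff(B, r)          # B'(r) = r_s / r^2

lhs = c**2 * Bp / (2 * r)   # from the radial geodesic equation
rhs = c**2 * B / r**2       # from setting ds = dr = dtheta = 0
print(sp.solve(sp.Eq(lhs, rhs), r))   # -> [3*r_s/2]
```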
Photon orbits around a Kerr black hole
In contrast to a Schwarzschild black hole, a Kerr (spinning) black hole does not have spherical symmetry, but only an axis of symmetry, which has profound consequences for the photon orbits, see e.g. Cramer for details and simulations of photon orbits and photon circles. There are two circular photon orbits in the equatorial plane (prograde and retrograde), with different Boyer–Lindquist radii:

r∓ = 2M (1 + cos[(2/3) arccos(∓a/M)]) (in units with G = c = 1; the upper sign gives the prograde orbit, the lower the retrograde one),
where M is the mass of the black hole and a = J/M is the angular momentum per unit mass of the black hole.
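A small Python sketch (our own function name and unit conventions) evaluates these two radii, recovering the Schwarzschild value 3M when a = 0 and the extremal values M and 4M when a = M:

```python
# Equatorial circular photon-orbit radii around a Kerr black hole,
# in geometrized units (G = c = 1).
import math

def kerr_photon_orbit_radii(M, a):
    """Return (prograde, retrograde) circular photon orbit radii."""
    prograde = 2 * M * (1 + math.cos((2 / 3) * math.acos(-a / M)))
    retrograde = 2 * M * (1 + math.cos((2 / 3) * math.acos(a / M)))
    return prograde, retrograde

print(kerr_photon_orbit_radii(1.0, 0.0))   # ~(3.0, 3.0): Schwarzschild limit
print(kerr_photon_orbit_radii(1.0, 1.0))   # ~(1.0, 4.0): extremal Kerr
```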
There exist other constant-radius orbits, but they have more complicated paths which oscillate in latitude about the equator.
References
External links
Step by Step into a Black Hole
Virtual Trips to Black Holes and Neutron Stars
Guide to Black Holes
Spherical Photon Orbits Around a Kerr Black Hole
General relativity
Black holes | Photon sphere | [
"Physics",
"Astronomy"
] | 1,191 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"General relativity",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects"
] |
3,072,371 | https://en.wikipedia.org/wiki/List%20of%20game%20theorists | This is a list of notable economists, mathematicians, political scientists, and computer scientists whose work has added substantially to the field of game theory.
A
Derek Abbott – quantum game theory and Parrondo's games
Susanne Albers – algorithmic game theory and algorithm analysis
Kenneth Arrow – voting theory (Nobel Memorial Prize in Economic Sciences in 1972)
Robert Aumann – equilibrium theory (Nobel Memorial Prize in Economic Sciences in 2005)
Robert Axelrod – repeated Prisoner's Dilemma
B
Tamer Başar – dynamic game theory and applications to robust control of systems with uncertainty
Cristina Bicchieri – epistemology of game theory
Olga Bondareva – Bondareva–Shapley theorem
Steven Brams – cake cutting, fair division, theory of moves
C
Jennifer Tour Chayes – algorithmic game theory and auction algorithms
John Horton Conway – combinatorial game theory
Antoine Augustin Cournot – monopoly and oligopoly games
F
Drew Fudenberg – repeated games and reputation effects
H
William Hamilton – evolutionary biology
John Harsanyi – equilibrium theory (Nobel Memorial Prize in Economic Sciences in 1994)
Monika Henzinger – algorithmic game theory and information retrieval
John Hicks – general equilibrium theory (including Kaldor–Hicks efficiency)
Naira Hovakimyan – differential games and adaptive control
Peter L. Hurd – evolution of aggressive behavior
I
Rufus Isaacs – differential games
K
Ehud Kalai – Kalai–Smorodinsky bargaining solution, rational learning, strategic complexity
Anna Karlin – algorithmic game theory and online algorithms
Michael Kearns – algorithmic game theory and computational social science
Sarit Kraus – non-monotonic reasoning
M
John Maynard Smith – evolutionary biology
Oskar Morgenstern – social organization
Roger Myerson – mechanism design (Nobel Memorial Prize in Economic Sciences in 2007)
N
John Forbes Nash – Nash equilibrium (Nobel Memorial Prize in Economic Sciences in 1994)
John von Neumann – Minimax theorem, expected utility, social organization, arms race
Abraham Neyman – Stochastic games, Shapley value
P
J. M. R. Parrondo – games with a reversal of fortune, such as Parrondo's games
Charles E. M. Pearce – games applied to queuing theory
George R. Price – theoretical and evolutionary biology
R
Anatol Rapoport – mathematical psychologist, early proponent of tit-for-tat in repeated Prisoner's Dilemma
Julia Robinson – proved that fictitious play dynamics converges to the mixed strategy Nash equilibrium in two-player zero-sum games
Alvin E. Roth – market design (Nobel Memorial Prize in Economic Sciences 2012)
Ariel Rubinstein – bargaining theory, learning and language
S
Thomas Jerome Schaefer – computational complexity of perfect-information games
Suzanne Scotchmer – patent law incentive models
Reinhard Selten – bounded rationality (Nobel Memorial Prize in Economic Sciences in 1994)
Claude Shannon – studied cryptography and chess; sometimes called "the father of information theory"
Lloyd Shapley – Shapley value and core concept in coalition games (Nobel Memorial Prize in Economic Sciences 2012)
Eilon Solan – Stochastic games, stopping games
Thomas Schelling – bargaining (Nobel Memorial Prize in Economic Sciences in 2005) and models of segregation
T
Éva Tardos – algorithmic game theory
Stef Tijs – cooperative game theory (including the Tijs value)
V
William Vickrey – auction theory
W
Myrna Wooders – coalition theory
References
Lists of mathematicians by field
Lists of people by occupation | List of game theorists | [
"Mathematics"
] | 699 | [
"Game theorists",
"Game theory"
] |
3,072,495 | https://en.wikipedia.org/wiki/Dust-Off | Dust-Off is a brand of dust cleaner (refrigerant-based propellant cleaner, which is not compressed air and incorrectly called "canned air"). The product usually contains difluoroethane; although some use tetrafluoroethane and tetrafluoropropene as a propellant. It is used to blow particles and dust from computer, keyboards, photography equipment, and electronics, as well as many every day household items including windows, blinds, and collectibles. Dust-Off is manufactured by Falcon Safety Products located in Branchburg, NJ.
History
Dust-Off was developed and introduced in 1970 by an employee at Falcon Safety Products who discovered that the pressurized blasts used to sound the alarm in the company's signal horns could also remove dust from photography equipment and film without having to touch the surface.
The Dust-Off compressed gas duster was first introduced to the photography market in 1970, and was marketed as a tool to blow foreign matter from photographic equipment and negatives that would not damage photographic prints during development. With the rise of personal computer use in the 1980s, Falcon developed Dust-Off II as a cleaning device to help remove damaging dust and lint from the new technology, including screens, keyboards, CPUs, and fans.
Recently, the Dust-Off brand has expanded to encompass a line of cleaners for electronic and home office equipment, with a large number of products dedicated to cleaning smartphones, tablets, PDAs, HD monitors, and TV screens. Products in the Dust-Off line include screen sprays and microfiber cleaning cloths.
Inhalant abuse and efforts at deterrence
Difluoroethane is an intoxicant if inhaled, and is highly addictive. Compressed gas duster products gained attention for their abuse as inhalants, as used by teenagers in the movie Thirteen. A warning email was circulated by Sgt. Jeff Williams, a police officer in Cleveland, whose son, Kyle, died after inhaling Dust-Off in Painesville Township, Ohio.
Wrestler Mike "Mad Dog" Bell died of a heart attack brought on by inhaling difluoroethane from Dust-Off.
To deter inhalation, Falcon was the first duster manufacturer to add a bitterant to the product, which makes it less palatable to inhale but has not halted abuse. The company has also participated in inhalant abuse awareness campaigns with Sgt. Williams and the Alliance for Consumer Education to educate the public on the dangers of huffing, which encompasses the abuse of 1,400 different products. These efforts may have contributed to inhalant abuse being on a 10-year downward trend according to some indicators. Nevertheless, 2011 data indicate that 11% of high school students report at least one incident of inhalant abuse.
References
External links
Official Dust-Off Website
Fotospeed UK Dust Off
Snopes: Adolescents huffing from cans of Dust-Off brand compressed air have died
Common Inhalants abused (Internet Archive)
National Institute on Drug Abuse
National Inhalant Prevention Coalition 1-(800) 269-4237
Cleaning products
Computer peripherals | Dust-Off | [
"Chemistry",
"Technology"
] | 650 | [
"Cleaning products",
"Computer peripherals",
"Components",
"Products of chemical industry"
] |
3,072,613 | https://en.wikipedia.org/wiki/Patch%20panel | A patch panel is a device or unit featuring a number of jacks, usually of the same or similar type, for the use of connecting and routing circuits for monitoring, interconnecting, and testing circuits in a convenient, flexible manner. Patch panels are commonly used in computer networking, recording studios, and radio and television.
The term patch came from early use in telephony and radio studios, where extra equipment kept on standby could be temporarily substituted for failed devices. This reconnection was done via patch cords and patch panels, like the jack fields of cord-type telephone switchboards.
Terminology
Patch panels are also referred to as patch bays, patch fields, jack panels or jack fields.
Uses and connectors
In recording studios, television and radio broadcast studios, and concert sound reinforcement systems, patchbays are widely used to facilitate the connection of different devices, such as microphones, electric or electronic instruments, effects (e.g. compression, reverb, etc.), recording gear, amplifiers, or broadcasting equipment. Patchbays make it easier to connect different devices in different orders for different projects, because all of the changes can be made at the patchbay. Additionally, patchbays make it easier to troubleshoot problems such as ground loops; even small home studios and amateur project studios often use patchbays, because a patchbay groups all of the input jacks into one location. This means that devices mounted in racks or keyboard instruments can be connected without having to hunt around behind the rack or instrument with a flashlight for the right jack. Using a patchbay also saves wear and tear on the input jacks of studio gear and instruments, because all of the connections are made with the patchbay.
Patch panels are increasingly used in domestic installations, owing to the popularity of "structured wiring" installs, and are also becoming more common in home cinema installations.
Normalization
It is conventional to have the top row of jacks wired at the rear to outputs and bottom row of jacks wired to inputs. Patch bays may be half-normal (usually bottom) or full-normal, "normal" indicating that the top and bottom jacks are connected internally. When a patch bay has bottom half-normal wiring, then with no patch cord inserted into either jack, the top jack is internally linked to the bottom jack via break contacts on the bottom jack; inserting a patch cord into the top jack will take a feed off that jack while retaining the internal link between the two jacks; inserting a patch cord into the bottom jack will break the internal link and replace the signal feed from the top jack with the signal carried on the patch cord. With top half-normal wiring, the same happens but vice versa. If a patch bay is wired to full-normal, then it includes break contacts in both rows of jacks.
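The normalling logic above is easy to misread in prose; the toy Python model below (entirely our own, purely illustrative) captures how a bottom-half-normal patch point resolves the signal that reaches its bottom (input) jack:

```python
def bottom_jack_signal(top_source, cord_in_top=False,
                       cord_in_bottom=False, cord_signal=None):
    """Signal reaching the bottom (input) jack of a bottom-half-normal point."""
    if cord_in_bottom:
        # Inserting a cord below opens the break contacts: the cord's
        # signal replaces the internally normalled feed from the top jack.
        return cord_signal if cord_signal is not None else "silence"
    # With no cord below, the internal normal carries the top jack's signal,
    # whether or not a cord in the top jack is tapping a copy of it.
    return top_source

print(bottom_jack_signal("tape return"))                      # "tape return"
print(bottom_jack_signal("tape return", cord_in_top=True))    # still "tape return"
print(bottom_jack_signal("tape return", cord_in_bottom=True,
                         cord_signal="effects send"))         # "effects send"
```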
Switches
Dedicated switching equipment can be an alternative to patch bays in some applications. Switches can make routing as easy as pushing a button, and can provide other benefits over patch bays, including routing a signal to any number of destinations simultaneously. However, switching equipment that can emulate the capabilities of a given patch bay is much more expensive. For example, an S-Video matrix routing switcher with the same capability (8×8) as a 16-point S-Video patch panel (8 patch cables connects 8 inputs and 8 outputs) may cost ten times more, though it would probably have more capabilities.
Like patch panels, switching equipment for nearly any type of signal is available, including analog and digital video and audio, as well as RF (cable TV), MIDI, telephone, networking and electrical. There are various types of switches for audio and video, from simple selector switches to sophisticated production switchers. However, emulating or exceeding the capabilities of audio or video patch panels requires specialized devices like crossbar switches.
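As a sketch of the routing capability described above (a toy model of our own; the class and method names are illustrative), a crosspoint matrix can fan a single input out to several outputs at once, which a single patch cord cannot:

```python
class Crossbar:
    """Minimal model of an N-by-M crosspoint routing switcher."""
    def __init__(self, n_inputs, n_outputs):
        self.n_inputs, self.n_outputs = n_inputs, n_outputs
        self.route = {}                        # output index -> input index

    def connect(self, inp, out):
        self.route[out] = inp                  # each output hears one input...

    def destinations(self, inp):
        # ...but one input may feed any number of outputs simultaneously.
        return sorted(out for out, src in self.route.items() if src == inp)

xbar = Crossbar(8, 8)
for out in (0, 3, 7):
    xbar.connect(2, out)
print(xbar.destinations(2))   # [0, 3, 7]: input 2 feeds three outputs at once
```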
Switching equipment may be electronic, mechanical, or electro-mechanical. Some switcher hardware can be controlled via computer or other external devices. Some have automated or pre-programmed operational capabilities. There are also software switcher applications used to route signals and control data within a "pure digital" computer environment.
See also
Cable management
Distribution frame
Wiring closet
References
External links
Telephony equipment
Audio engineering
Broadcast engineering
Electrical signal connectors
Networking hardware | Patch panel | [
"Engineering"
] | 869 | [
"Broadcast engineering",
"Computer networks engineering",
"Electronic engineering",
"Networking hardware",
"Electrical engineering",
"Audio engineering"
] |
3,072,721 | https://en.wikipedia.org/wiki/Metal%20carbonyl | Metal carbonyls are coordination complexes of transition metals with carbon monoxide ligands. Metal carbonyls are useful in organic synthesis and as catalysts or catalyst precursors in homogeneous catalysis, such as hydroformylation and Reppe chemistry. In the Mond process, nickel tetracarbonyl is used to produce pure nickel. In organometallic chemistry, metal carbonyls serve as precursors for the preparation of other organometallic complexes.
Metal carbonyls are toxic by skin contact, inhalation or ingestion, in part because of their ability to carbonylate hemoglobin to give carboxyhemoglobin, which prevents the binding of oxygen.
Nomenclature and terminology
The nomenclature of the metal carbonyls depends on the charge of the complex, the number and type of central atoms, and the number and type of ligands and their binding modes. They occur as neutral complexes, as positively-charged metal carbonyl cations or as negatively charged metal carbonylates. The carbon monoxide ligand may be bound terminally to a single metal atom or bridging to two or more metal atoms. These complexes may be homoleptic, containing only CO ligands, such as nickel tetracarbonyl (Ni(CO)4), but more commonly metal carbonyls are heteroleptic and contain a mixture of ligands.
Mononuclear metal carbonyls contain only one metal atom as the central atom. Except for vanadium hexacarbonyl, only metals with even atomic number, such as chromium, iron, nickel, and their homologs, form neutral mononuclear complexes. Polynuclear metal carbonyls are formed from metals with odd atomic numbers and contain a metal–metal bond. Complexes with different metals but only one type of ligand are called isoleptic.
Carbon monoxide has distinct binding modes in metal carbonyls. They differ in terms of their hapticity, denoted η, and their bridging mode. In η2-CO complexes, both the carbon and oxygen are bonded to the metal. More commonly only carbon is bonded, in which case the hapticity is not mentioned.
The carbonyl ligand engages in a wide range of bonding modes in metal carbonyl dimers and clusters. In the most common bridging mode, denoted μ2 or simply μ, the CO ligand bridges a pair of metals. This bonding mode is observed in the commonly available metal carbonyls: Co2(CO)8, Fe2(CO)9, Fe3(CO)12, and Co4(CO)12. In certain higher nuclearity clusters, CO bridges between three or even four metals. These ligands are denoted μ3-CO and μ4-CO. Less common are bonding modes in which both C and O bond to the metal, such as μ3η2.
Structure and bonding
Carbon monoxide bonds to transition metals using "synergistic pi* back-bonding". The M–C bonding has three components, giving rise to a partial triple bond. A sigma (σ) bond arises from overlap of the nonbonding (or weakly anti-bonding) sp-hybridized electron pair on carbon with a blend of d-, s-, and p-orbitals on the metal. A pair of pi (π) bonds arises from overlap of filled d-orbitals on the metal with a pair of π*-antibonding orbitals projecting from the carbon atom of the CO. The latter kind of binding requires that the metal have d-electrons, and that the metal be in a relatively low oxidation state (0 or +1) which makes the back-donation of electron density favorable. As electrons from the metal fill the π-antibonding orbital of CO, they weaken the carbon–oxygen bond compared with free carbon monoxide, while the metal–carbon bond is strengthened. Because of the multiple bond character of the M–CO linkage, the distance between the metal and carbon atom is relatively short, often less than 1.8 Å, about 0.2 Å shorter than a metal–alkyl bond. The M-CO and MC-O distance are sensitive to other ligands on the metal. Illustrative of these effects are the following data for Mo-C and C-O distances in Mo(CO)6 and Mo(CO)3(4-methylpyridine)3: 2.06 vs 1.90 and 1.11 vs 1.18 Å.
Infrared spectroscopy is a sensitive probe for the presence of bridging carbonyl ligands. For compounds with doubly bridging CO ligands, denoted μ2-CO or often just μ-CO, the bond stretching frequency νCO is usually shifted by 100–200 cm−1 to lower energy compared to the signatures of terminal CO ligands, which are in the region 1800–2100 cm−1. Bands for face-capping (μ3) CO ligands appear at even lower energies. In addition to symmetrical bridging modes, CO can be found to bridge asymmetrically or through donation from a metal d orbital to the π* orbital of CO. The increased π-bonding due to back-donation from multiple metal centers results in further weakening of the C–O bond.
Physical characteristics
Most mononuclear carbonyl complexes are colorless or pale yellow, volatile liquids or solids that are flammable and toxic. Vanadium hexacarbonyl, a uniquely stable 17-electron metal carbonyl, is a blue-black solid. Dimetallic and polymetallic carbonyls tend to be more deeply colored. Triiron dodecacarbonyl (Fe3(CO)12) forms deep green crystals. The crystalline metal carbonyls often are sublimable in vacuum, although this process is often accompanied by degradation. Metal carbonyls are soluble in nonpolar and polar organic solvents such as benzene, diethyl ether, acetone, glacial acetic acid, and carbon tetrachloride. Some salts of cationic and anionic metal carbonyls are soluble in water or lower alcohols.
Analytical characterization
Apart from X-ray crystallography, important analytical techniques for the characterization of metal carbonyls are infrared spectroscopy and 13C NMR spectroscopy. These two techniques provide structural information on two very different time scales. Infrared-active vibrational modes, such as CO-stretching vibrations, are often fast compared to intramolecular processes, whereas NMR transitions occur at lower frequencies and thus sample structures on a time scale that, it turns out, is comparable to the rate of intramolecular ligand exchange processes. NMR data provide information on "time-averaged structures", whereas IR is an instant "snapshot". Illustrative of the differing time scales, investigation of dicobalt octacarbonyl (Co2(CO)8) by means of infrared spectroscopy provides 13 νCO bands, far more than expected for a single compound. This complexity reflects the presence of isomers with and without bridging CO ligands. The 13C NMR spectrum of the same substance exhibits only a single signal at a chemical shift of 204 ppm. This simplicity indicates that the isomers quickly (on the NMR timescale) interconvert.
Iron pentacarbonyl exhibits only a single 13C NMR signal owing to rapid exchange of the axial and equatorial CO ligands by Berry pseudorotation.
Infrared spectra
An important technique for characterizing metal carbonyls is infrared spectroscopy. The C–O vibration, typically denoted νCO, occurs at 2143 cm−1 for carbon monoxide gas. The energies of the νCO band for the metal carbonyls correlate with the strength of the carbon–oxygen bond, and inversely correlate with the strength of the π-backbonding between the metal and the carbon. The π-basicity of the metal center depends on many factors; in the isoelectronic hexacarbonyl series from titanium to iron, π-backbonding decreases as the charge on the metal is made more positive. π-Basic ligands increase π-electron density at the metal, and improved backbonding reduces νCO. The Tolman electronic parameter uses the Ni(CO)3 fragment to order ligands by their π-donating abilities.
The number of vibrational modes of a metal carbonyl complex can be determined by group theory. Only vibrational modes that transform as the electric dipole operator will have nonzero direct products and are observed. The number of observable IR transitions (but not their energies) can thus be predicted. For example, the CO ligands of octahedral complexes, such as Cr(CO)6, transform as a1g, eg, and t1u, but only the t1u mode (antisymmetric stretch of the apical carbonyl ligands) is IR-allowed. Thus, only a single νCO band is observed in the IR spectra of the octahedral metal hexacarbonyls. Spectra for complexes of lower symmetry are more complex. For example, the IR spectrum of Fe2(CO)9 displays CO bands at 2082, 2019 and 1829 cm−1. The number of IR-observable vibrational modes for some metal carbonyls are shown in the table. Exhaustive tabulations are available. These rules apply to metal carbonyls in solution or the gas phase. Low-polarity solvents are ideal for high resolution. For measurements on solid samples of metal carbonyls, the number of bands can increase owing in part to site symmetry.
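The counting argument in this paragraph can be made concrete with the standard reduction formula, n(Γi) = (1/h) Σ g(C) χred(C) χi(C); the sketch below (our own encoding of the relevant Oh character-table rows) reduces the representation spanned by the six CO stretches of an octahedral hexacarbonyl:

```python
# Class order for Oh: E, 8C3, 6C2, 6C4, 3C2', i, 6S4, 8S6, 3sigma_h, 6sigma_d.
sizes   = [1, 8, 6, 6, 3, 1, 6, 8, 3, 6]
h       = sum(sizes)                      # group order, 48
chi_red = [6, 0, 0, 2, 2, 0, 0, 0, 4, 2]  # CO vectors unmoved by each class

irreps = {
    "A1g": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Eg":  [2, -1, 0, 0, 2, 2, 0, -1, 2, 0],
    "T1u": [3, 0, -1, 1, -1, -3, -1, 0, 1, 1],
}

for name, chi in irreps.items():
    n = sum(g * r * c for g, r, c in zip(sizes, chi_red, chi)) // h
    print(name, n)   # each appears once; only T1u (as x, y, z) is IR-active
```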
Nuclear magnetic resonance spectroscopy
Metal carbonyls are often characterized by 13C NMR spectroscopy. To improve the sensitivity of this technique, complexes are often enriched with 13CO. Typical chemical shift range for terminally bound ligands is 150 to 220 ppm. Bridging ligands resonate between 230 and 280 ppm. The 13C signals shift toward higher fields with an increasing atomic number of the central metal.
NMR spectroscopy can be used for experimental determination of the fluxionality.
The activation energy of ligand exchange processes can be determined by the temperature dependence of the line broadening.
Mass spectrometry
Mass spectrometry provides information about the structure and composition of the complexes. Spectra for metal polycarbonyls are often easily interpretable, because the dominant fragmentation process is the loss of carbonyl ligands (m/z = 28).
M(CO)n+ → M(CO)n−1+ + CO
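Because each CO loss removes 28 mass units, the expected fragment ladder is trivial to tabulate; the snippet below (nominal masses, and our own choice of example compound) does so for Fe(CO)5:

```python
# Nominal m/z ladder for successive CO losses from the Fe(CO)5+ molecular ion.
M_FE, M_CO, N_CO = 56, 28, 5

parent = M_FE + N_CO * M_CO                    # 196
series = [parent - k * M_CO for k in range(N_CO + 1)]
print(series)                                  # [196, 168, 140, 112, 84, 56]
```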
Electron ionization is the most common technique for characterizing the neutral metal carbonyls. Neutral metal carbonyls can be converted to charged species by derivatization, which enables the use of electrospray ionization (ESI), instrumentation for which is often widely available. For example, treatment of a metal carbonyl with alkoxide generates an anionic metallaformate that is amenable to analysis by ESI-MS:
LnM(CO) + RO− → [LnM−C(=O)OR]−
Some metal carbonyls react with azide to give isocyanato complexes with release of nitrogen. By adjusting the cone voltage or temperature, the degree of fragmentation can be controlled. The molar mass of the parent complex can be determined, as well as information about structural rearrangements involving loss of carbonyl ligands under ESI-MS conditions.
Mass spectrometry combined with infrared photodissociation spectroscopy can provide vibrational information for ionic carbonyl complexes in the gas phase.
Occurrence in nature
In the investigation of the infrared spectrum of the Galactic Center of the Milky Way, carbon monoxide vibrations of iron carbonyls in interstellar dust clouds were detected. Iron carbonyl clusters were also observed in Jiange H5 chondrites, identified by infrared spectroscopy; four infrared stretching frequencies were found for the terminal and bridging carbon monoxide ligands.
In the oxygen-rich atmosphere of the Earth, metal carbonyls are subject to oxidation to the metal oxides. It has been debated whether such complexes formed in the reducing hydrothermal environments of prebiotic prehistory and whether they could have been available as catalysts for the synthesis of critical biochemical compounds such as pyruvic acid. Traces of the carbonyls of iron, nickel, and tungsten were found in the gaseous emanations from the sewage sludge of municipal treatment plants.
The hydrogenase enzymes contain CO bound to iron. It is thought that the CO stabilizes low oxidation states, which facilitates the binding of hydrogen. The enzymes carbon monoxide dehydrogenase and acetyl-CoA synthase also are involved in bioprocessing of CO. Carbon monoxide containing complexes are invoked for the toxicity of CO and signaling.
Synthesis
The synthesis of metal carbonyls is a widely studied subject of organometallic research. Since the work of Mond and then Hieber, many procedures have been developed for the preparation of mononuclear metal carbonyls as well as homo- and heterometallic carbonyl clusters.
Direct reaction of metal with carbon monoxide
Nickel tetracarbonyl and iron pentacarbonyl can be prepared according to the following equations by reaction of finely divided metal with carbon monoxide:
Ni + 4 CO → Ni(CO)4 (1 bar, 55 °C)
Fe + 5 CO → Fe(CO)5 (100 bar, 175 °C)
Nickel tetracarbonyl forms from carbon monoxide already at 80 °C and atmospheric pressure; finely divided iron reacts at temperatures between 150 and 200 °C and a carbon monoxide pressure of 50–200 bar. Other metal carbonyls are prepared by less direct methods.
Reduction of metal salts and oxides
Some metal carbonyls are prepared by the reduction of metal halides in the presence of high pressure of carbon monoxide. A variety of reducing agents are employed, including copper, aluminum, hydrogen, as well as metal alkyls such as triethylaluminium. Illustrative is the formation of chromium hexacarbonyl from anhydrous chromium(III) chloride in benzene with aluminum as a reducing agent, and aluminum chloride as the catalyst:
CrCl3 + Al + 6 CO → Cr(CO)6 + AlCl3
The use of metal alkyls, such as triethylaluminium and diethylzinc, as the reducing agent leads to the oxidative coupling of the alkyl radical to form the dimer alkane:
WCl6 + 6 CO + 2 Al(C2H5)3 → W(CO)6 + 2 AlCl3 + 3 C4H10
Tungsten, molybdenum, manganese, and rhodium salts may be reduced with lithium aluminium hydride. Vanadium hexacarbonyl is prepared with sodium as a reducing agent in chelating solvents such as diglyme.
VCl3 + 4 Na + 6 CO + 2 diglyme → Na(diglyme)2[V(CO)6] + 3 NaCl
[V(CO)6]− + H+ → H[V(CO)6] → ½ H2 + V(CO)6
In the aqueous phase, nickel or cobalt salts can be reduced, for example by sodium dithionite. In the presence of carbon monoxide, cobalt salts are quantitatively converted to the tetracarbonylcobalt(−1) anion:
2 Co2+ + 3 S2O42− + 12 OH− + 8 CO → 2 [Co(CO)4]− + 6 SO32− + 6 H2O
Some metal carbonyls are prepared using CO directly as the reducing agent. In this way, Hieber and Fuchs first prepared dirhenium decacarbonyl from the oxide:
Re2O7 + 17 CO → Re2(CO)10 + 7 CO2
If metal oxides are used carbon dioxide is formed as a reaction product. In the reduction of metal chlorides with carbon monoxide phosgene is formed, as in the preparation of osmium carbonyl chloride from the chloride salts. Carbon monoxide is also suitable for the reduction of sulfides, where carbonyl sulfide is the byproduct.
Photolysis and thermolysis
Photolysis or thermolysis of mononuclear carbonyls generates di- and polymetallic carbonyls such as diiron nonacarbonyl (Fe2(CO)9). On further heating, the products decompose eventually into the metal and carbon monoxide.
2 Fe(CO)5 → Fe2(CO)9 + CO
The thermal decomposition of triosmium dodecacarbonyl (Os3(CO)12) provides higher-nuclear osmium carbonyl clusters such as Os4(CO)13, Os6(CO)18 up to Os8(CO)23.
Mixed ligand carbonyls of ruthenium, osmium, rhodium, and iridium are often generated by abstraction of CO from solvents such as dimethylformamide (DMF) and 2-methoxyethanol. Typical is the synthesis of IrCl(CO)(PPh3)2 from the reaction of iridium(III) chloride and triphenylphosphine in boiling DMF solution.
Salt metathesis
Salt metathesis reaction of salts such as KCo(CO)4 with [Ru(CO)3Cl2]2 leads selectively to mixed-metal carbonyls such as RuCo2(CO)11.
4 KCo(CO)4 + [Ru(CO)3Cl2]2 → 2 RuCo2(CO)11 + 4 KCl + 11 CO
Metal carbonyl cations and carbonylates
The synthesis of ionic carbonyl complexes is possible by oxidation or reduction of the neutral complexes. Anionic metal carbonylates can be obtained for example by reduction of dinuclear complexes with sodium. A familiar example is the sodium salt of iron tetracarbonylate (Na2Fe(CO)4, Collman's reagent), which is used in organic synthesis.
The cationic hexacarbonyl salts of manganese, technetium and rhenium can be prepared from the carbonyl halides under carbon monoxide pressure by reaction with a Lewis acid.
Mn(CO)5Cl + AlCl3 + CO → [Mn(CO)6]+[AlCl4]−
Working in strong acids made it possible to prepare gold carbonyl cations such as [Au(CO)2]+, which is used as a catalyst for the carbonylation of alkenes. The cationic platinum carbonyl complex [Pt(CO)4]2+ can be prepared by working in so-called superacids such as antimony pentafluoride. Although CO is generally considered a ligand for low-valent metal ions, the tetravalent iron complex [Cp*2Fe]2+ (a 16-valence-electron complex) quantitatively binds CO to give the diamagnetic Fe(IV) carbonyl [Cp*2FeCO]2+ (an 18-valence-electron complex).
Reactions
Metal carbonyls are important precursors for the synthesis of other organometallic complexes. Common reactions are the substitution of carbon monoxide by other ligands, the oxidation or reduction reactions of the metal center, and reactions at the carbon monoxide ligand.
CO substitution
The substitution of CO ligands can be induced thermally or photochemically by donor ligands. The range of ligands is large, and includes phosphines, cyanide (CN−), nitrogen donors, and even ethers, especially chelating ones. Alkenes, especially dienes, are effective ligands that afford synthetically useful derivatives. Substitution of 18-electron complexes generally follows a dissociative mechanism, involving 16-electron intermediates.
Substitution proceeds via a dissociative mechanism:
M(CO)n → M(CO)n−1 + CO
M(CO)n−1 + L → M(CO)n−1L
The dissociation energy is 105 kJ/mol for nickel tetracarbonyl and 155 kJ/mol for chromium hexacarbonyl.
Substitution in 17-electron complexes, which are rare, proceeds via associative mechanisms with a 19-electron intermediate.
M(CO)n + L → M(CO)nL
M(CO)nL → M(CO)n−1L + CO
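The kinetic fingerprints of the two pathways differ: under a steady-state treatment, the dissociative rate saturates at high entering-ligand concentration, while the associative rate stays first order in the ligand. The sketch below (all rate constants and concentrations are made-up illustrative numbers) shows this:

```python
# Steady-state dissociative rate: k1*k2*[M][L] / (k_1r*[CO] + k2*[L]),
# which tends to k1*[M] at high [L]; associative rate: k_a*[M][L].
k1, k_1r, k2, k_a = 1.0e-3, 1.0e2, 1.0e2, 1.0e2   # hypothetical constants
M, CO = 0.01, 1.0e-4                              # hypothetical concentrations

for L in (1e-4, 1e-3, 1e-2, 1e-1):
    dissoc = k1 * k2 * M * L / (k_1r * CO + k2 * L)
    assoc = k_a * M * L
    print(f"[L]={L:.0e}  dissociative={dissoc:.2e}  associative={assoc:.2e}")
# The dissociative column levels off near k1*[M] = 1e-5 as [L] grows.
```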
The rate of substitution in 18-electron complexes is sometimes accelerated by catalytic amounts of oxidants, via electron transfer.
Reduction
Metal carbonyls react with reducing agents such as metallic sodium or sodium amalgam to give carbonylmetalate (or carbonylate) anions:
Mn2(CO)10 + 2 Na → 2 Na[Mn(CO)5]
For iron pentacarbonyl, one obtains the tetracarbonylferrate with loss of CO:
Fe(CO)5 + 2 Na → Na2[Fe(CO)4] + CO
Mercury can insert into the metal–metal bonds of some polynuclear metal carbonyls:
Co2(CO)8 + Hg → (CO)4Co−Hg−Co(CO)4
Nucleophilic attack at CO
The CO ligand is often susceptible to attack by nucleophiles. For example, trimethylamine oxide and potassium bis(trimethylsilyl)amide convert CO ligands to CO2 and CN−, respectively. In the "Hieber base reaction", hydroxide ion attacks the CO ligand to give a metallacarboxylic acid, followed by the release of carbon dioxide and the formation of metal hydrides or carbonylmetalates. A well-studied example of this nucleophilic addition is the conversion of iron pentacarbonyl to hydridoiron tetracarbonyl anion:
Fe(CO)5 + NaOH → Na[Fe(CO)4CO2H]
Na[Fe(CO)4COOH] + NaOH → Na[HFe(CO)4] + NaHCO3
Hydride reagents also attack CO ligands, especially in cationic metal complexes, to give the formyl derivative:
[Re(CO)6]+ + H− → Re(CO)5CHO
Organolithium reagents add with metal carbonyls to acylmetal carbonyl anions. O-Alkylation of these anions, such as with Meerwein salts, affords Fischer carbenes.
With electrophiles
Despite being in low formal oxidation states, metal carbonyls are relatively unreactive toward many electrophiles. For example, they resist attack by alkylating agents, mild acids, and mild oxidizing agents. Most metal carbonyls do undergo halogenation. Iron pentacarbonyl, for example, forms ferrous carbonyl halides:
Fe(CO)5 + X2 → Fe(CO)4X2 + CO
Metal–metal bonds are cleaved by halogens. Depending on the electron-counting scheme used, this can be regarded as an oxidation of the metal atoms:
Mn2(CO)10 + Cl2 → 2 Mn(CO)5Cl
Compounds
Most metal carbonyl complexes contain a mixture of ligands. Examples include the historically important IrCl(CO)(P(C6H5)3)2 and the antiknock agent (CH3C5H4)Mn(CO)3. The parent compounds for many of these mixed ligand complexes are the binary carbonyls, those species of the formula [Mx(CO)n]z, many of which are commercially available. The formulae of many metal carbonyls can be inferred from the 18-electron rule.
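The electron bookkeeping behind that inference can be sketched in a few lines (a toy illustration of ours; each CO is counted as a two-electron donor):

```python
# Predicted CO count for a neutral, mononuclear homoleptic carbonyl:
# (18 - metal valence electrons) / 2.
valence = {"Cr": 6, "Fe": 8, "Ni": 10}   # group valence-electron counts

for metal, v in valence.items():
    n_co, remainder = divmod(18 - v, 2)
    assert remainder == 0                # even-electron metals close the shell
    print(f"{metal}(CO){n_co}")          # Cr(CO)6, Fe(CO)5, Ni(CO)4
```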
Charge-neutral binary metal carbonyls
Group 2 elements calcium, strontium, and barium can all form octacarbonyl complexes M(CO)8 (M = Ca, Sr, Ba). The compounds were characterized in cryogenic matrices by vibrational spectroscopy and in gas phase by mass spectrometry.
Group 4 elements with 4 valence electrons are expected to form heptacarbonyls; while these are extremely rare, substituted derivatives of Ti(CO)7 are known.
Group 5 elements, with 5 valence electrons, are again subject to steric effects that prevent the formation of M–M bonded species such as V2(CO)12, which is unknown. The 17-VE V(CO)6 is however well known.
Group 6 elements with 6 valence electrons form hexacarbonyls Cr(CO)6, Mo(CO)6, W(CO)6, and Sg(CO)6. Group 6 elements (as well as group 7) are also well known for exhibiting the cis effect (the labilization of CO in the cis position) in organometallic synthesis.
Group 7 elements with 7 valence electrons form pentacarbonyl dimers Mn2(CO)10, Tc2(CO)10, and Re2(CO)10.
Group 8 elements with 8 valence electrons form pentacarbonyls Fe(CO)5, Ru(CO)5 and Os(CO)5. The heavier two members are unstable, tending to decarbonylate to give Ru3(CO)12, and Os3(CO)12. The two other principal iron carbonyls are Fe3(CO)12 and Fe2(CO)9.
Group 9 elements with 9 valence electrons are expected to form tetracarbonyl dimers M2(CO)8. In fact the cobalt derivative of this octacarbonyl is the only stable member, but all three tetramers are well known: Co4(CO)12, Rh4(CO)12, and Ir4(CO)12; the hexanuclear Rh6(CO)16 is also well characterized. Co2(CO)8, unlike the majority of the other 18-VE transition metal carbonyls, is sensitive to oxygen.
Group 10 elements with 10 valence electrons form tetracarbonyls such as Ni(CO)4. Curiously Pd(CO)4 and Pt(CO)4 are not stable.
Anionic binary metal carbonyls
Group 3 elements scandium and yttrium, as well as lanthanum, form the 20-electron monoanions [Sc(CO)8]−, [Y(CO)8]−, and [La(CO)8]−.
Group 4 elements as dianions resemble neutral group 6 derivatives: [Ti(CO)6]2−.
Group 5 elements as monoanions resemble again neutral group 6 derivatives: [V(CO)6]−.
Group 7 elements as monoanions resemble neutral group 8 derivatives: [Mn(CO)5]−, [Tc(CO)5]−, [Re(CO)5]−.
Group 8 elements as dianions resemble neutral group 10 derivatives: [Fe(CO)4]2−, [Ru(CO)4]2−, [Os(CO)4]2−. Condensed derivatives are also known.
Group 9 elements as monoanions resemble neutral group 10 metal carbonyl. [Co(CO)4]− is the best studied member.
Large anionic clusters of nickel, palladium, and platinum are also well known. Many metal carbonyl anions can be protonated to give metal carbonyl hydrides.
Cationic binary metal carbonyls
Group 2 elements form [M(CO)8]+ (M = Ca, Sr, Ba), characterized in gas phase by mass spectrometry and vibrational spectroscopy.
Group 3 elements form [Sc(CO)7]+ and [Y(CO)8]+ in gas phase.
Group 7 elements as monocations resemble neutral group 6 derivative [M(CO)6]+ (M = Mn, Tc, Re).
Group 8 elements as dications also resemble neutral group 6 derivatives [M(CO)6]2+ (M = Fe, Ru, Os).
Nonclassical carbonyl complexes
Nonclassical describes those carbonyl complexes where νCO is higher than that for free carbon monoxide. In nonclassical CO complexes, the C-O distance is shorter than free CO (113.7 pm). The structure of [Fe(CO)6]2+, with dC-O = 112.9 pm, illustrates this effect. These complexes are usually cationic, sometimes dicationic.
Applications
Metallurgical uses
Metal carbonyls are used in several industrial processes. Perhaps the earliest application was the extraction and purification of nickel via nickel tetracarbonyl by the Mond process (see also carbonyl metallurgy).
By a similar process, carbonyl iron, a highly pure metal powder, is prepared by thermal decomposition of iron pentacarbonyl. Carbonyl iron is used, inter alia, for the preparation of inductors and pigments, as a dietary supplement, in the production of radar-absorbing materials in stealth technology, and in thermal spraying.
Catalysis
Metal carbonyls are used in a number of industrially important carbonylation reactions. In the oxo process, an alkene, hydrogen gas, and carbon monoxide react together with a catalyst (such as dicobalt octacarbonyl) to give aldehydes. Illustrative is the production of butyraldehyde from propylene:
CH3CH=CH2 + H2 + CO → CH3CH2CH2CHO
Butyraldehyde is converted on an industrial scale to 2-ethylhexanol, a precursor to PVC plasticizers, by aldol condensation, followed by hydrogenation of the resulting hydroxyaldehyde. The "oxo aldehydes" resulting from hydroformylation are used for large-scale synthesis of fatty alcohols, which are precursors to detergents. The hydroformylation is a reaction with high atom economy, especially if the reaction proceeds with high regioselectivity.
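The atom-economy claim is easy to verify arithmetically; the sketch below (atomic masses hard-coded by us) checks that every atom of propylene, H2, and CO ends up in butyraldehyde:

```python
masses = {"C": 12.011, "H": 1.008, "O": 15.999}

def mw(formula):
    """Molecular weight from an element -> count mapping."""
    return sum(masses[el] * n for el, n in formula.items())

reactants = mw({"C": 3, "H": 6}) + mw({"H": 2}) + mw({"C": 1, "O": 1})
product = mw({"C": 4, "H": 8, "O": 1})          # butyraldehyde, C4H8O

print(f"atom economy = {product / reactants:.1%}")   # 100.0%
```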
Another important reaction catalyzed by metal carbonyls is hydrocarboxylation, illustrated here by the synthesis of acrylic acid and acrylic acid esters:

HC≡CH + CO + H2O → CH2=CHCO2H
HC≡CH + CO + ROH → CH2=CHCO2R
The cyclization of acetylene to cyclooctatetraene also uses metal carbonyl catalysts:

4 HC≡CH → C8H8
In the Monsanto and Cativa processes, acetic acid is produced from methanol, carbon monoxide, and water using hydrogen iodide as well as rhodium and iridium carbonyl catalysts, respectively. Related carbonylation reactions afford acetic anhydride.
CO-releasing molecules (CO-RMs)
Carbon monoxide-releasing molecules are metal carbonyl complexes that are being developed as potential drugs to release CO. At low concentrations, CO functions as a vasodilator and an anti-inflammatory agent. CO-RMs have been conceived as a pharmacological strategic approach to carry and deliver controlled amounts of CO to tissues and organs.
Related compounds
Many ligands are known to form homoleptic and mixed ligand complexes that are analogous to the metal carbonyls.
Nitrosyl complexes
Metal nitrosyls, compounds featuring NO ligands, are numerous. In contrast to metal carbonyls, however, homoleptic metal nitrosyls are rare. NO is a stronger π-acceptor than CO. Well known nitrosyl carbonyls include CoNO(CO)3 and Fe(NO)2(CO)2, which are analogues of Ni(CO)4.
Thiocarbonyl complexes
Complexes containing CS are known but uncommon. The rarity of such complexes is partly attributable to the fact that the obvious source material, carbon monosulfide, is unstable. Thus, the synthesis of thiocarbonyl complexes requires indirect routes, such as the reaction of disodium tetracarbonylferrate with thiophosgene:
Na2Fe(CO)4 + CSCl2 → Fe(CO)4CS + 2 NaCl
Complexes of CSe and CTe have been characterized.
Isocyanide complexes
Isocyanides also form extensive families of complexes that are related to the metal carbonyls. Typical isocyanide ligands are methyl isocyanide and t-butyl isocyanide (Me3CNC). A special case is CF3NC, an unstable molecule that forms stable complexes whose behavior closely parallels that of the metal carbonyls.
Toxicology
The toxicity of metal carbonyls arises from the toxicity of carbon monoxide and of the metal; because of the volatility and instability of the complexes, any inherent toxicity of the metal is generally made much more severe by the ease of exposure. Exposure occurs by inhalation, or for liquid metal carbonyls by ingestion or, owing to their good fat solubility, by skin resorption. Most clinical experience was gained from toxicological poisonings with nickel tetracarbonyl and iron pentacarbonyl, owing to their use in industry. Nickel tetracarbonyl is considered one of the strongest inhalation poisons.
Inhalation of nickel tetracarbonyl causes acute non-specific symptoms similar to a carbon monoxide poisoning, such as nausea, cough, headache, fever, and dizziness. After some time, severe pulmonary symptoms such as cough, tachycardia, and cyanosis, or problems in the gastrointestinal tract occur. In addition to pathological alterations of the lung, such as by metalation of the alveoli, damages are observed in the brain, liver, kidneys, adrenal glands, and spleen. A metal carbonyl poisoning often necessitates a lengthy recovery.
Chronic exposure by inhalation of low concentrations of nickel tetracarbonyl can cause neurological symptoms such as insomnia, headaches, dizziness and memory loss. Nickel tetracarbonyl is considered carcinogenic, but it can take 20 to 30 years from the start of exposure to the clinical manifestation of cancer.
History
Initial experiments on the reaction of carbon monoxide with metals were carried out by Justus von Liebig in 1834. By passing carbon monoxide over molten potassium he prepared a substance having the empirical formula KCO, which he called Kohlenoxidkalium. As demonstrated later, the compound was not a carbonyl, but the potassium salt of benzenehexol (K6C6O6) and the potassium salt of acetylenediol (K2C2O2).
The synthesis of the first true heteroleptic metal carbonyl complex was performed by Paul Schützenberger in 1868 by passing chlorine and carbon monoxide over platinum black, where dicarbonyldichloroplatinum (Pt(CO)2Cl2) was formed.
Ludwig Mond, one of the founders of Imperial Chemical Industries, investigated in the 1890s with Carl Langer and Friedrich Quincke various processes for the recovery of chlorine which was lost in the Solvay process by nickel metals, oxides, and salts. As part of their experiments the group treated nickel with carbon monoxide. They found that the resulting gas colored the flame of a burner greenish-yellow; when heated in a glass tube it formed a nickel mirror. The gas could be condensed to a colorless, water-clear liquid with a boiling point of 43 °C. Thus, Mond and his coworkers had discovered the first pure, homoleptic metal carbonyl, nickel tetracarbonyl (Ni(CO)4). The unusually high volatility of the metal compound nickel tetracarbonyl led Kelvin to the statement that Mond had "given wings to the heavy metals".
The following year, Mond and Marcellin Berthelot independently discovered iron pentacarbonyl, which is produced by a procedure similar to that for nickel tetracarbonyl. Mond recognized the economic potential of this class of compounds, which he used commercially in the Mond process, and financed more research on related compounds. Heinrich Hirtz and his colleague M. Dalton Cowap synthesized metal carbonyls of cobalt, molybdenum, and ruthenium, as well as diiron nonacarbonyl. In 1906 James Dewar and H. O. Jones were able to determine the structure of diiron nonacarbonyl, which is produced from iron pentacarbonyl by the action of sunlight. After Mond's death in 1909, the chemistry of metal carbonyls fell into oblivion for several years. In 1924 BASF started the industrial production of iron pentacarbonyl by a process developed by Alwin Mittasch. The iron pentacarbonyl was used for the production of high-purity iron, so-called carbonyl iron, and of iron oxide pigment. Not until 1927 did A. Job and A. Cassal succeed in the preparation of chromium hexacarbonyl and tungsten hexacarbonyl, the first synthesis of other homoleptic metal carbonyls.
In the years following 1928, Walter Hieber played a decisive role in the development of metal carbonyl chemistry. He systematically investigated and discovered, among other things, the Hieber base reaction, the first known route to metal carbonyl hydrides, and synthetic pathways leading to metal carbonyls such as dirhenium decacarbonyl. Hieber, who from 1934 was Director of the Institute of Inorganic Chemistry at the Technical University Munich, published 249 papers on metal carbonyl chemistry over four decades.
Also in the 1930s, Walter Reppe, an industrial chemist and later board member of BASF, discovered a number of homogeneous catalytic processes, such as hydrocarboxylation, in which olefins or alkynes react with carbon monoxide and water to form products such as unsaturated acids and their derivatives. In these reactions, for example, nickel tetracarbonyl or cobalt carbonyls act as catalysts. Reppe also discovered the cyclotrimerization and tetramerization of acetylene and its derivatives to benzene and benzene derivatives with metal carbonyls as catalysts. In the 1960s BASF built a production facility for acrylic acid by the Reppe process, which was superseded only in 1996 by more modern methods based on the catalytic oxidation of propylene.
For the rational design of new complexes the concept of the isolobal analogy has been found useful. Roald Hoffmann was awarded the Nobel Prize in Chemistry for the development of the concept, which describes metal carbonyl fragments M(CO)n as parts of octahedral building blocks, in analogy to the tetrahedral CH3–, CH2– or CH– fragments in organic chemistry. For example, in terms of the isolobal analogy, dimanganese decacarbonyl is formed from two d7 Mn(CO)5 fragments, which are isolobal to the methyl radical CH3•. In analogy to how methyl radicals combine to form ethane, these can combine to give dimanganese decacarbonyl. The presence of isolobal analog fragments does not mean that the desired structures can be synthesized; in his Nobel Prize lecture Hoffmann emphasized that the isolobal analogy is a useful but simple model that in some cases does not lead to success.
The economic benefits of metal-catalysed carbonylations, such as Reppe chemistry and hydroformylation, led to growth of the area. Metal carbonyl compounds were discovered in the active sites of three naturally occurring enzymes.
See also
Alkaline earth octacarbonyl complex
References
External links
metal carbonyls at Louisiana State University
Organometallic chemistry
Transition metals
Carbon monoxide | Metal carbonyl | [
"Chemistry"
] | 8,245 | [
"Organometallic chemistry"
] |
3,072,928 | https://en.wikipedia.org/wiki/Reciprocal%20liking | Reciprocal liking, also known as reciprocity of attraction, is the act of a person feeling an attraction to someone only upon learning or becoming aware of that person's attraction to themselves. Reciprocal liking has a significant impact on human attraction and the formation of relationships. People that reciprocally have a liking for each other typically initiate or develop a friendship or romantic relationship. Feelings of admiration, affection, love, and respect are characteristics for reciprocal liking between the two individuals. When there is reciprocal liking there is strong mutual attraction or strong mutual liking, but with others there is not. The feelings of warmth and intimacy also play a role. The consideration and desire to spend time with one another is another strong indicator for reciprocal liking.
Early studies
Studies in psychology show that people tend to like the people that like them. For example, in an early psychological study the participants subtly found out that a stranger liked them. Elliot Aronson and Phillip Worchel conducted the study, which required pairs of participants to have a simple conversation with one another. After the conversation, they privately rated how much they liked their partners. However, one of the individuals in each of the pairs was not actually part of the experiment, but instead was someone working with the researchers, acting as if they were a participant. Each conversation in the study occurred between a real participant and a trained actor. After their conversation, the participants were asked to write a brief statement about what they thought of their partner. After they had written these statements, the experimenters allowed them to read what their respective partners had written. Once the participants had read that their partners liked them, they then reported liking their partners more than when they had read that their partners did not like them.
Attraction and relationships
Attraction is a process in which two people interact: one person transmits verbal, visual, or other stimuli, and the other responds more or less positively to those stimuli. Reciprocal liking can affect our choice of whom we have relationships with, including romantic, sexual, and platonic ones. According to the reciprocity principle, people tend to favor potential partners who return their interest. Experts have claimed that when people select potential mates, they look for someone whose status, physical attractiveness, and personal qualities are about the same as their own. According to one theory, a person will select a potential partner who will enhance his or her self-image or persona. Researchers have identified a set of flirting behaviors employed by both sexes to attract each other. Conversations started by romantic attraction are typically light and include laughter. Years of research have established many principles of attraction, one example being an experiment by Aron and his colleagues, conducted in 1989, which found that most people repeatedly mentioned reciprocal liking, personality, and appearance as factors that influenced them to fall in love. People are naturally more attracted to those who express positive emotions towards them, and simply knowing that someone is attracted to them can induce this reciprocal interest.
Reciprocal liking can be indicated non-verbally, such as through body language (for example, maintaining eye contact or leaning forward). Reciprocal liking and the desirability of a person appear to be the most influential factors when falling in love. Aron et al. (1989) reported that in their sample of Canadian college students who had recently fallen in love, approximately 90% mentioned some indicator of thinking that the other person was attracted to them, with maintained eye contact being the most common clue. It has also been shown that people often flatter and praise those whose favour they are trying to win, and people report that they even modify their self-presentation to better fit the expectations or preferences of the person to whom they are attracted, or from whom they are seeking attention or affection.
Reciprocal liking has been observed in schools, and amongst the younger generation in general. For example, children evaluate their peers' behaviours, relationships, and interactions and then construct their own interpretations. Students tend to choose friends who are similar to themselves, meaning those who share the same likes and interests. There are two psychological reasons why this seems to happen: social pressure, and the set of assumptions people tend to make about those who are similar to themselves. Students are often socially pressured to form friendships depending on a person's age, gender, social class, or racial-ethnic background. Parents and other adults involved in a child's life can also have a large influence on the friendships that children choose, because they teach children to select "appropriate" friends who will not pass on bad morals or inappropriate traits.
Self-esteem
A person's self-esteem also has a significant impact on the frequency and mannerisms of reciprocal liking. While those with positive self-esteem respond to reciprocal liking, those with negative self-esteem seem to prefer working with people who are critical of them. Nathaniel Branden stated that "self-esteem creates a set of implicit expectations about what is possible and appropriate to us", and further said that "one's reality confirms and strengthens one's original belief". This explains why self-esteem plays a role in reciprocal liking.
Cultural influences
People from different cultures can experience and understand the effects of reciprocal liking differently, since people take in verbal or non-verbal communication differently depending on their cultural backgrounds. The distinction between high-context cultures (HCC) and low-context cultures (LCC) can affect how people perceive one another, depending on a number of factors to do with how they grew up. In HCCs, such as China and Korea, people tend to use vague and ambiguous language, while in LCCs people are clear and direct in their communication. If a person from each of these two cultures were conversing, the person from the LCC might conclude that the person from the HCC does not like them because of the ambiguous language being used, and, following the rules of reciprocal liking, they would return this dislike or disinterest.
Culture plays a particular role in reciprocal liking, and whether a culture emphasizes independence is also an important factor in individuals reciprocally liking each other. In independent cultures, personal fulfillment and emotional intimacy are often principal goals of relationships; for example, love is expected to be the primary basis for two people to get married. The ethic of reciprocity is endorsed by nearly every major religion, and without it human culture could hardly prosper, because people routinely exchange goods, services, and other things with one another. On the other hand, anthropologists and historians have found little sign of romantic love in some cultures, including some non-Western ones.
Social media
Reciprocal liking can also refer to the act of a user liking a social media post, image, or article from another user who initially liked the first user's content, often as a way of returning the gesture.
See also
References
Interpersonal relationships | Reciprocal liking | [
"Biology"
] | 1,462 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
3,072,930 | https://en.wikipedia.org/wiki/Roy%20McWeeny | Roy McWeeny (19 May 1924 – 29 April 2021) was a British academic physicist and chemist.
McWeeny was born in Bradford, Yorkshire in May 1924. His first degree was in physics from the University of Leeds. He then obtained a D.Phil. in mathematical physics and quantum theory under the supervision of Charles Coulson at the Mathematical Institute, University of Oxford.
From 1948 to 1957 he was lecturer in physical chemistry at King's College, University of Durham (King's College is now the University of Newcastle upon Tyne). From 1957 to 1965 he was at the University of Keele rising to Professor of Theoretical Physics and Theoretical Chemistry. From 1966 to 1982 he was Professor of Theoretical Chemistry at the University of Sheffield. In 1982 he moved to the University of Pisa, Italy where he remained an Emeritus Professor until his death.
In 1996 a festschrift volume was published in his honour, containing original papers by 132 scientists from 19 countries. He was awarded the 2006 Spiers Memorial Medal by the Faraday Division of the Royal Society of Chemistry, and the medal lecture, "Quantum chemistry: The first seventy years", was published in Faraday Discussions. He served on the editorial boards of Molecular Physics, Chemical Physics Letters and the International Journal of Quantum Chemistry.
He wrote many scientific papers and seven books, of which perhaps the best known are Coulson's Valence in 1979, an update of the famous book by Charles Coulson originally written in 1951, and the two editions of Methods of Molecular Quantum Mechanics (the first edition with B. T. Sutcliffe in 1969 and the second edition alone in 1989). He wrote several chapters in the three volumes of the Handbook of Molecular Physics and Quantum Chemistry. In 1963 he wrote Symmetry: an introduction to group theory and its applications.
From 2002 he edited an open-access series of Basic Books in Science, several of which he authored himself. He was also an elected member of the International Academy of Quantum Molecular Science and the European Academy of Arts, Sciences and the Humanities.
McWeeny died in Pisa, Italy in April 2021 at the age of 96.
References
His International Academy of Quantum Molecular Science page
External links
Interview at Early Ideas in the History of Quantum Chemistry.
Involvement with Learning Development Institute
His Learning Development Institute resume
International Journal of Quantum Chemistry, (1996), Volume 60, 3. His autobiography.
1924 births
2021 deaths
Academics of Durham University
Academics of Keele University
Academics of the University of Sheffield
Alumni of University College, Oxford
Alumni of the University of Leeds
British physical chemists
Members of the International Academy of Quantum Molecular Science
People from Bradford
Theoretical chemists | Roy McWeeny | [
"Chemistry"
] | 536 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
3,073,029 | https://en.wikipedia.org/wiki/Agitator%20%28device%29 | An agitator is a device or mechanism to put something into motion by shaking or stirring. There are several types of agitation machines, including washing machine agitators (which rotate back and forth) and magnetic agitators (which contain a magnetic bar rotating in a magnetic field). Agitators can come in many sizes and varieties, depending on the application.
In general, agitators usually consist of an impeller and a shaft. An impeller is a rotor located within a tube or conduit and attached to the shaft; it raises the pressure of the fluid in order to drive its flow. Modern industrial agitators incorporate process control to maintain better control over the mixing process.
Washing machine agitator
In a top load washing machine the agitator projects from the bottom of the wash basket and creates the wash action by rotating back and forth, rolling garments from the top of the load, down to the bottom, then back up again.
There are several types of agitators with the most common being the "straight-vane" and "dual-action" agitators. The "straight-vane" is a one-part agitator with bottom and side fins that usually turns back and forth. The Dual-action is a two-part agitator that has bottom washer fins that move back and forth and a spiral top that rotates clockwise to help guide the clothes to the bottom washer fins.
The modern dual-action agitator was introduced in Kenmore washing machines in the 1980s and remains in use today. These agitators are known by the company as dual-rollover and triple-rollover action agitators.
Magnetic agitator
This is a device formed by a metallic bar (called the agitation bar) which is normally covered by a plastic layer, and a sheet that has underneath it a rotatory magnet or a series of electromagnets arranged in a circular form to create a magnetic rotatory field. Commonly, the sheet has an arrangement of electric resistances that can heat some chemical solutions.
During the operation of a typical magnetic agitator, the agitator bar is moved inside a container such as to dissolve a substance in a liquid. The container must be placed on the sheet, so that the magnetic field influences the agitation bar and makes it rotate. This allows it to mix different substances at high speeds.
Agitation rack
An agitation rack is a special form of agitator used to store platelets. It is composed of a series of clasps attached to motorised bars, that rock the specimens of platelets gently back-and-forth. This prevents them from becoming activated and adhering to one another, which cannot be reversed by any current means.
See also
Impeller
Tedder
Mixing (disambiguation)
Mixing paddle
References
Mechanical engineering
Fluid dynamics
Laundry washing equipment | Agitator (device) | [
"Physics",
"Chemistry",
"Engineering"
] | 636 | [
"Applied and interdisciplinary physics",
"Chemical engineering",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
3,073,134 | https://en.wikipedia.org/wiki/Orthometric%20height | The orthometric height (symbol H) is the vertical distance along the plumb line from a point of interest to a reference surface known as the geoid, the vertical datum that approximates mean sea level. Orthometric height is one of the scientific formalizations of a layman's "height above sea level", along with other types of heights in Geodesy.
In the US, the current NAVD88 datum is tied to a defined elevation at one point rather than to any location's exact mean sea level. Orthometric heights are usually used in the US for engineering work, although dynamic height may be chosen for large-scale hydrological purposes. Heights for measured points are shown on National Geodetic Survey data sheets, data that was gathered over many decades by precise spirit leveling over thousands of miles.
Alternatives to orthometric height include dynamic height and normal height, and various countries may choose to operate with those definitions instead of orthometric. They may also adopt slightly different but similar definitions for their reference surface.
Since gravity is not constant over large areas the orthometric height of a level surface (equipotential) other than the reference surface is not constant, and orthometric heights need to be corrected for that effect. For example, gravity is 0.1% stronger in the northern United States than in the southern, so a level surface that has an orthometric height of 1000 meters in one place will be 1001 meters high in other places. In fact, dynamic height is the most appropriate height measure when working with the level of water over a large geographic area.
Orthometric heights may be obtained from differential leveling height differences by correcting for gravity variations.
Practical applications must use a model rather than measurements to calculate the change in gravitational potential versus depth in the earth, since the geoid is below most of the land surface (e.g., the Helmert orthometric heights of NAVD88).
GPS measurements give earth-centered coordinates, usually displayed as ellipsoidal height h above the reference ellipsoid. This can be related to the orthometric height H above the geoid by subtracting the geoid height N: H = h − N.
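As a brief hedged sketch (not part of any NGS workflow described here), the conversion is a simple subtraction once a geoid model supplies N at the point of interest; the geoid_height function below is a hypothetical stand-in for interpolation into a published geoid grid, and the numbers are invented:

def geoid_height(lat_deg, lon_deg):
    # Hypothetical lookup: a real application would interpolate a
    # published geoid model grid; the value here is only an example.
    return -33.5  # geoid undulation N in meters (assumed)

def orthometric_height(h_ellipsoidal, lat_deg, lon_deg):
    # H = h - N: ellipsoidal height minus geoid height.
    return h_ellipsoidal - geoid_height(lat_deg, lon_deg)

print(orthometric_height(250.0, 39.0, -95.0))  # 283.5 m above the geoid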
The geoid determination requires accurate gravity data for that location; in the US, the NGS has undertaken the GRAV-D ten-year program to obtain such data with a goal of releasing a new geoid model as part of the Datum of 2022.
See also
Physical geodesy
References
Surveying
Geodesy
Vertical position | Orthometric height | [
"Physics",
"Mathematics",
"Engineering"
] | 523 | [
"Vertical position",
"Physical quantities",
"Distance",
"Applied mathematics",
"Surveying",
"Civil engineering",
"Geodesy"
] |
3,073,172 | https://en.wikipedia.org/wiki/363%20%28number%29 | 363 (three hundred [and] sixty-three) is the natural number following 362 and preceding 364.
In mathematics
It is an odd, composite, positive, real integer, composed of a prime (3) and a prime squared (11²).
363 is a deficient number and a perfect totient number.
363 is a palindromic number in bases 3, 10, 11 and 32.
363 is a repdigit (BB) in base 32.
The Mertens function of 363 returns 0.
Any subset of its digits is divisible by three.
363 is the sum of nine consecutive primes (23 + 29 + 31 + 37 + 41 + 43 + 47 + 53 + 59).
363 is the sum of five consecutive powers of 3 (3 + 9 + 27 + 81 + 243).
363 can be expressed as the sum of three squares in four different ways: 11² + 11² + 11², 5² + 7² + 17², 1² + 1² + 19², and 13² + 13² + 5².
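As a quick check (a minimal Python sketch, not from the article's sources), the four representations can be recovered by brute force:

def three_square_reps(n):
    # Enumerate (a, b, c) with a <= b <= c and a^2 + b^2 + c^2 == n.
    reps = []
    a = 1
    while 3 * a * a <= n:
        b = a
        while a * a + 2 * b * b <= n:
            c2 = n - a * a - b * b
            c = int(round(c2 ** 0.5))
            if c >= b and c * c == c2:
                reps.append((a, b, c))
            b += 1
        a += 1
    return reps

print(three_square_reps(363))
# [(1, 1, 19), (5, 7, 17), (5, 13, 13), (11, 11, 11)]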
363 cubits is the solution given to Rhind Mathematical Papyrus question 50 – find the side length of an octagon with the same area as a circle 9 khet in diameter.
References
Integers | 363 (number) | [
"Mathematics"
] | 257 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
3,073,462 | https://en.wikipedia.org/wiki/Elliptic%20surface | In mathematics, an elliptic surface is a surface that has an elliptic fibration, in other words a proper morphism with connected fibers to an algebraic curve such that almost all fibers are smooth curves of genus 1. (Over an algebraically closed field such as the complex numbers, these fibers are elliptic curves, perhaps without a chosen origin.) This is equivalent to the generic fiber being a smooth curve of genus one. This follows from proper base change.
The surface and the base curve are assumed to be non-singular (complex manifolds or regular schemes, depending on the context). The fibers that are not elliptic curves are called the singular fibers and were classified by Kunihiko Kodaira. Both elliptic and singular fibers are important in string theory, especially in F-theory.
Elliptic surfaces form a large class of surfaces that contains many of the interesting examples of surfaces, and are relatively well understood in the theories of complex manifolds and smooth 4-manifolds. They are similar to (have analogies with, that is), elliptic curves over number fields.
Examples
The product of any elliptic curve with any curve is an elliptic surface (with no singular fibers).
All surfaces of Kodaira dimension 1 are elliptic surfaces.
Every complex Enriques surface is elliptic, and has an elliptic fibration over the projective line.
Kodaira surfaces
Dolgachev surfaces
Shioda modular surfaces
Kodaira's table of singular fibers
Most of the fibers of an elliptic fibration are (non-singular) elliptic curves. The remaining fibers are called singular fibers: there are a finite number of them, and each one consists of a union of rational curves, possibly with singularities or non-zero multiplicities (so the fibers may be non-reduced schemes). Kodaira and Néron independently classified the possible fibers, and Tate's algorithm can be used to find the type of the fibers of an elliptic curve over a number field.
The following table lists the possible fibers of a minimal elliptic fibration. ("Minimal" means roughly one that cannot be factored through a "smaller" one; precisely, the singular fibers should contain no smooth rational curves with self-intersection number −1.) It gives:
Kodaira's symbol for the fiber,
André Néron's symbol for the fiber,
The number of irreducible components of the fiber (all rational except for type I₀)
The intersection matrix of the components. This is either a 1×1 zero matrix, or an affine Cartan matrix, whose Dynkin diagram is given.
The multiplicities of each fiber are indicated in the Dynkin diagram.
This table can be found as follows. Geometric arguments show that the intersection matrix of the components of the fiber must be negative semidefinite, connected, symmetric, and have no diagonal entries equal to −1 (by minimality). Such a matrix must be 0 or a multiple of the Cartan matrix of an affine Dynkin diagram of type ADE.
The intersection matrix determines the fiber type with three exceptions:
If the intersection matrix is 0 the fiber can be either an elliptic curve (type I₀), or have a double point (type I₁), or a cusp (type II).
If the intersection matrix is affine A₁, there are 2 components with intersection multiplicity 2. They can meet either at 2 points with order 1 (type I₂), or at one point with order 2 (type III).
If the intersection matrix is affine A₂, there are 3 components each meeting the other two. They can meet either in pairs at 3 distinct points (type I₃), or all meet at the same point (type IV).
Monodromy
The monodromy around each singular fiber is a well-defined conjugacy class in the group SL(2,Z) of 2 × 2 integer matrices with determinant 1. The monodromy describes the way the first homology group of a smooth fiber (which is isomorphic to Z²) changes as we go around a singular fiber. Representatives for these conjugacy classes are given by Kodaira's classification; for example, the monodromy of a fiber of type Iₙ is conjugate to the unipotent matrix with rows (1, n) and (0, 1).
For singular fibers of type II, III, IV, I₀*, IV*, III*, or II*, the monodromy has finite order in SL(2,Z). This reflects the fact that an elliptic fibration has potential good reduction at such a fiber. That is, after a ramified finite covering of the base curve, the singular fiber can be replaced by a smooth elliptic curve. Which smooth curve appears is described by the j-invariant in the table. Over the complex numbers, the curve with j-invariant 0 is the unique elliptic curve with automorphism group of order 6, and the curve with j-invariant 1728 is the unique elliptic curve with automorphism group of order 4. (All other elliptic curves have automorphism group of order 2.)
For an elliptic fibration with a section, called a Jacobian elliptic fibration, the smooth locus of each fiber has a group structure. For singular fibers, this group structure on the smooth locus is described in the table, assuming for convenience that the base field is the complex numbers. (For a singular fiber with intersection matrix given by an affine Dynkin diagram, the group of components of the smooth locus is isomorphic to the center of the simply connected simple Lie group with the corresponding Dynkin diagram.) Knowing the group structure of the singular fibers is useful for computing the Mordell–Weil group of an elliptic fibration (the group of sections), in particular its torsion subgroup.
Canonical bundle formula
To understand how elliptic surfaces fit into the classification of surfaces, it is important to compute the canonical bundle of a minimal elliptic surface f: X → S. Over the complex numbers, Kodaira proved the following canonical bundle formula:

K_X = f*(K_S + L) + Σ (m_i − 1) D_i

Here the multiple fibers of f (if any) are written as m_iD_i, for an integer m_i at least 2 and a divisor D_i whose coefficients have greatest common divisor equal to 1, and L is some line bundle on the smooth curve S. If S is projective (or equivalently, compact), then the degree of L is determined by the holomorphic Euler characteristics of X and S: deg(L) = χ(X, O_X) − 2χ(S, O_S). The canonical bundle formula implies that K_X is Q-linearly equivalent to the pullback of some Q-divisor on S; it is essential here that the elliptic surface X → S is minimal.
Building on work of Kenji Ueno, Takao Fujita (1986) gave a useful variant of the canonical bundle formula, showing how K_X depends on the variation of the smooth fibers. Namely, there is a Q-linear equivalence

K_X ∼_Q f*(K_S + B_S + M_S)

where the discriminant divisor B_S is an explicit effective Q-divisor on S associated to the singular fibers of f, and the moduli divisor M_S is (1/12) j*O(1), where j: S → P¹ is the function giving the j-invariant of the smooth fibers. (Thus M_S is a Q-linear equivalence class of Q-divisors, using the identification between the divisor class group Cl(S) and the Picard group Pic(S).) In particular, for S projective, the moduli divisor M_S has nonnegative degree, and it has degree zero if and only if the elliptic surface is isotrivial, meaning that all the smooth fibers are isomorphic.
The discriminant divisor in Fujita's formula is defined by

B_S = Σ_p (1 − c(p)) [p],

where c(p) is the log canonical threshold of the fiber of f over the point p. This is an explicit rational number between 0 and 1, depending on the type of singular fiber. Explicitly, the lct is 1 for a smooth fiber or a fiber of type I_m, 1/m for a multiple fiber of multiplicity m, 1/2 for I_m*, 5/6 for II, 3/4 for III, 2/3 for IV, 1/3 for IV*, 1/4 for III*, and 1/6 for II*.
The canonical bundle formula (in Fujita's form) has been generalized by Yujiro Kawamata and others to families of Calabi–Yau varieties of any dimension.
Logarithmic transformations
A logarithmic transformation (of order m with center p) of an elliptic surface or fibration turns a fiber of multiplicity 1 over a point p of the base space into a fiber of multiplicity m. It can be reversed, so fibers of high multiplicity can all be turned into fibers of multiplicity 1, and this can be used to eliminate all multiple fibers.
Logarithmic transformations can be quite violent: they can change the Kodaira dimension, and can turn algebraic surfaces into non-algebraic surfaces.
Example:
Let L be the lattice Z+iZ of C, and let E be the elliptic curve C/L. Then the projection map from E×C to C is an elliptic fibration. We will show how to replace the fiber over 0 with a fiber of multiplicity 2.
There is an automorphism of E×C of order 2 that maps (c,s) to (c+1/2, −s). We let X be the quotient of E×C by this group action. We make X into a fiber space over C by mapping (c,s) to s². We construct an isomorphism from X minus the fiber over 0 to E×C minus the fiber over 0 by mapping (c,s) to (c − log(s)/2πi, s²). (The two fibers over 0 are non-isomorphic elliptic curves, so the fibration X is certainly not isomorphic to the fibration E×C over all of C.)
Then the fibration X has a fiber of multiplicity 2 over 0, and otherwise looks like E×C. We say that X is obtained by applying a logarithmic transformation of order 2 to E×C with center 0.
See also
Enriques–Kodaira classification
Néron minimal model
Notes
References
Complex surfaces
Birational geometry
Algebraic surfaces
Mathematical classification systems | Elliptic surface | [
"Mathematics"
] | 2,110 | [
"nan"
] |
3,073,484 | https://en.wikipedia.org/wiki/Productionisation | Productionisation (Commonwealth English) or productionization (American English) is the process of turning a prototype of a design into a version that can be more easily mass-produced. It is mostly a necessary step in the development of any product, since it is rare that the initial design is free from flaws or construction methods which make it difficult or more expensive to manufacture.
Prototypes are very often constructed by hand, or with more limited tooling. This is done to save costs where the design may not even be subsequently approved for manufacture. Once the go-ahead for a production run is given, the much more costly production tooling can be ordered. At this stage, the design itself may need to be reworked or altered to streamline production. The goal is to reduce costs as much as possible at the assembly stage, since costs will be multiplied by the number of units produced. For example, a prototype might be assembled using nuts and bolts, but in production such fasteners might be replaced by captive nuts or threaded holes built into the parts, making assembly much faster, easier and therefore cheaper.
Sometimes limited runs of a design might be manufactured without full productionisation.
Other examples of productionisation include:
plastic mouldings instead of hand-constructed parts
built-in fasteners
snap-together or machine welded parts instead of using fasteners
custom integrated circuits instead of discrete electronic components
customised IT solutions released into a live environment.
Productionisation is a term that is increasingly prevalent in Software Development. One reason for this is the popularity of agile type development methods, which often focus on building a prototype solution to develop and refine the product to the business requirements. Prior to putting the system into production, the developers need to ensure the system is robust enough for the target environment with regard to aspects such as error handling, stability, usability, scalability and performance. This process of making the prototype ‘production or enterprise grade’ is often referred to as ‘Productionisation’.
References
Industrial design
Product development | Productionisation | [
"Engineering"
] | 403 | [
"Industrial design",
"Design engineering",
"Design"
] |
3,073,557 | https://en.wikipedia.org/wiki/Why%27s%20%28poignant%29%20Guide%20to%20Ruby | why's (poignant) Guide to Ruby, sometimes called w(p)GtR or just "the poignant guide", is an introductory book to the Ruby programming language, written by why the lucky stiff. The book is distributed under the Creative Commons Attribution-ShareAlike license.
The book is unusual among programming books in that it includes much strange humor and many narrative side tracks which are sometimes completely unrelated to the topic. Many motifs have become inside jokes in the Ruby community, such as references to the words "chunky bacon". The book includes many characters which have become popular as well, particularly the cartoon foxes and Trady Blix, a large black feline friend of why's, who acts as a guide to the foxes (and occasionally teaches them some Ruby).
The book is published in HTML and PDF. Chapter three was reprinted in The Best Software Writing I: Selected and Introduced by Joel Spolsky (Apress, 2005).
Contents
About this book
Kon'nichi wa, Ruby
A Quick (and Hopefully Painless) Ride Through Ruby (with Cartoon Foxes): basic introduction to central Ruby concepts
Floating Little Leaves of Code: evaluation and values, hashes and lists
Them What Make the Rules and Them What Live the Dream: case/when, while/until, variable scope, blocks, methods, class definitions, class attributes, objects, modules, introspection in IRB, dup, self, module
Downtown: metaprogramming, regular expressions
When You Wish Upon a Beard: send method, new methods in existing classes
The following chapters are "Expansion Packs":
The Tiger's Vest (with a Basic Introduction to IRB): discusses IRB, the interactive Ruby interpreter.
References
External links
Original Site
Actively maintained fork
3rd-party PDF version: Ruby Inside
Computer programming books
Creative Commons-licensed books
Ruby (programming language)
Books about free software | Why's (poignant) Guide to Ruby | [
"Technology"
] | 390 | [
"Computing stubs",
"Computer book stubs"
] |
3,073,675 | https://en.wikipedia.org/wiki/Push%20Proxy%20Gateway | A Push Proxy Gateway is a component of WAP Gateways that pushes URL notifications to mobile handsets. Notifications typically include MMS, email, IM, ringtone downloads, and new device firmware notifications. Most notifications will have an audible alert to the user of the device. The notification will typically be a text string with a URL link. Note that only a notification is pushed to the device; the device must do something with the notification in order to download or view the content associated with it.
Technical specifications
PUSH to PPG
A push message is sent as an HTTP POST to the Push Proxy Gateway. The POST will be a multipart XML document, with the first part being the PAP (Push Access Protocol) Section and the second part being either a Service Indication or a Service Loading.
+---------------------------------------------+
| HTTP POST | \
+---------------------------------------------+ | WAP
| PAP XML | | PUSH
+---------------------------------------------+ | Flow
| Service Indication or Service Loading XML | /
+---------------------------------------------+
POST
The POST contains at a minimum the URL being posted to (this is not standard across different PPG vendors), and the content type.
An illustrative PPG POST (the request path and gateway host below are hypothetical, since the target URL is vendor-specific):
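POST /pap HTTP/1.1
Host: ppg.example.com
Content-Type: multipart/related; boundary=someboundarymesg; type="application/xml"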
PAP
The PAP XML contains at the minimum, a <pap> element, a <push-message> element, and an <address> element.
An illustrative PAP XML (the push-id, phone number, and gateway hostname below are hypothetical; TYPE=PLMN denotes MSISDN addressing):
--someboundarymesg
Content-Type: application/xml
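<?xml version="1.0"?>
<!DOCTYPE pap PUBLIC "-//WAPFORUM//DTD PAP 1.0//EN"
  "http://www.wapforum.org/DTD/pap_1.0.dtd">
<pap>
  <push-message push-id="example-push-id-1">
    <address address-value="WAPPUSH=+15551234567/TYPE=PLMN@ppg.example.com"/>
    <quality-of-service delivery-method="unconfirmed"/>
  </push-message>
</pap>
--someboundarymesg--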
The important parts of this PAP message are the address value and type. The value is typically an MSISDN, and the type indicates whether to send to an MSISDN (the typical case) or to an IP address. The TYPE is almost always MSISDN, as the Push Initiator (PI) will not typically have the Mobile Station's IP address, which is generally dynamic. In the case of an IP address:
TYPE=USER@a.b.c.d
Additional capability of PAP can be found in the PAP article.
Service Indication
A PUSH Service Indication (SI) contains at a minimum an <si> element and an <indication> element.
An illustrative Service Indication (the URL, si-id, and message text below are hypothetical):
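<?xml version="1.0"?>
<!DOCTYPE si PUBLIC "-//WAPFORUM//DTD SI 1.0//EN"
  "http://www.wapforum.org/DTD/si.dtd">
<si>
  <indication href="http://www.example.com/newmessage"
      si-id="example-si-1" action="signal-medium">
    You have a new message
  </indication>
</si>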
PPG delivery to mobile station
Once a push message is received from the Push Initiator, the PPG has two avenues for delivery. If the IP address of the Mobile Station is known to the PPG, the PPG can deliver directly to the mobile station over an IP bearer. This is known as "Connection Oriented Push". If the IP address of the mobile station is not known to the PPG, the PPG will deliver over an SMS bearer. Delivery over an SMS bearer is known as "Connectionless Push".
Connectionless Push
In Connectionless Push, an SMSC BIND is required for the PPG to deliver its push message to the mobile station. Typically, a PPG will have a local SMS queuing mechanism running locally that it BINDs to, and which in turn BINDs to the carrier's SMSC. This mechanism should allow for queuing in the event of an SMS infrastructure outage, and also provide for message throttling.
Since a WAP Push message can be larger than a single SMS message can contain, the push message may be broken up into multiple SMS messages, as a multipart SMS.
Connection Oriented Push
In Connection Oriented pushes (where the device supports it), an SMSC BIND is not required if the gateway is aware of the handsets IP Address. If the gateway is unable to determine the IP Address of the handset, or is unable to connect to the device, the push notification will be encoded and sent as an SMS.
Connection Oriented Push is used less frequently than Connectionless Push for several reasons including:
Devices while registered to the network, may not have a data session (PDP Context in the GSM world) established.
A separate IP->MSISDN table has to be maintained in Connection Oriented Push.
Typically, the PPG or another part of the gateway has to receive RADIUS or other accounting packets in order to support Connection Oriented Push.
Other PUSH Attributes
Push notifications can be confirmed or unconfirmed. Most carriers use unconfirmed pushes due to the high volume and resource constraints related to confirmed push. This is controlled by setting confirmed in the quality-of-service tag element.
Push notifications can be set to expire if not delivered before a certain time. This is controlled by setting deliver-before-timestamp in the push-message element.
Many other attributes exist and are detailed in the specifications at the Open Mobile Alliance and other sites.
PPG Vendors
PPG vendors include Nokia Siemens Networks, Ericsson, Gemini Mobile Technologies, Openwave, Acision, Huawei, Azetti, Alcatel, WIT Software, ZTE, and the open source Kannel.
See also
PO-TCP
References
OMA WAP Specifications:
Push Message (version of 22 March 2001 - ref WAP-251-PushMessage-20010322-a)
Service Indication (version of 31 July 2001 - ref WAP-167-ServiceInd-20010731-a)
Service Loading (version of 31 July 2001 - ref WAP-168-ServiceLoad-20010731-a)
Mobile technology
Proxy Gateway
Wireless Application Protocol | Push Proxy Gateway | [
"Technology"
] | 1,257 | [
"Wireless networking",
"Wireless Application Protocol",
"nan"
] |
3,074,037 | https://en.wikipedia.org/wiki/ARGUS%20distribution | In physics, the ARGUS distribution, named after the particle physics experiment ARGUS, is the probability distribution of the reconstructed invariant mass of a decayed particle candidate in continuum background.
Definition
The probability density function (pdf) of the ARGUS distribution is:

f(x; χ, c) = (χ³ / (√(2π) Ψ(χ))) · (x/c²) √(1 − x²/c²) · exp{−½ χ² (1 − x²/c²)}

for 0 ≤ x < c. Here χ and c are parameters of the distribution and

Ψ(χ) = Φ(χ) − χ φ(χ) − 1/2,

where Φ and φ are the cumulative distribution and probability density functions of the standard normal distribution, respectively.
Cumulative distribution function
The cumulative distribution function (cdf) of the ARGUS distribution is

F(x) = 1 − Ψ(χ √(1 − x²/c²)) / Ψ(χ).
Parameter estimation
Parameter c is assumed to be known (the kinematic limit of the invariant mass distribution), whereas χ can be estimated from the sample x₁, …, xₙ using the maximum likelihood approach. The estimator is a function of the sample second moment, and is given as a solution to the non-linear equation

1 − 3/χ² + χ φ(χ)/Ψ(χ) = (1/n) Σ_{i=1}^{n} x_i²/c².
The solution exists and is unique, provided that the right-hand side is greater than 0.4; the resulting estimator is consistent and asymptotically normal.
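As a hedged illustration, SciPy provides this distribution as scipy.stats.argus (shape parameter chi, with c entering as the scale); the following minimal sketch draws a sample with known parameters and refits χ while holding c fixed. The parameter values are arbitrary:

import numpy as np
from scipy import stats

c, chi_true = 1.0, 2.5
# Sample from ARGUS(chi = 2.5) with kinematic limit c as the scale.
sample = stats.argus.rvs(chi_true, scale=c, size=10000, random_state=0)

# Maximum likelihood fit of chi, holding loc = 0 and scale = c fixed.
chi_hat, loc, scale = stats.argus.fit(sample, floc=0, fscale=c)
print(chi_hat)  # close to 2.5 for a sample of this size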
Generalized ARGUS distribution
Sometimes a more general form is used to describe a more peaking-like distribution:

f(x; χ, c, p) = (2^(−p) χ^(2(p+1)) / (Γ(p+1) − Γ(p+1, χ²/2))) · (x/c²) (1 − x²/c²)^p · exp{−½ χ² (1 − x²/c²)}, for 0 ≤ x < c,

where Γ(·) is the gamma function, and Γ(·,·) is the upper incomplete gamma function.
Here parameters c, χ, p represent the cutoff, curvature, and power respectively.
The mode is:
The mean is:
where M(·,·,·) is the Kummer's confluent hypergeometric function.
The variance is:
p = 0.5 gives a regular ARGUS, listed above.
References
Further reading
Experimental particle physics
Continuous distributions | ARGUS distribution | [
"Physics"
] | 317 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
3,074,135 | https://en.wikipedia.org/wiki/Orbital%20plane | The orbital plane of a revolving body is the geometric plane in which its orbit lies. Three non-collinear points in space suffice to determine an orbital plane. A common example would be the positions of the centers of a massive body (host) and of an orbiting celestial body at two different times/points of its orbit.
The orbital plane is defined in relation to a reference plane by two parameters: inclination (i) and longitude of the ascending node (Ω).
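As a hedged aside (a minimal sketch with invented example numbers), both parameters can be read off an orbital state vector: the inclination from the specific angular momentum h = r × v, and the longitude of the ascending node from the node vector n = ẑ × h:

import numpy as np

def plane_angles(r, v):
    # Specific angular momentum is normal to the orbital plane.
    h = np.cross(r, v)
    # Inclination: angle between h and the reference z-axis.
    i = np.arccos(h[2] / np.linalg.norm(h))
    # Node vector points toward the ascending node; undefined for
    # exactly equatorial orbits (where n = 0).
    n = np.cross([0.0, 0.0, 1.0], h)
    omega = np.arctan2(n[1], n[0]) % (2.0 * np.pi)
    return i, omega

# Example state vector in km and km/s (arbitrary illustrative values).
r = np.array([7000.0, 0.0, 1000.0])
v = np.array([0.0, 7.5, 1.0])
print(plane_angles(r, v))  # inclination and node angle in radians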
By definition, the reference plane for the Solar System is usually considered to be Earth's orbital plane, which defines the ecliptic, the circular path on the celestial sphere that the Sun appears to follow over the course of a year.
In other cases, for instance for a moon or artificial satellite orbiting another planet, it is convenient to define the inclination of that body's orbit as the angle between its orbital plane and the planet's equatorial plane.
The coordinate system that uses the orbital plane as its fundamental plane is known as the perifocal coordinate system.
Artificial satellites around the Earth
For launch vehicles and artificial satellites, the orbital plane is a defining parameter of an orbit; as in general, it will take a very large amount of propellant to change the orbital plane of an object. Other parameters, such as the orbital period, the eccentricity of the orbit and the phase of the orbit are more easily changed by propulsion systems.
Orbital planes of satellites are perturbed by the non-spherical nature of the Earth's gravity. This causes the orbital plane of the satellite's orbit to slowly rotate around the Earth, depending on the angle the plane makes with the Earth's equator. For planes that are at a critical angle this can mean that the plane will track the Sun around the Earth, forming a Sun-synchronous orbit.
A launch vehicle's launch window is usually determined by the times when the target orbital plane intersects the launch site.
See also
Earth-centered inertial coordinate system
ECEF, Earth-Centered Earth-fixed coordinate system
Invariable plane, a weighted average of all orbital planes in a system
Orbital elements
Orbital state vectors
Perifocal coordinate system
References
Plane
Planes (geometry) | Orbital plane | [
"Mathematics"
] | 448 | [
"Planes (geometry)",
"Mathematical objects",
"Infinity"
] |
3,074,358 | https://en.wikipedia.org/wiki/Vinyl%20ester%20resin | Vinyl ester resin, or often just vinyl ester, is a resin produced by the esterification of an epoxy resin with acrylic or methacrylic acids. The "vinyl" groups refer to these ester substituents, which are prone to polymerize and thus an inhibitor is usually added. The diester product is then dissolved in a reactive solvent, such as styrene, to approximately 35–45 percent content by weight. Polymerization is initiated by free radicals, which are generated by UV-irradiation or peroxides.
This thermoset material can be used as an alternative to polyester and epoxy materials as the thermoset polymer matrix in composite materials, where its characteristics, strengths, and bulk cost are intermediate between polyester and epoxy. Vinyl ester has a lower resin viscosity (approx. 200 cps) than polyester (approx. 500 cps) and epoxy (approx. 900 cps).
Uses
In homebuilt airplanes, the Glasair and Glastar kit planes made extensive use of vinylester fiberglass-reinforced structures. It is a common resin in the marine industry due to its corrosion resistance and ability to withstand water absorption. Vinyl ester resin is extensively used to manufacture FRP tanks and vessels as per BS4994.
For laminating process, vinyl ester is usually initiated with methyl ethyl ketone peroxide. It has greater strength and mechanical properties than polyester and less than epoxy resin.
Renewable precursors to vinyl ester resins have been developed.
Vinyl resins are often used in repair materials and laminating because they are waterproof and reliable.
Bisphenol A is a precursor in production of major classes of resins, including the vinyl ester resins along with epoxy resins and polycarbonate. This application usually begins with alkylation of BPA with epichlorohydrin.
References
ResinNavigator.org's Epoxy-based vinyl esters benefits
Synthetic resins
Thermosetting plastics | Vinyl ester resin | [
"Chemistry"
] | 434 | [
"Synthetic materials",
"Synthetic resins"
] |
3,074,923 | https://en.wikipedia.org/wiki/Leak | A leak is a way (usually an opening) for fluid to escape a container or fluid-containing system, such as a tank or a ship's hull, through which the contents of the container can escape or outside matter can enter the container. Leaks are usually unintended and therefore undesired. The word leak usually refers to a gradual loss; a sudden loss is usually called a spill.
The matter leaking in or out can be gas, liquid, a highly viscous paste, or even a solid such as a powdered or granular solid or other solid particles.
Sometimes the word "leak" is used in a figurative sense. For example, in a news leak secret information becomes public.
According to ASTM D7053-17, water leakage is defined as the passage of (liquid) water through a material or system designed to prevent passage of water.
Types and possible causes
Types of leak openings include a puncture, gash, rust or other corrosion hole, very tiny pinhole leak (possibly in imperfect welds), crack or microcrack, or inadequate sealing between components or parts joined together. When there is a puncture, the size and shape of the leak can often be seen, but in many other cases, the size and shape of the leak opening may not be so obvious. In many cases, the location of a leak can be determined by seeing material drip out at a certain place, although the leak opening itself is not obvious. In some cases, it may be known or suspected there is a leak, but even the location of the leak is not known. Since leak openings are often irregular shapes or extended cracks, leaks are sometimes sized by the leakage rate, as in volume of fluid leaked per time, rather than the size of the opening.
Common types of leaks for many people include leaks in vehicle tires, which allows air to leak out and results in flat tires, and leaks in containers, which spills the contents. Leaks can occur or develop in many different kinds of household, building, vehicle, marine, aircraft, or industrial fluid systems, whether the fluid is a gas or liquid. Leaks in vehicle hydraulic systems such as brake or power steering lines could cause loss of brake or power steering fluid, resulting in failure of the brakes, power steering, or other hydraulic system. Also possible are leaks of engine coolant - particularly in the radiator and at the water pump seal, transmission fluid, motor oil, and refrigerant in the air conditioning system. Some of these vehicle fluids have different colors to help identify the type of leaking fluid.
Batteries are at risk of leakage, because their operation inherently involves chemical corrosion. A zinc-carbon battery is an example of a commonly-seen leaking component; the electrolytes inside the cell sometimes leak out of the cell casing and cause damage to an electronic appliance.
Water leaks occur when there is damage to the water supply system or wastewater system on a property that causes a drip or flow to release. Gas leaks, e.g. in natural gas lines allow flammable and potentially explosive gas to leak out, resulting in a hazardous situation. Leaks of refrigerant may occur in refrigerators or air conditioning systems, large and small.
Some industrial plants, especially chemical and power plants, have numerous fluid systems containing many types of liquid or gas chemicals, sometimes at high temperature and/or pressure. An example of a possible industrial location of a leak between two fluid systems includes a leak between the shell and tube sides in a heat exchanger, potentially contaminating either or both fluid systems with the other fluid. A system holding a full or partial vacuum may have a leak causing inleakage of air from the outside. Hazmat procedures and/or teams may become involved when leakage or spillage of hazardous materials occurs. Leaks while transporting hazardous materials could result in danger; for example, when accidents occur. However, even leakage of steam can be dangerous because of the high temperature and energy of the steam.
Leakage of air or other gas out of hot air balloons, dirigibles, or cabins of airplanes could present dangerous situations.
A leak could even be inside a living body, such as a hole in the septum between heart ventricles causing an exchange of oxygenated and deoxygenated blood, or a fistula between bodily cavities such as between vagina and rectum.
There can be numerous causes of leaks. Leaks can occur from the outset even during construction or initial manufacture/assembly of fluid systems. Pipes, tubing, valves, fittings, or other components may be improperly joined or welded together. Components with threads may be improperly screwed together. Leaks can be caused by damage; for example, punctures or fracture. Often leaks are the result of deterioration of materials from wear or aging, such as rusting or other corrosion or decomposition of elastomers or similar polymer materials used as gaskets or other seals. For example, wearing out of faucet washers causes water to leak at the faucets. Cracks may result from either outright damage, or wearing out by stress such as fatigue failure or corrosion such as stress corrosion cracking. Wearing out of a surface between a disk and its seat in a valve could cause a leak between ports (valve inlets or outlets). Wearing out of packing around a turning valve stem or rotating centrifugal pump shaft could develop into fluid outleakage into the environment. For some frequently operating centrifugal pumps, such leakage is so expected that provisions are made for carrying away the leakage. Similarly, wearing out of seals or packing around piston-driven pumps could also develop into outleakage to the environment.
The pressure difference between both sides of the leak can affect the movement of material through the leak. Fluids will commonly move from the higher pressure side to the lower pressure side. The larger the pressure difference, the more leakage there will typically be. The fluid pressures on both sides include the hydrostatic pressure, which is pressure due to the weight from the height of fluid level above the leak. When the pressures are about equal, there can be an exchange of fluids between both sides, or little to no net movement of fluid across the leak.
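As a hedged first approximation (a sketch, not an engineering method from this article), liquid leakage through a small opening is often estimated with the orifice equation Q = Cd · A · √(2ΔP/ρ), which makes the pressure dependence explicit; the discharge coefficient and the numbers below are assumed:

import math

def leak_rate(delta_p_pa, area_m2, rho=1000.0, cd=0.6):
    # Orifice model: volumetric leak rate in m^3/s for an
    # incompressible fluid. cd ~ 0.6 is a typical assumed value
    # for a sharp-edged opening; rho defaults to water.
    return cd * area_m2 * math.sqrt(2.0 * delta_p_pa / rho)

# A 1 mm diameter pinhole under 300 kPa of water pressure:
area = math.pi * (0.0005 ** 2)
print(leak_rate(300e3, area))  # about 1.2e-5 m^3/s, roughly 0.7 L/min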
Testing
Containers, vessels, enclosures, or other fluid system are sometimes tested for leaks - to see if there is any leakage and to find where the leaks are so corrective action can be taken. There are several methods for leak testing, depending on the situation. Sometimes leakage of fluid may make a sound which can be detected. Tires, engine radiators, and maybe some other smaller vessels may be tested by pressurizing them with air and submerging them in water to see where air bubbles come out to indicate a leak.
If submerging in water is not possible, then pressurization with air followed by covering the area to be tested with a soap solution is done, to see if soap bubbles form and indicate a leak. Other types of testing for gas leaks may involve sensors that can detect the escaping gas, for example special sensing instruments for detecting natural gas. US federal safety law now requires natural gas companies to conduct testing for gas leaks upstream of their customers' gas meters. Where liquids are used, special color dyes may be added to help see the leakage. Other detectable substances in one of the liquids may be tested for, such as saline to find a leak in a sea water system, or detectable substances may even be deliberately added to test for leakage.
Newly constructed, fabricated, or repaired systems or other vessels are sometimes tested to verify satisfactory production or repair. Plumbers often test for leaks after working on a water or other fluid system. A vessel or system is sometimes pressure tested by filling with air and the pressure monitored to see if it drops, indicating a leak.
A very commonly used test after new construction or repair is a hydrostatic test, sometimes called a pressure test. In a hydrostatic test, a system is pressurized with water to look for a drop in pressure or to see where water leaks out. Helium testing may be done to detect very small leakage, such as when testing certain diaphragm or bellows valves made for high purity and ultra high purity service, which require low leak rate capability. Helium and hydrogen have very small molecules which can pass through very small leaks.
Leak testing is part of the non-destructive testing (NDT) portfolio that can be applied to a part to verify its conformity; depending on material, pressure, and leak tightness specifications, different methods can be applied. International standards have been defined to assist in these choices. For example, BS EN 1779:1999 applies to assessment of leak tightness by indication or measurement of gas leakage, but excludes hydrostatic, ultrasonic or electromagnetic methods.
Other standards also apply:
BS EN 13184:2001 Non-destructive testing. Leak testing. Pressure change process
BS EN 13185:2001 Non-destructive testing. Leak testing. Tracer gas method
BS EN 13192:2002 Non-destructive testing. Leak testing. Calibration of reference leaks for gases
In shell and tube heat exchangers, Eddy current testing is sometimes done in the tubes to find locations on tubes where there may be leaks or damage which may eventually develop into a leak.
Corrective action
In complex plants with multiple fluid systems, many interconnecting units holding fluids have isolation valves between them. If there is a leak in a unit, its isolation valves can be shut to "isolate" the unit from the rest of the plant.
Leaks are often repaired by plugging the leaking holes or using a patch to cover them. Leaking tires are often fixed this way. Leaking gaskets, seals, washers, or packing can be replaced. Use of welding, soldering, sealing, or gluing may be other ways to fix leaks. Sometimes, the most practical solution is to replace the leaking unit. Leaking water heaters are often replaced by home or building owners.
If there is a leak in one of the tubes of a shell and tube heat exchanger, that tube can be plugged at both ends with specially sized plugs to isolate the leak. This is done in the plenum(s) at the points where the tube ends connect to the tubesheet(s). Sometimes a damaged but not yet leaking tube is pre-emptively plugged to prevent future leakage. The heat transfer capacity of that tube is lost, but there are usually plenty of other tubes to pick up the heat transfer load.
See also
Explosion
Fugitive emissions
Non-revenue water
Seal (mechanical)
References
Plumbing | Leak | [
"Engineering"
] | 2,186 | [
"Construction",
"Plumbing"
] |
3,075,265 | https://en.wikipedia.org/wiki/Polyporus | Polyporus is a genus of poroid fungi in the family Polyporaceae.
Taxonomy
Italian botanist Pier Antonio Micheli introduced the genus in 1729 to include 14 species featuring fruit bodies with centrally-placed stipes, and pores on the underside of the cap. The generic name combines the Ancient Greek words polys ("many") and poros ("pore").
Elias Fries divided Polyporus into three subgenera in his 1855 work Novae Symbol Mycologici: Eupolyporus, Fomes, and Poria. In a 1995 monograph, Maria Núñez and Leif Ryvarden grouped 32 Polyporus species into 6 morphologically-based infrageneric groups: Admirabilis, Dendropolyporus, Favolus, Polyporellus, Melanopus, and Polyporus sensu stricto.
The identity of the type species of Polyporus has long been a matter of contention among mycologists. Some have preferred P. brumalis, some P. squamosus, while others have preferred P. tuberaster.
Selected species
There are almost 250 species recognised including:
Polyporus australiensis
Polyporus gayanus
Polyporus leprieurii
Polyporus minutosquamosus – French Guiana
Polyporus phyllostachydis
Polyporus radicatus
Polyporus tuberaster, tuberous polypore (type species)
Polyporus umbellatus
References
External links
Lentinoid and Polyporoid Fungi, Two Generic Conglomerates Containing Important Medicinal Mushrooms in Molecular Perspective at Molecular Phylogeny
Polyporales genera
Bioluminescent fungi
Fungi described in 1763
Taxa named by Michel Adanson
Fungus species
"Biology"
] | 363 | [
"Fungi",
"Fungus species"
] |
3,075,539 | https://en.wikipedia.org/wiki/Anharmonicity | In classical mechanics, anharmonicity is the deviation of a system from being a harmonic oscillator. An oscillator that is not oscillating in harmonic motion is known as an anharmonic oscillator where the system can be approximated to a harmonic oscillator and the anharmonicity can be calculated using perturbation theory. If the anharmonicity is large, then other numerical techniques have to be used. In reality all oscillating systems are anharmonic, but most approximate the harmonic oscillator the smaller the amplitude of the oscillation is.
As a result, oscillations with frequencies 2ω, 3ω, and so on, where ω is the fundamental frequency of the oscillator, appear. Furthermore, the frequency deviates from the frequency of the harmonic oscillations. See also intermodulation and combination tones. As a first approximation, the frequency shift Δω is proportional to the square of the oscillation amplitude A:

Δω ∝ A²
In a system of oscillators with natural frequencies ω₁, ω₂, ..., anharmonicity results in additional oscillations with combination frequencies such as ω₁ ± ω₂.
Anharmonicity also modifies the energy profile of the resonance curve, leading to interesting phenomena such as the foldover effect and superharmonic resonance.
General principle
An oscillator is a physical system characterized by periodic motion, such as a pendulum, tuning fork, or vibrating diatomic molecule. Mathematically speaking, the essential feature of an oscillator is that for some coordinate x of the system, a force whose magnitude depends on x will push x away from extreme values and back toward some central value x₀, causing x to oscillate between extremes. For example, x may represent the displacement of a pendulum from its resting position x = 0. As the absolute value of x increases, so does the restoring force acting on the pendulum's weight that pushes it back towards its resting position.
In harmonic oscillators, the restoring force is proportional in magnitude (and opposite in direction) to the displacement of x from its natural position x₀. The resulting differential equation implies that x must oscillate sinusoidally over time, with a period of oscillation that is inherent to the system. x may oscillate with any amplitude, but will always have the same period.
Anharmonic oscillators, however, are characterized by the nonlinear dependence of the restorative force on the displacement x. Consequently, the anharmonic oscillator's period of oscillation may depend on its amplitude of oscillation.
As a result of the nonlinearity of anharmonic oscillators, the vibration frequency can change, depending upon the system's displacement. These changes in the vibration frequency result in energy being coupled from the fundamental vibration frequency to other frequencies through a process known as parametric coupling.
Treating the nonlinear restorative force as a function F(x − x₀) of the displacement of x from its natural position, we may replace F by its linear approximation F₁ at zero displacement. The approximating function F₁ is linear, so it will describe simple harmonic motion. Further, F₁ is accurate when x − x₀ is small. For this reason, anharmonic motion can be approximated as harmonic motion as long as the oscillations are small.
Examples in physics
There are many systems throughout the physical world that can be modeled as anharmonic oscillators in addition to the nonlinear mass-spring system. For example, an atom, which consists of a positively charged nucleus surrounded by a negatively charged electronic cloud, experiences a displacement between the center of mass of the nucleus and the electronic cloud when an electric field is present. The amount of that displacement, called the electric dipole moment, is related linearly to the applied field for small fields, but as the magnitude of the field is increased, the field-dipole moment relationship becomes nonlinear, just as in the mechanical system.
Further examples of anharmonic oscillators include the large-angle pendulum; nonequilibrium semiconductors that possess a large hot carrier population, which exhibit nonlinear behaviors of various types related to the effective mass of the carriers; and ionospheric plasmas, which also exhibit nonlinear behavior based on the anharmonicity of the plasma, transversal oscillating strings. In fact, virtually all oscillators become anharmonic when their pump amplitude increases beyond some threshold, and as a result it is necessary to use nonlinear equations of motion to describe their behavior.
Anharmonicity plays a role in lattice and molecular vibrations, in quantum oscillations, and in acoustics. The atoms in a molecule or a solid vibrate about their equilibrium positions. When these vibrations have small amplitudes they can be described by harmonic oscillators. However, when the vibrational amplitudes are large, for example at high temperatures, anharmonicity becomes important. An example of the effects of anharmonicity is the thermal expansion of solids, which is usually studied within the quasi-harmonic approximation. Studying vibrating anharmonic systems using quantum mechanics is a computationally demanding task because anharmonicity not only makes the potential experienced by each oscillator more complicated, but also introduces coupling between the oscillators. It is possible to use first-principles methods such as density-functional theory to map the anharmonic potential experienced by the atoms in both molecules and solids. Accurate anharmonic vibrational energies can then be obtained by solving the anharmonic vibrational equations for the atoms within a mean-field theory. Finally, it is possible to use Møller–Plesset perturbation theory to go beyond the mean-field formalism.
Period of oscillations
Consider a mass m moving in a potential well U(x). The oscillation period may be derived as

T = √(2m) ∫_{x₁}^{x₂} dx / √(E − U(x)),

where the extremes of the motion are given by the turning points x₁ and x₂ satisfying U(x₁) = U(x₂) = E.
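For the classic example of the large-angle pendulum, this integral reduces to a complete elliptic integral, giving a period that grows with amplitude; the sketch below (a minimal illustration using SciPy, with ω₀ the small-oscillation angular frequency) shows the anharmonic lengthening of the period:

import numpy as np
from scipy.special import ellipk

def pendulum_period(theta0, omega0=1.0):
    # Exact large-angle period: T = (4/omega0) * K(m), m = sin^2(theta0/2),
    # where K is the complete elliptic integral of the first kind.
    m = np.sin(theta0 / 2.0) ** 2
    return 4.0 / omega0 * ellipk(m)

T_harmonic = 2.0 * np.pi  # small-amplitude limit for omega0 = 1
for deg in (1, 30, 90, 150):
    T = pendulum_period(np.radians(deg))
    print(deg, T / T_harmonic)  # the ratio grows with amplitude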
See also
Inharmonicity
Harmonic oscillator
Musical acoustics
Nonlinear resonance
Transmon
References
External links
Classical mechanics | Anharmonicity | [
"Physics"
] | 1,239 | [
"Mechanics",
"Classical mechanics"
] |
3,075,977 | https://en.wikipedia.org/wiki/NASA%20Clean%20Air%20Study |
The NASA Clean Air Study was a project led by the National Aeronautics and Space Administration (NASA) in association with the Associated Landscape Contractors of America (ALCA) in 1989, to research ways to clean the air in sealed environments such as space stations. Its results suggested that, in addition to absorbing carbon dioxide and releasing oxygen through photosynthesis, certain common indoor plants may also provide a natural way of removing volatile organic pollutants (benzene, formaldehyde, and trichloroethylene were tested).
These results are not applicable to typical buildings, where outdoor-to-indoor air exchange already removes VOCs at a rate that could only be matched by the placement of 10–1000 plants per square metre of a building's floor space.
The results also failed to replicate in subsequent studies and were questioned in a 2014 review.
List of plants studied
The following plants were tested during the initial 1989 study:
Variegated snake plant / mother-in-law's tongue (Sansevieria trifasciata laurentii)
English ivy (Hedera helix)
Peace lily (Spathiphyllum 'Mauna Loa')
Chinese evergreen (Aglaonema modestum)
Bamboo palm (Chamaedorea seifrizii)
Red-edged dracaena, marginata (Dracaena marginata)
Cornstalk dracaena, mass cane/corn cane (Dracaena fragrans 'Massangeana')
Weeping fig (Ficus benjamina)
Barberton daisy, gerbera daisy (Gerbera jamesonii)
Florist's chrysanthemum, pot mum (Chrysanthemum morifolium)
Janet Craig (Dracaena deremensis "Janet Craig")
Warneckei (Dracaena deremensis "Warneckei")
Additional research
Since the release of the initial 1989 study, titled A study of interior landscape plants for indoor air pollution abatement: An Interim Report, further research has been done, including a 1993 paper and a 1996 book by B. C. Wolverton, the primary researcher on the original NASA study, that listed additional plants and focused on the removal of specific chemicals. A different study, published in 2004, showed that the micro-organisms in the soil of a potted plant remove benzene from the air, and that some plant species themselves also contribute to removing benzene.
Other studies
Plants studied in various similar studies on air filtration:
See also
Dracaena reflexa
Green wall
Indoor air quality
Phytoremediation
Rain garden
Sick building syndrome
References
External links
'Interior Landscape Plants for Indoor Air Pollution'
How to Grow Your Own Fresh Air – TED 2009. An extension of the TED Talk.
Soil science-related lists
Lists of plants
Building biology
1989 works
NASA
Indoor air pollution | NASA Clean Air Study | [
"Engineering",
"Biology"
] | 568 | [
"Lists of plants",
"Plants",
"Building engineering",
"Lists of biota",
"Building biology"
] |
3,076,341 | https://en.wikipedia.org/wiki/Canine%20influenza | Canine influenza (dog flu) is influenza occurring in canine animals. Canine influenza is caused by varieties of influenzavirus A, such as equine influenza virus H3N8, which was discovered to cause disease in canines in 2004. Because of the lack of previous exposure to this virus, dogs have no natural immunity to it. Therefore, the disease is rapidly transmitted between individual dogs. Canine influenza may be endemic in some regional dog populations of the United States. It is a disease with a high morbidity (incidence of symptoms) but a low incidence of death.
A newer form was identified in Asia during the 2000s and has since caused outbreaks in the US as well. It is a mutation of H3N2 that adapted from its avian influenza origins. Vaccines have been developed for both strains.
The two strains of Type A influenza virus found in canines are A(H3N2) and A(H3N8). Over time, there has been a discovery of sources of transmissions, identification of specific symptoms and the creation of vaccines.
History
The highly contagious equine influenza A virus subtype H3N8 was found to have been the cause of Greyhound race dog fatalities from a respiratory illness at a Florida racetrack in January 2004. The exposure and transfer apparently occurred at horse-racing tracks, where dog racing had also occurred. This was the first evidence of an influenza A virus causing disease in dogs. However, serum collected from racing Greyhounds between 1984 and 2004 and tested for canine influenza virus (CIV) in 2007 had positive tests going as far back as 1999. CIV possibly caused some of the respiratory disease outbreaks at tracks between 1999 and 2003.
H3N8 was also responsible for a major dog-flu outbreak in New York state in all breeds of dogs. From January to May 2005, outbreaks occurred at 20 racetracks in 10 states (Arizona, Arkansas, Colorado, Florida, Iowa, Kansas, Massachusetts, Rhode Island, Texas, and West Virginia). As of August 2006, dog flu has been confirmed in 22 U.S. states, including pet dogs in Wyoming, California, Connecticut, Delaware, and Hawaii. Three areas in the United States may now be considered endemic for CIV due to continuous waves of cases: New York, southern Florida, and northern Colorado/southern Wyoming. No evidence shows the virus can be transferred to people, cats, or other species.
H5N1 (avian influenza) was also shown to cause death in one dog in Thailand, following ingestion of an infected duck.
The H3N2 virus made its first appearance in Canada at the start of 2018, following the importation of two unknowingly infected dogs from South Korea. Canine H3N2 was first reported in South Korea in 2006–2007 and is thought to have been transferred to dogs from avian origins (avian influenza H3N2). It was not until 2015 that the canine H3N2 strain was discovered in the United States, after an outbreak of respiratory infections among dogs in Chicago. As canine H3N2 influenza spread through the United States, cats in an Indiana shelter began to show symptoms of the disease in 2016; they are believed to have been infected through contact with sick dogs.
Following this incident, reports that the virus might be spreading, with two other dogs showing alarming symptoms, were made public. By March 5, 25 cases of infection had been reported, although the true number is thought to be closer to 100.
Influenza A viruses are enveloped, negative sense, single-stranded RNA viruses. Genome analysis has shown that H3N8 was transferred from horses to dogs and then adapted to dogs through point mutations in the genes. The incubation period is two to five days, and viral shedding may occur for seven to ten days following the onset of symptoms. It does not induce a persistent carrier state.
In late 2022, together with Bordetella bronchiseptica and other respiratory pathogens, the H3N2 canine flu virus experienced a surge in canine infections. This was partially due to increased human travel and reopened offices following the relaxation of COVID-19 pandemic public health measures, leading to large numbers of dogs being placed together in kennels and doggy day care centers. Changing pet ownership behaviors also led to overcrowded animal shelters, which had been emptied at the height of the pandemic.
Transmissions
Canine influenza can be transmitted from animal to animal, and almost all dogs that come into contact with the virus contract it. This makes canine influenza most common among dogs, but it can also be transmitted to cats in a shelter or household. Canine influenza is an airborne disease: when a dog coughs or sneezes, it secretes respiratory droplets that are then inhaled by other animals, causing infection. Kennels, dog parks, grooming parlors, and similar places are high-risk areas for infection.
Symptoms
About 80% of dogs infected with H3N8 show symptoms, usually mild (the other 20% have subclinical infections), and the fatality rate for Greyhounds in early outbreaks was 5 to 8%, although the overall fatality rate in the general pet and shelter population is probably less than 1%. Most animals infected with canine influenza show symptoms such as coughing, runny nose, fever, lethargy, eye discharge, and a reduced appetite, lasting anywhere from two to three weeks.
Symptoms of the mild form include a cough that lasts for 10 to 30 days and possibly a greenish nasal discharge. Dogs with the more severe form may have a high fever and pneumonia. Pneumonia in these dogs is not caused by the influenza virus, but by secondary bacterial infections. The fatality rate of dogs that develop pneumonia secondary to canine influenza can reach 50% if not given proper treatment. Necropsies in dogs that die from the disease have revealed severe hemorrhagic pneumonia and evidence of vasculitis.
Diagnosis
The presence of an upper respiratory tract infection in a dog that has been vaccinated for the other major causes of kennel cough increases suspicion of infection with canine influenza, especially in areas where the disease has been documented. A serum sample from a dog suspected of having canine influenza can be submitted to a laboratory that performs PCR tests for this virus.
Vaccine
In June 2009, the United States Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) approved the first canine influenza vaccine. This veterinarian-administered vaccine helps fight the infection and serves as a preventative measure for dogs who regularly face exposure to the H3N8 and H3N2 strains. The vaccine must be given twice initially, two weeks apart, then annually thereafter.
H3N2 version
A second form of canine influenza was first identified during 2006 in South Korea and southern China. The virus is an H3N2 variant that adapted from its avian influenza origins. An outbreak in the US was first reported in the Chicago area during 2015. Outbreaks were reported in several US states during the spring and summer of 2015 and had been reported in 25 states by late 2015.
As of April 2015, the question of whether vaccination against the earlier strain offered protection had not been resolved. The US Department of Agriculture granted conditional approval for a canine H3N2-protective vaccine in December 2015.
In March 2016, researchers reported that this strain had infected cats and suggested that it may be transmitted between them.
Human risk
The H3N2 virus as a stand-alone virus is deemed harmless to humans. According to the Windsor-Essex County Health Unit, the risk arises only when the H3N2 strain meets a human strain of flu, since "those strains could combine to create a new virus." Such a combination is considered unlikely, but a slight chance exists if an infected dog also contracts a human flu.
See also
Avian influenza
Equine influenza
Human flu
Swine influenza
Cat flu
References
Further reading
A New Deadly, Contagious Dog Flu Virus Is Detected in 7 States The New York Times
States that have identified dogs with CIV
Canine Influenza from The Pet Health Library
Canine Influenza from Veterinary Partner
Canine Influenza Fact Sheet Center for Food Security and Public Health (CFSPH), Iowa State University College of Veterinary Medicine
AVMA: canine influenza
Influenza
Influenza
Animal viral diseases
Vaccine-preventable diseases | Canine influenza | [
"Biology"
] | 1,711 | [
"Vaccination",
"Vaccine-preventable diseases"
] |
3,076,573 | https://en.wikipedia.org/wiki/Succinonitrile | Succinonitrile, also butanedinitrile, is a nitrile with the formula C2H4(CN)2. It is a colorless waxy solid which melts at 58 °C.
Succinonitrile is produced by the addition of hydrogen cyanide to acrylonitrile (hydrocyanation):
CH2=CHCN + HCN → NCCH2CH2CN
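As an illustrative aside (the script and its rounded atomic masses are ours, not from a cited source), the molecular formulas confirm that the reaction is a simple addition with full atom economy:

```python
# Mass balance for the hydrocyanation shown above:
#   CH2=CHCN (acrylonitrile) + HCN -> NC-CH2-CH2-CN (succinonitrile)
masses = {"H": 1.008, "C": 12.011, "N": 14.007}   # rounded standard atomic masses

def molar_mass(formula: dict) -> float:
    """Molar mass in g/mol from an element-count dictionary."""
    return sum(masses[el] * n for el, n in formula.items())

acrylonitrile  = {"C": 3, "H": 3, "N": 1}   # CH2=CHCN
hcn            = {"C": 1, "H": 1, "N": 1}
succinonitrile = {"C": 4, "H": 4, "N": 2}   # C2H4(CN)2

m_acn, m_hcn, m_sn = map(molar_mass, (acrylonitrile, hcn, succinonitrile))
print(f"acrylonitrile {m_acn:.3f} + HCN {m_hcn:.3f} = {m_acn + m_hcn:.3f} g/mol")
print(f"succinonitrile {m_sn:.3f} g/mol")
# The sums match: the addition reaction has 100% atom economy, with every
# atom of both reactants ending up in the product.
```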
Hydrogenation of succinonitrile yields putrescine (1,4-diaminobutane).
See also
Malononitrile - A di-nitrile with 3 carbon atoms
Glutaronitrile - A di-nitrile with 5 carbon atoms
Adiponitrile - A di-nitrile with 6 carbon atoms
References
External links
WebBook page for C4H4N2
CDC - NIOSH Pocket Guide to Chemical Hazards
Alkanedinitriles | Succinonitrile | [
"Chemistry"
] | 199 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
3,076,586 | https://en.wikipedia.org/wiki/Glutaronitrile | Glutaronitrile, also pentanedinitrile, is a nitrile, with formula C3H6(CN)2.
References
External links
WebBook page for glutaronitrile
Alkanedinitriles | Glutaronitrile | [
"Chemistry"
] | 53 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
4,198,294 | https://en.wikipedia.org/wiki/Otto%20Schott | Friedrich Otto Schott (1851–1935) was a German chemist, glass technologist, and the inventor of borosilicate glass. Schott systematically investigated the relationship between the chemical composition of the glass and its properties. In this way, he solved fundamental problems in glass properties, identifying compositions with optical properties that approach the theoretical limit. Schott's findings were a major advance in the optics for microscopy and optical astronomy. His work has been described as "a watershed in the history of glass composition".
Early life and education
Schott was the son of a window glass maker, Simon Schott. His mother was Karoline Schott. From 1870 to 1873 Schott studied chemical technology at the technical college in Aachen and at the University of Würzburg and at the University of Leipzig. He earned a doctorate in chemistry at Friedrich Schiller University of Jena, specializing in glass science. His doctoral thesis was entitled “Contributions to the Theory and Practice of Glass Fabrication” (1875).
Scientific contributions
In 1879, Schott developed a new lithium-based glass that possessed novel optical properties. Schott shared this discovery with Ernst Abbe, a professor of physics at Jena University whose comments on glass had stimulated Schott's interest in the subject.
Not long after Schott completed his formal university training, he became aware that Abbe had articulated the deficiencies in the glass available at the time. The deficiencies were particularly acute in scientific instruments, for which the optical performance of the glass in lenses, such as those of telescopes and microscopes, is paramount. As the magnifying power of the lenses increased, chromatic aberration became large. Chromatic aberration causes the optical quality of the visual image to depend on the color of the light, significantly limiting the scientific instrument.
In response to Abbe's scientific provocation, Schott began a systematic investigation of how the properties of glass vary with chemical composition. He substituted one component for another, such as borate and phosphate for a portion of the silica in the glass, and fluoride for oxygen.
Schott's 1879 letter to Abbe was the beginning of a long collaboration between the two scientists.
Abbe was already working with Carl Zeiss, an instrument-maker, on the making of glass for microscopes. Zeiss participated in the three-way collaboration by testing improved glass compositions that Schott and Abbe identified in actual optical instruments, such as telescopes.
In 1882, Schott moved to Jena, where he could work more closely with Abbe and Zeiss.
They created types of glass and examined their properties using silica, soda, potash, lime, lead oxide and 28 other elements. Lacking a theoretical basis for the work, they relied on careful and systematic observation and measurement. The addition of elements that had no direct effect on optical properties might help to correct other properties of a glass such as the occurrence of surface staining when exposed to air.
By 1886, Schott had completed thorough investigations of structure-property relationships in glass compositions. Through these investigations, Schott discovered that the refractive index of a glass (important to its ability to function as a magnifying lens) could be disconnected from its chromatic aberration. In this way, Schott settled on a lithium-containing glass that could perform close to its theoretical limit in scientific instruments, which was a significant advance in optical instrumentation such as for microscopy and astronomy.
By mastering the process of small-scale melt-stirring, Schott was able to create a homogeneous product, whose refractive index and dispersion could be exactly measured and characterized. Through systematic experiment, he applied this to the creation of an array of different glass types. Based on his experiments, Schott worked with A. Winkelmann to develop the first composition-property model for the calculation of glass properties.
Glass compositions
Schott systematized the chemical composition of a significant range of glass compositions. Representative examples are summarized in the table.
Business interests
In 1884, in association with Dr. Ernst Abbe and Carl Zeiss, Otto founded Glastechnische Laboratorium Schott & Genossen (Schott & Associates Glass Technology Laboratory) in Jena. It was here, during the period 1887 through to 1893, that Schott developed borosilicate glass. Borosilicate glass is distinguished for its high tolerance to heat and a substantial resistance to thermal shock resulting from sudden temperature changes and resistance to degradation when exposed to corrosive chemicals. This type of glass initially became known under the brand name Duran. Their business enterprise also commercialized apochromatic lenses that had low chromatic aberration and was based on Schott's systematic investigations of the composition and properties of glass.
Schott used borosilicate glass to make laboratory and medical supplies, including thermometers, glassware for laboratory use, medicine vials and pharmaceutical tubing. Schott produced domestic glassware under the brand name "Jenaer Glas". He also produced heat-resistant lamp cylinders for use in gas lighting. Carl Auer's incandescent gas lamps were first sold in 1894 and became a lucrative source of income for Schott's glassworks. In the late 1890s he was also involved in the electrification of industry in Jena. Schott's business enterprise held a near monopoly on global optical glass from its inception until the start of World War I.
In 1919, Schott & Associates became wholly owned by the Carl Zeiss Foundation, although Schott & Associates is known in the early 21st century as Schott AG. The Schott Company's brand became associated with high quality and specialty optics.
As of 2020, vials made of glass from Schott AG were being used in vaccination efforts against COVID-19 disease.
Personal life
In 1917, Otto Schott's eldest son, Rolf Schott, was killed in World War I. Shortly thereafter, Otto's son Erich Schott joined Schott & Gen. In 1926, Otto Schott retired from active work at Schott & Gen. Shortly thereafter, Erich Schott took over Otto Schott's responsibilities in managing the company.
Awards and legacy
In 1909, Schott received the Liebig Medal from the Association of German Chemists.
Otto-Schott-Straße in Jena, Germany, the location of Schott's home, was renamed in Schott's honor. The Schott Glass Museum is on the same premises. Both can be visited. The Schott Glass Museum displays developments in glass science beginning with the innovations of Otto Schott.
Since 1991, the Otto Schott Research Award has been presented every two years to meritorious researchers in the field of glass science and ceramics science. The award is organized and funded by the Abbe Fund of the Carl Zeiss Foundation.
References
External links
SCHOTT Corporate Archives, Jena, Germany
1851 births
1935 deaths
People from Witten
Glass makers
History of glass
Glass chemistry
19th-century German chemists
19th-century German inventors
People from the Province of Westphalia
Glass engineering and science | Otto Schott | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,495 | [
"Glass engineering and science",
"Glass chemistry",
"Materials science"
] |
4,198,386 | https://en.wikipedia.org/wiki/Beate%20Uhse%20Erotic%20Museum | The Beate Uhse Erotic Museum () (1996 – 2014) was a sex museum in the Charlottenburg district of Berlin, Germany.
The museum was opened in 1996 near Berlin Zoologischer Garten station by Beate Uhse, the early stunt pilot and entrepreneur, who in 1962 started the world's first sex shop. The collection features historic Asian and European erotic art including several lithographs by Heinrich Zille as well as early pornographic films. It claims to be "the world's largest erotic museum".
The museum was closed in September 2014. Initially the museum looked for new premises, but due to market developments in Berlin it never reopened. For the exhibits, a loss in value of €1.2 million was recorded in the 2015 annual report.
See also
List of sex museums
References
"In Berlin, the Art of Sex", by Marianna Beck and Jack Hafferkamp. The Washington Post, April 18, 1999.
Museums in Berlin
Defunct museums in Germany
Museums established in 1996
Museums disestablished in 2014
Buildings and structures in Charlottenburg-Wilmersdorf
Sex museums in Germany
Uhse, Beate
Women's museums
Women and sexuality
1996 establishments in Germany
2014 disestablishments in Germany
History of women in Germany | Beate Uhse Erotic Museum | [
"Biology"
] | 261 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
4,198,464 | https://en.wikipedia.org/wiki/Manual%20scavenging | Manual scavenging is a term used mainly in India for "manually cleaning, carrying, disposing of, or otherwise handling, human excreta in an insanitary latrine or in an open drain or sewer or in a septic tank or a pit". Manual scavengers usually use hand tools such as buckets, brooms and shovels. The workers have to move the excreta, using brooms and tin plates, into baskets, which they carry to disposal locations sometimes several kilometers away. The practice of employing human labour for cleaning of sewers and septic tanks is also prevalent in Bangladesh and Pakistan. These sanitation workers, called "manual scavengers", rarely have any personal protective equipment. The work is regarded as a dehumanizing practice.
The occupation of sanitation work is intrinsically linked with caste in India. All kinds of cleaning are considered lowly and are assigned to people from the lowest rung of the social hierarchy. In the caste-based society, it is mainly the Dalits who work as sanitation workers - as manual scavengers, cleaners of drains, as garbage collectors and sweepers of roads. It was estimated in 2019 that between 40 and 60 percent of the six million households of Dalit sub-castes are engaged in sanitation work. The most common Dalit caste performing sanitation work is the Valmiki (also Balmiki) caste.
The construction of dry toilets and employment of manual scavengers to clean such dry toilets was prohibited in India in 1993. The law was extended and clarified to include ban on use of human labour for direct cleaning of sewers, ditches, pits and septic tanks in 2013. However, despite the laws, manual scavenging was reported in many states including Maharashtra, Gujarat, Madhya Pradesh, Uttar Pradesh, and Rajasthan in 2014. In 2021, the NHRC observed that eradication of manual scavenging as claimed by state and local governments is far from over. Government data shows that in the period 1993–2021, 971 people died due to cleaning of sewers and septic tanks.
The term "manual scavenging" differs from the stand-alone term "scavenging", which is one of the oldest economic activities and refers to the act of sorting though and picking from discarded waste. Sometimes called waste pickers or ragpickers, scavengers usually collect from the streets, dumpsites, or landfills. They collect reusable and recyclable material to sell, reintegrating it into the economy's production process. The practice exists in cities and towns across the Global South.
Definition
Manual scavenging refers to the unsafe and manual removal of raw (fresh and untreated) human excreta from buckets or other containers that are used as toilets or from the pits of simple pit latrines. The safe and controlled emptying of pit latrines, on the other hand, is one component of fecal sludge management.
The official definition of a manual scavenger in Indian law from 1993 is as follows: "manual scavenger" means a person engaged in or employed for manually carrying human excreta and the expression "manual scavenging" shall be construed accordingly.

In 2013, the definition of manual scavenger was expanded to include persons employed in cleaning of septic tanks, open drains and railway tracks. It reads:
"Manual scavenger" means a person engaged or employed, at the commencement of this Act or at any time thereafter, by an individual or a local authority or an agency or a contractor, for manually cleaning, carrying, disposing of, or otherwise handling in any manner, human excreta in an insanitary latrine or in an open drain or pit into which the human excreta from the insanitary latrines is disposed of, or railway track or in such other spaces or premises, as the Central Government or a State Government may notify, before the excreta fully decomposes in such manner as may be prescribed, and the expression “manual scavenging” shall be construed accordingly.
The definition ignores many other sanitation workers, such as fecal sludge handlers, community and public toilet cleaners, workers cleaning storm water drains, and waste segregators. Such workers are not required to handle excreta directly, but come into contact with it due to poor working conditions, lack of segregation, and the interconnectedness of excreta management with solid waste management and storm water management, states notable sanitation crusader and investigative journalist Pragya Akhilesh. The 2013 Act adds that a person engaged or employed to clean excreta with the help of equipment and using the protective gear notified by the Union government shall not be deemed to be a manual scavenger. Bhasha Singh argues that this provision gives the government an escape route, as all forms of manual scavenging can be kept outside the purview of the law by arguing that the individual is using protective gear.
In 2021, the National Human Rights Commission (NHRC) of India advocated for the term to include other types of hazardous cleaning.
There is a very clear gender division of various types of work that is called manual scavenging in India. The cleaning of dry toilets and carrying of waste to points of disposal is generally done by women, while men are involved in the cleaning of septic tanks and sewers. There is an economic reason for this distribution - the municipality employs workers to clean sewers and septic tanks and hence the salary is better. Cleaning private toilets, on the other hand, pays little and is therefore handed over to the women. The women involved are referred to differently - 'dabbu-wali' in Bengal, 'balti-wali' in Kanpur, 'tina-wali' in Bihar, 'tokri-wali' in Punjab and Haryana, 'thottikar' in Andhra Pradesh and Karnataka, 'paaki' or 'peeti' in Odisha, 'vaatal' in Kashmir. These names directly refer to the tools (dabbu, balti, tokri) used by the women to carry waste or dustbin (thottikar) or excreta (paaki, peeti).
Manual scavenging is done with basic tools like thin boards and either buckets or baskets lined with sacking and carried on the head. Due to the hazardous nature of the job, many of the workers have related health problems. Scavengers risk suffering from respiratory disorders, typhoid, and cholera. Scavengers may also contract skin and blood infections, eye and respiratory infections due to exposure to pollutants, skeletal disorders caused by the lifting of heavy storage containers, and burns from coming into contact with hazardous chemicals mixed with the waste. Data obtained by Safai Karmachari Andolan for 2017–2018 show that sewer workers die well before reaching the age of retirement, and the family often loses its breadwinner early.
Not all forms of dry toilets involve "manual scavenging" to empty them, but only those that require unsafe handling of raw excreta. If on the other hand the excreta is already treated or pre-treated in the dry toilet itself, as is the case for composting toilets, and urine-diverting dry toilets for example, then emptying these types of toilets is not classified as "manual scavenging". Container-based sanitation is another system that does not require manual scavenging to function even though it does involve the emptying of excreta from containers.
Also, emptying the pits of twin-pit toilets is not classified as manual scavenging in India, as if used and emptied appropriately, the excreta is already treated.
The International Labour Organization describes three forms of manual scavenging in India:
Removal of human excrement from public streets and "dry latrines" (meaning simple pit latrines without a water seal, but not dry toilets in general)
Cleaning septic tanks
Cleaning gutters and sewers
Manual cleaning of railway lines of excreta dropped from toilets of trains is another form of manual scavenging in India.
The Hindi phrase safai karamchari defines not only "manual scavengers" but also other sanitation workers.
History
The practice of manual scavenging in India dates back to ancient times. According to the contents of sacred scriptures and other literature, scavenging by some specific castes of India has existed since the beginning of civilization. One of the fifteen duties of slaves enumerated in Naradiya Samhita was of manual scavenging. This continues during the Buddhist and Maurya period also.
Scholars have suggested that the Mughal women with purdah required enclosed toilets that needed to be scavenged. It is pointed out that the Bhangis (Chuhra) share some of the clan names with Rajputs, and propose that the Bhangis are descendants of those captured in wars. There are many legends about the origin of Bhangis, who have traditionally served as manual scavengers. One of them, associated with Lal Begi Bhangis, describes the origin of Bhangis from Mehtar.
Manual scavenging is historically linked to the caste system in India. Not only cleaning of toilets, but all types of cleaning jobs are considered lowly in India. The elites assigned the most lowly and polluting jobs for members of the Dalit community. The caste-based assignment of cleaning jobs can be traced back to the rise of Hinduism and revival of the Brahmanical order during the Gupta period, considered the golden era in the history of the Indian sub-continent. The workers usually belonged to the Balmiki (or Valmiki) or Hela (or Mehtar) subcastes; considered at the bottom of the hierarchy within the Dalit community itself.
Before the passage of the 1993 Act that prohibited the employment of manual scavengers, local governments employed 'scavengers' to clean dry latrines in private houses and community or public facilities. These jobs were institutionalised by the British. In London, cesspits containing human waste were called 'gongs' or 'jakes', and the men employed to clean them 'gongfermours' or 'gongfarmers'. They emptied such pits only at night and dumped the waste outside the city. They had designated areas to live in and were allowed to use only certain roads and bylanes to carry the waste. The British organized systems for removing excreta and employed Bhangis as manual scavengers. They also brought Dalits working as agricultural labourers in rural areas to do the job in urban areas. This formal employment of Bhangis and Chamars for waste management by the British reinforced the caste-based assignment. Even today, sanitation department jobs are almost unofficially 100% reserved for people from the Scheduled Caste groups.
Current prevalence
Despite the passage of two pieces of legislation, the prevalence of manual scavenging is an open secret. According to the Socio Economic Caste Census 2011, 180,657 households within India are engaged in manual scavenging for a livelihood. The 2011 Census of India found 794,000 cases of manual scavenging across India. The state of Maharashtra, with 63,713, tops the list with the largest number of households working as manual scavengers, followed by the states of Madhya Pradesh, Uttar Pradesh, Tripura and Karnataka. Manual scavenging still survives in parts of India without proper sewage systems or safe fecal sludge management practices. It is thought to be prevalent in Maharashtra, Gujarat, Madhya Pradesh, Uttar Pradesh, and Rajasthan.
In March 2014, the Supreme Court of India declared that there were 96 lakh (9.6 million) dry latrines being manually emptied, but the exact number of manual scavengers is disputed – official figures put it at less than 700,000. An estimate in 2018 put the number of "sanitation workers" in India at 5 million, with 50% of them being women. However, not all sanitation workers are manual scavengers. Another estimate from 2018 put the figure at one million manual scavengers, stating that the number is "unknown and declining" and that 90% of them are women.
The biggest violator of this law in India is the Indian Railways, where many train carriages have toilets that drop excreta directly onto the tracks and which employs scavengers to clean the tracks manually. The situation began to improve in 2018 with the addition of on-train treatment systems for the toilet waste.
Bezwada Wilson, an activist, at the forefront in the battle to eradicate manual scavenging, argues that the practice continues due to its casteist nature. He also argues that the failure of implementation of the 1993 Act is a collective failure of the leadership, judiciary, the administration, and the Dalit movements to address the concerns of the most marginalized community. Unlike infrastructure projects like metros, the issue receives little or no priority from the Government and hence the deadline to comply with the 1993 Act has been continuously postponed. An example that demonstrates the apathy of the government is the fact that none of the Rupees 100 Crore (1,000 million) allocated in the budgets for financial years 2011-12 and 2012-13 was spent. Such is the stigma attached to manual scavengers that even professionals who work for their emancipation get labelled. For example, prolific investigative journalists like Pragya Akhilesh, who is one of the most notable sanitation crusaders in India, was labelled 'Toiletwoman of India'. Bhasha Singh also was labelled a 'manual scavenging journalist'.
Threats and harassment
In India, women who practice manual scavenging face pressure from their respective communities if they miss a day since toilets are cleaned every day. Many women have no choice but to turn up to clean the toilets. The practical requirement that they do not miss a day prevents them from pursuing alternate occupations like agricultural labor. And in the event that they are able to find the means and support to stop manual scavenging, women still face extreme pressure from the community.
Initiatives for eradication
Legislation
In the late 1950s, freedom fighter G. S. Lakshman Iyer banned manual scavenging when he was the chairman of Gobichettipalayam Municipality, which became the first local body to ban it officially. Sanitation is a State subject as per entry 6 of the Constitution. Under this, in February 2013 Delhi announced that it was banning manual scavenging, making it the first state in India to do so. District magistrates are responsible for ensuring that no manual scavengers work in their districts. Within three years of the ruling, municipalities, railways and cantonments were required to make sufficient sanitary latrines available.
Using Article 252 of the Constitution, which empowers Parliament to legislate for two or more States with their consent (such legislation may then be adopted by any other State), the Government of India has enacted various laws. The continuance of this discriminatory practice violates ILO Convention 111 (Discrimination in Employment and Occupation). In 2013, the United Nations human rights chief welcomed the movement in India to eradicate manual scavenging.
In 2007 the Self Employment Scheme for Rehabilitation of Manual Scavengers was passed to help in transition to other occupations.
The Employment of Manual Scavengers and Construction of Dry Latrines (Prohibition) Act, 1993
After six states passed resolutions requesting the Central Government to frame a law, "The Employment of Manual Scavengers and Construction of Dry Latrines (Prohibition) Act, 1993", drafted by the Ministry of Urban Development under the Narasimha Rao government, was passed by Parliament in 1993. The act punishes the employment of scavengers or the construction of dry (non-flush) latrines with imprisonment for up to one year and/or a fine of Rs 2,000. No convictions were obtained under the law during the 20 years it was in force.
The Prohibition of Employment as Manual Scavengers and their Rehabilitation Act 2013 or M.S. Act 2013
The Government passed the new legislation in September 2013 and issued a notification for it. In December 2013, the Government also formulated rules called "The Prohibition of Employment as Manual Scavengers and their Rehabilitation Rules 2013" or "M.S. Rules 2013". A hearing on writ petition number 583 of 2003 concerning manual scavenging was held on 27 March 2014; the Supreme Court issued final orders and disposed of the case with various directions to the Government. The broad objectives of the act are to eliminate insanitary latrines, prohibit the employment of manual scavengers and the hazardous manual cleaning of sewers and septic tanks, and to survey manual scavengers and provide for their rehabilitation.
Prohibition of Employment as Manual Scavengers and their Rehabilitation (Amendment) Bill, 2020
The Bill calls for a complete mechanization of cleaning sewers and septic tanks.
Activism
In India in the 1970s, Bindeshwar Pathak introduced his "Sulabh" concept for building and managing public toilets, which has introduced hygienic and well-managed public toilet systems. Activist Bezwada Wilson founded a group in 1994, Safai Karmachari Andolan, to campaign for the demolition of the then newly illegal 'dry latrines' (pit latrines) and the abolition of manual scavenging. Despite the efforts of Wilson and other activists, the practice persisted two decades later. In July 2008, the United Nations held a fashion show, "Mission Sanitation", as part of its International Year of Sanitation; 36 former manual scavengers walked the runway alongside top models to raise awareness of the issue of manual scavenging.
The Movement for Scavenger Community (MSC) is an NGO founded in 2009 by Vimal Kumar with young people, social activists, and like-minded people from the scavenger community. MSC is committed to working towards the social and economic empowerment of the scavenger community through the medium of education.
The "Campaign for Dignity" (Garima Abhiyan) in Madhya Pradesh in India has assisted more than 20,000 women to stop doing manual scavenging as an occupation.
Pragya Akhilesh is an investigative journalist known as the 'sanitation woman of India' for her prolific reporting on the Swachh Bharat Mission's irregularities, in particular its focus on infrastructure building rather than protecting the rights of thousands of sanitation workers in India. Since 2010 she has highlighted the government's failure to recognise the labour movement of sanitation workers and its failure to eradicate manual scavenging and rehabilitate manual scavengers in India.
Other countries
Historically, manual emptying of toilets also took place in Europe. The excreta was known as night soil and in Tudor England the workers were called gong farmers.
In Pakistan, municipalities mostly rely on Christian sweepers. In the city of Karachi, sweepers keep the sewer system flowing, using their bare hands to unclog crumbling drainpipes of feces, plastic bags and hazardous hospital refuse, part of the 1,750 million litres of waste the city's 20 million residents produce daily. Christians make up a small percentage of Pakistan's population, yet they fill the majority of the sweeper jobs. When Karachi's municipality tried to recruit Muslims to unclog gutters, they refused to get down into the sewers and instead swept the streets. The job was left to Christians and lower-caste Hindus.
In Sierra Leone, waste storage practices in homes are poor, adding to collection difficulties. Unsorted waste is often stored in old plastic bags and leaky buckets instead of plastic bag-lined bins. As in most African countries, waste collection in Sierra Leone is a problem. Garbage collected from communal skips by workers, who are not provided with personal protective equipment such as gloves, is moved straight to the city's two disposal sites. Scavengers try to earn a living by scouring through rotting rubbish, plastic bags and raw sewage for discarded things they can sell.
See also
Sanitation worker
Swachh Bharat Abhiyan (Clean India Mission)
Waste collector
Water supply and sanitation in India
References
Sewerage
Toilets
Cleaning and maintenance occupations | Manual scavenging | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 4,190 | [
"Excretion",
"Water pollution",
"Sewerage",
"Environmental engineering",
"Toilets"
] |
4,198,535 | https://en.wikipedia.org/wiki/GRB%20060218 | GRB 060218 (and SN 2006aj) was a gamma-ray burst (abbreviated as GRB) with unusual characteristics never seen before. This GRB was detected by the Swift satellite on February 18, 2006, and its name is derived from the date. It was located in the constellation Aries.
GRB 060218's duration (almost 2000 seconds) and its origin in a galaxy 440 million light years away are far longer and closer, respectively, than typical gamma-ray bursts seen before, and the burst was also considerably dimmer than average despite its close distance.
As of February 2006, the phenomenon was not yet well understood. However, an optical afterglow to the gamma-ray burst had been detected and was brightening, and some scientists believed that a supernova (SN 2006aj) might be emerging.
Four different groups of researchers, led by Sergio Campana, Elena Pian, Alicia Soderberg and Paolo Mazzali respectively, carried out the investigation of the phenomenon and presented their results in Nature on August 31, 2006. They found the strongest evidence yet that supernovae and GRBs might be linked, because GRB 060218 showed signs of both the GRB and the supernova. The exploding star is believed to have had the boundary mass (about 20 Solar masses) for supernovae to leave either a black hole or a neutron star after its explosion.
References
External links
Light curves and spectra on the Open Supernova Catalog
http://www.space.com/scienceastronomy/060223_explosion.html
http://skyandtelescope.com/news/article_1683_1.asp
http://www.newscientistspace.com/article/dn8776.html
http://sabbe.fragzone.se/KPO/grb060218.htm
YahooNews
Finder Charts for GRB 060218 shows the area of the sky prior to the incident.
More NASA observational reports at the Burst Information (Current and Archives)
Simbad
Image SN 2006aj
060218
Aries (constellation)
20060218
February 2006 | GRB 060218 | [
"Astronomy"
] | 463 | [
"Aries (constellation)",
"Constellations"
] |
4,198,568 | https://en.wikipedia.org/wiki/Coverage%20map | Coverage maps are designed to indicate the service areas of radiocommunication transmitting stations. Typically these may be produced for radio or television stations, for mobile telephone networks and for satellite networks. For satellite networks, a coverage map is often known as a footprint.
Definition of coverage
Typically a coverage map will indicate the area within which the user can expect to obtain good reception of the service in question using standard equipment under normal operating conditions. Additionally, the map may also separately denote supplementary service areas where good reception may be obtained but other stations may be stronger, or where the reception may be variable but the service may still be usable.
Technical details
The field strength that the marked service boundary on a coverage map represents will be defined by whoever produces the map, but typical examples are as follows:
VHF(FM) / Band II
For VHF(FM) / Band II, the BBC defines the service area boundary for stereo services as corresponding to an average field strength of 54 dB (relative to 1 μV/m) at a height of 10 m above ground level. For mono it is 48 dB (relative to 1 μV/m).
The receiving antenna height of 10m dates from the 1950s when receivers were relatively insensitive and used rooftop antennas. Although this may seem unrealistic for typical situations today, when combined with the above threshold it is considered a good proxy for providing coverage to more sensitive modern receivers used without external rooftop antennas.
MF / Mediumwave
For MF / Mediumwave, the BBC defines the daytime service area boundary as a minimum field strength of 2 mV/m. At night, the service area of mediumwave services can be drastically reduced by co-channel interference from distant stations.
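For readers converting between these units, the sketch below applies the standard 20·log10 rule for field quantities; the helper function names are our own:

```python
# Converting between the field-strength units quoted above, using the
# standard 20*log10 rule for field quantities.
import math

def to_dBuV_per_m(field_uV_per_m: float) -> float:
    """Field strength in dB relative to 1 uV/m."""
    return 20.0 * math.log10(field_uV_per_m)

def to_uV_per_m(field_dBuV: float) -> float:
    """Inverse conversion, back to microvolts per metre."""
    return 10.0 ** (field_dBuV / 20.0)

# BBC Band II boundaries quoted above:
print(f"54 dB (re 1 uV/m), stereo = {to_uV_per_m(54)/1000:.2f} mV/m")
print(f"48 dB (re 1 uV/m), mono   = {to_uV_per_m(48)/1000:.2f} mV/m")
# BBC mediumwave daytime boundary of 2 mV/m, expressed in dB:
print(f"2 mV/m = {to_dBuV_per_m(2000):.0f} dB relative to 1 uV/m")
```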
Limitations
Often coverage maps show general coverage for large regions, and therefore any boundary indicated should not be interpreted as a rigid limit. The biggest cause of uncertainty for a coverage map is the quality (mainly sensitivity) of the receiving apparatus used. A coverage map may be produced to indicate the area in which a certain signal strength is delivered. Even if it is 100% accurate (which it never is), whether a signal is actually receivable depends very much on whether the receiving apparatus is sensitive enough to use a signal of that level. Commercial receivers can vary widely in their sensitivity, thus perception of coverage can vary widely.
The quality of reception can be very different at places only short distances apart, and this phenomenon is more apparent as the transmission frequency increases. Inevitably small pockets of poor reception may exist within the main service area that cannot be shown on the map due to scale issues. Conversely, the use of sensitive equipment, high gain antennas, or simply being located on high ground can yield good signal strengths well outside the indicated area.
The significance of local geographical conditions cannot be overemphasised, and this was underlined by an experiment which revealed the signal reception conditions around a typical house. The site did not have the critical "line-of-sight propagation" to the transmitter. Average signal levels, taken at the same height, varied by up to 6 dB, and for individual frequencies by up to 14 dB. In RF reception terms these are huge differences: 6 dB corresponds to a fourfold change in received power, and 14 dB to a twenty-five-fold change.
Although carriers and broadcasters attempt to design their networks to eliminate dead zones, no network is perfect, so coverage breaks within the general coverage areas are still possible.
There are limitations inherent to the way in which data collection for coverage maps is carried out. Traditional coverage maps are based on models, constructed from readings taken by dedicated network testers. This often means that coverage maps show the theoretical capacity of the network rather than its real-world performance. In recent years companies such as OpenSignal and Sensorly have emerged that provide coverage maps based on information crowdsourced from consumer applications. The advantage of this approach is that the coverage maps show network reach and performance as it is experienced by its users.
Often companies will construct low power satellite stations to fill in bad reception areas that become apparent once the high power transmitter's coverage map has identified where the network is deficient.
References
External links
The Transmission Gallery: Index of UK TV coverage maps
TV Fool: Index of US TV coverage maps
An example of a crowd-sourced coverage map
Map types
Mobile technology
Radio
Broadcasting | Coverage map | [
"Technology"
] | 847 | [
"nan"
] |
4,198,731 | https://en.wikipedia.org/wiki/Dalteparin%20sodium | Dalteparin is a low molecular weight heparin. It is marketed as Fragmin. Like other low molecular weight heparins, dalteparin is used for prophylaxis or treatment of deep vein thrombosis and pulmonary embolism to reduce the risk of a stroke or heart attack. Dalteparin acts by potentiating the activity of antithrombin III, inhibiting formation of both Factor Xa and thrombin. It is normally administered by self-injection.
The CLOT study, published in 2003, showed that in patients with malignancy and acute venous thromboembolism (VTE), dalteparin was more effective than warfarin in reducing the risk of recurrent embolic events. Dalteparin is not superior to unfractionated heparin in preventing blood clots.
Heparins are cleared by the kidneys, but studies have shown that dalteparin does not accumulate even if kidney function is reduced. Based on animal studies, approximately 70% of dalteparin is excreted through the kidneys.
In May 2019, the U.S. Food and Drug Administration (FDA) approved Fragmin injection to reduce the recurrence of symptomatic VTE in pediatric patients one month of age and older. It is on the World Health Organization's List of Essential Medicines.
References
External links
Heparins
Drugs developed by Pfizer
Polysaccharides | Dalteparin sodium | [
"Chemistry"
] | 306 | [
"Carbohydrates",
"Polysaccharides"
] |
4,199,006 | https://en.wikipedia.org/wiki/Bone%20wax | Bone wax is a waxy substance used to help mechanically control bleeding from bone surfaces during surgical procedures.
It is generally made of beeswax with a softening agent such as paraffin or petroleum jelly and is smeared across the bleeding edge of the bone, blocking the holes and causing immediate bone hemostasis through a tamponade effect. Bone wax is most commonly supplied in sterile sticks, and usually requires softening before it can be applied.
History
A note by Victor Horsley published in the British Medical Journal in 1892 described a formulation of "antiseptic wax" having seven parts beeswax, one part almond oil, and 1% salicylic acid. The material was useful for controlling bleeding when pressed into the pores and channels of cut or damaged bone. The wax was sterilized by boiling and kept in stoppered bottles. This material soon became the standard of care for bleeding control in bone for general orthopedics, craniomaxillofacial surgery, and cardiothoracic surgery, where the sternum is often split longitudinally to provide access to the heart.
Action
Ordinary bone wax is effective by virtue of its tamponade action, but is considered to have no active hemostatic properties (i.e. does not activate the blood clotting cascade). In addition, bone wax is not soluble in the bodily fluids and thus remains at the site of implantation for long periods of time, if not indefinitely. The portion of traditional bone wax that departs the implant site is most likely carried away through the action of the foreign body response and is associated with a low-grade inflammatory response at and near the implant site. The residual product can also potentially serve as a nidus (breeding site) for post-operative infection.
Modern formulations
Modern day bone wax is commercially available in substantially non-absorbable formulations similar to Horsley's original composition, as well as in absorbable/resorbable formats. Most are available as a firm wax in stick form that must be softened by kneading prior to use.
More recent advances have led to the introduction of a bone hemostat in putty format. Hemostatic putties act via tamponade in the same way as the stick waxes, but are ready to use and eliminate the requirement to soften the product prior to use.
References
Horsley, V. Antiseptic Wax. Brit. M. J. 1165, 1892
Hemasorb 510(k) summary
Ostene 510(k) summary
Implants (medicine)
Biomaterials | Bone wax | [
"Physics",
"Biology"
] | 531 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
4,199,484 | https://en.wikipedia.org/wiki/SN%201986G | SN 1986G was a supernova observed on May 3, 1986 by Robert Evans. Its host galaxy, Centaurus A, lies about 15 million light-years away in the constellation Centaurus, so the explosion itself took place roughly 15 million years before its light reached Earth.
SN 1986G appeared as a bright blue-green star in the middle of the left part of the dust belt of Centaurus A. The blue-green color arises because David Malin could take the red plate used in his composite image only a year after the supernova occurred, by which time the supernova had already faded.
See also
Centaurus A
External links
Light curves and spectra on the Open Supernova Catalog
Radio Observations of the Type Ia SN 1986G and Constraints on the Symbiotic-Star Progenitor Scenario
Centaurus
Supernovae
Astronomical objects discovered in 1986 | SN 1986G | [
"Chemistry",
"Astronomy"
] | 184 | [
"Supernovae",
"Astronomical events",
"Centaurus",
"Constellations",
"Explosions"
] |
4,200,042 | https://en.wikipedia.org/wiki/Critical%20heat%20flux | In the study of heat transfer, critical heat flux (CHF) is the heat flux at which boiling ceases to be an effective form of transferring heat from a solid surface to a liquid.
Description
Boiling systems are those in which liquid coolant absorbs energy from a heated solid surface and undergoes a change in phase. In flow boiling systems, the saturated fluid progresses through a series of flow regimes as vapor quality is increased. In systems that utilize boiling, the heat transfer rate is significantly higher than if the fluid were a single phase (i.e. all liquid or all vapor). The more efficient heat transfer from the heated surface is due to heat of vaporization and sensible heat. Therefore, boiling heat transfer has played an important role in industrial heat transfer processes such as macroscopic heat exchangers in nuclear and fossil power plants, and in microscopic heat transfer devices such as heat pipes and microchannels for cooling electronic chips.
The use of boiling as a means of heat removal is limited by a condition called critical heat flux (CHF). The most serious problem that can occur around CHF is that the temperature of the heated surface may increase dramatically due to significant reduction in heat transfer. In industrial applications such as electronics cooling or instrumentation in space, the sudden increase in temperature may possibly compromise the integrity of the device.
Two-phase heat transfer
The convective heat transfer between a uniformly heated wall and the working fluid is described by Newton's law of cooling:

$$q'' = h\,(T_w - T_f)$$

where $q''$ is the heat flux, $h$ is the proportionality constant called the heat transfer coefficient, $T_w$ is the wall temperature and $T_f$ is the fluid temperature. If $h$ decreases significantly due to the occurrence of the CHF condition, $T_w$ will increase for fixed $q''$ and $T_f$, while $q''$ will decrease for fixed $T_w$ and $T_f$.
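A minimal sketch of this relation, with purely illustrative order-of-magnitude values for the heat transfer coefficient in nucleate and film boiling, shows why the wall temperature can rise dramatically at CHF:

```python
# Illustrative wall-temperature excursion implied by q'' = h*(T_w - T_f)
# when h collapses at CHF. All numbers are order-of-magnitude assumptions.
q_flux = 1.0e6       # imposed heat flux, W/m^2 (heat-flux-controlled surface)
T_fluid = 100.0      # saturated water at 1 atm, deg C

h_values = {
    "nucleate boiling":      2.0e4,   # typical order of magnitude, W/(m^2 K)
    "post-CHF film boiling": 5.0e2,   # typical order of magnitude, W/(m^2 K)
}

for regime, h in h_values.items():
    T_wall = T_fluid + q_flux / h    # Newton's law of cooling, solved for T_w
    print(f"{regime:>22}: h = {h:.0e} W/(m^2 K) -> T_w ~ {T_wall:.0f} deg C")
# For a fixed q'', the ~40x drop in h raises the wall temperature from
# ~150 deg C to ~2100 deg C in this example: the dramatic rise described above.
```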
Modes of CHF
An understanding of the CHF phenomenon and an accurate prediction of the CHF condition are important for the safe and economic design of many heat transfer units, including nuclear reactors, fossil fuel boilers, fusion reactors, electronic chips, etc. Therefore, the phenomenon has been investigated extensively around the world since Nukiyama first characterized it. In 1950, Kutateladze suggested the hydrodynamical theory of the burnout crisis. Much significant work has been done over recent decades with the development of water-cooled nuclear reactors. Many aspects of the phenomenon are now well understood, and several reliable prediction models are available for conditions of common interest.
The use of the term critical heat flux (CHF) is inconsistent among authors. The United States Nuclear Regulatory Commission has suggested using the term “critical boiling transition” (CBT) to indicate the phenomenon associated with a significant reduction in two-phase heat transfer. For a single species, the liquid phase generally has considerably better heat transfer properties than the vapor phase, namely thermal conductivity. So in general CBT is the result of some degree of liquid deficiency to a local position along a heated surface. The two mechanisms that result in reaching CBT are: departure from nucleate boiling (DNB) and liquid film dryout.
DNB
Departure from nucleate boiling (DNB) occurs in sub-cooled flows and bubbly flow regimes. DNB happens when many bubbles near the heated surface coalesce and impede the ability of local liquid to reach the surface. The mass of vapor between the heated surface and local liquid may be referred to as a vapor blanket.
Dryout
Dryout means the disappearance of liquid on the heat transfer surface, which results in the CBT. Dryout of the liquid film occurs in annular flow, which is characterized by a vapor core, a liquid film on the wall, and liquid droplets entrained within the core. Shear at the liquid-vapor interface drives the flow of the liquid film along the heated surface. In general, the two-phase heat transfer coefficient increases as the liquid-film thickness decreases. The process has been shown to occur over many instances of dryout events, which span a finite duration and are local to a position. The CBT occurs when the fraction of time a local position is subjected to dryout becomes significant. A single dryout event, or even several dryout events, may be followed by periods of sustained contact between the liquid film and the previously dry region. Many dryout events (hundreds or thousands) occurring in sequence are the mechanism for the significant reduction in heat transfer associated with dryout CBT.
Post-CHF
Post-CHF is used to denote the general heat transfer deterioration in flow boiling process, and liquid could be in the form of dispersed spray of droplets, continuous liquid core, or transition between the former two cases. Post-dryout can be specifically used to denote the heat transfer deterioration in the condition when liquid is only in the form of dispersed droplets, and denote the other cases by the term Post-DNB.
Correlations
The critical heat flux is an important point on the boiling curve, and it may be desirable to operate a boiling process near this point. However, care must be taken not to dissipate heat in excess of this amount. Zuber, through a hydrodynamic stability analysis of the problem, developed an expression approximating this point:

$$q''_{\text{crit}} = C\, h_{fg}\, \rho_v \left[ \frac{\sigma g (\rho_l - \rho_v)}{\rho_v^{2}} \right]^{1/4}$$
Units: critical heat flux $q''_{\text{crit}}$: kW/m²; latent heat of vaporization $h_{fg}$: kJ/kg; surface tension $\sigma$: N/m; densities $\rho_l, \rho_v$: kg/m³; gravitational acceleration $g$: m/s².
It is independent of the surface material and is weakly dependent upon the heated surface geometry described by the constant $C$. For large horizontal cylinders, spheres and large finite heated surfaces, the value of the Zuber constant is $C = \pi/24 \approx 0.131$. For large horizontal plates, a value of $C = 0.149$ is more suitable.
The critical heat flux depends strongly on pressure. At low pressures (including atmospheric pressure), the pressure dependence is mainly through the change in vapor density leading to an increase in the critical heat flux with pressure. However, as pressures approach the critical pressure, both the surface tension and the heat of vaporization converge to zero, making them the dominant sources of pressure dependency.
For water at 1 atm, the above equation gives a critical heat flux of approximately 1000 kW/m².
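As a numerical check (property values are standard handbook figures for saturated water at 1 atm and may differ slightly between sources), the expression can be evaluated directly:

```python
# Evaluating the Zuber expression for saturated water at 1 atm.
C     = 0.131      # Zuber constant, ~pi/24 (cylinders, spheres, finite surfaces)
h_fg  = 2.257e6    # latent heat of vaporization, J/kg
rho_l = 957.9      # saturated liquid density, kg/m^3
rho_v = 0.596      # saturated vapor density, kg/m^3
sigma = 0.0589     # surface tension, N/m
g     = 9.81       # gravitational acceleration, m/s^2

q_crit = C * h_fg * rho_v * (sigma * g * (rho_l - rho_v) / rho_v**2) ** 0.25
print(f"q''_crit ~ {q_crit / 1e3:.0f} kW/m^2")   # ~1100 kW/m^2, i.e. ~1.1 MW/m^2
```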
See also
Leidenfrost effect
Nucleate boiling
References
External links
Modeling of the boiling crisis
Film dryout near critical heat flux - video
Thermodynamics | Critical heat flux | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,248 | [
"Thermodynamics",
"Dynamical systems"
] |
4,200,196 | https://en.wikipedia.org/wiki/Entomopathogenic%20nematode | Entomopathogenic nematodes (EPN) are a group of nematodes (thread worms), that cause death to insects. The term entomopathogenic has a Greek origin, with entomon, meaning insect, and pathogenic, which means causing disease. They are animals that occupy a bio control middle ground between microbial pathogens and predator/parasitoids. Although many other parasitic thread worms cause diseases in living organisms (sterilizing or otherwise debilitating their host), entomopathogenic nematodes are specific in only infecting insects. Entomopathogenic nematodes (EPNs) live parasitically inside the infected insect host, and so they are termed as endoparasitic. They infect many different types of insects living in the soil like the larval forms of moths, butterflies, flies and beetles as well as adult forms of beetles, grasshoppers and crickets. EPNs have been found all over the world in a range of ecologically diverse habitats. They are highly diverse, complex and specialized. The most commonly studied entomopathogenic nematodes are those that can be used in the biological control of harmful insects, the members of Steinernematidae and Heterorhabditidae. They are the only insect-parasitic nematodes possessing an optimal balance of biological control attributes.
Classification
Life cycle
Because of their economic importance, the life cycles of the genera belonging to the families Heterorhabditidae and Steinernematidae are well studied. Although not closely related phylogenetically, both share similar life histories (Poinar 1993). The cycle begins with an infective juvenile, whose only function is to seek out and infect new hosts. When a host has been located, the nematodes penetrate into the insect body cavity, usually via natural body openings (mouth, anus, spiracles) or areas of thin cuticle (Shapiro-Ilan and Gaugler, "Nematodes"). After entering an insect, infective juveniles release an associated mutualistic bacterium from their gut, which multiplies rapidly. These bacteria, of the genus Xenorhabdus for steinernematids and Photorhabdus for heterorhabditids, cause host mortality within 24–48 hours. The nematodes provide shelter to the bacteria, which, in return, kill the insect host and provide nutrients to the nematode. Without this mutualism no nematode is able to act as an entomoparasite. Together, the nematodes and bacteria feed on the liquefying host and reproduce for several generations inside the cadaver, maturing through the growth stages J2–J4 into adults. Steinernematid infective juveniles may become males or females, whereas heterorhabditids develop into self-fertilizing hermaphrodites, with later generations producing two sexes. When food resources in the host become scarce, the adults produce new infective juveniles adapted to withstand the outside environment. The life cycles of the EPNs are completed within a few days (Shapiro-Ilan and Gaugler, "Nematodes"). After about a week, hundreds of thousands of infective juveniles emerge and leave in search of new hosts, carrying with them an inoculation of mutualistic bacteria received from the internal host environment (Boemare 2002, Gaugler 2006). Their growth and reproduction depend upon conditions established in the host cadaver by the bacterium. The nematode's bacterium contributes anti-immune proteins to assist in overcoming host defenses (Shapiro-Ilan and Gaugler, "Nematodes").
Foraging strategies
The foraging strategies of entomopathogenic nematodes vary between species, influencing their soil depth distributions and host preferences. Infective juveniles use strategies to find hosts that vary from ambush and cruise foraging (Campbell 1997). In order to ambush prey, some Steinernema species nictate, or raise their bodies off the soil surface so they are better poised to attach to passing insects, which are much larger in size (Campbell and Gaugler 1993). Many Steinernema are able to jump by forming a loop with their bodies that creates stored energy which, when released, propels them through the air (Campbell and Kaya 2000). Other species adopt a cruising strategy and rarely nictate. Instead, they roam through the soil searching for potential hosts. These foraging strategies influence which hosts the nematodes infect. For example, ambush predators such as Steinernema carpocapsae infect more insects on the surface, while cruising predators like Heterorhabditis bacteriophora infect insects that live deep in the soil (Campbell and Gaugler 1993).
Population ecology
Competition and coexistence
Inside their insect hosts, EPNs experience both intra and interspecific competition. Intraspecific competition takes place among nematodes of the same species when the number of infective juveniles penetrating a host exceeds the amount of resources available. Interspecific competition occurs when different species compete for resources. In both cases, the individual nematodes compete with each other indirectly by consuming the same resource, which reduces their fitness and may result in the local extinction of one species inside the host (Koppenhofer and Kaya 1996). Interference competition, in which species compete directly, can also occur. For example, a steinernematid species that infects a host first usually excludes a heterorhabditid species. The mechanism for this superiority may be antibiotics produced by Xenorhabdus, the symbiotic bacterium of the steinernematid. These antibiotics prevent the symbiotic bacterium of the heterorhabditid from multiplying (Kaya and Koppenhofer 1996). In order to avoid competition, some species of infective juveniles are able to judge the quality of a host before penetration. The infective juveniles of S. carpocapsae are repelled by 24-hour-old infections, likely by the smell of their own species' mutualistic bacteria (Grewal et al. 1997).
Interspecific competition between nematode species can also occur in the soil environment outside of hosts. Millar and Barbercheck (2001) showed that the introduced nematode Steinernema riobrave survived and persisted in the environment for up to a year after its release. S. riobrave significantly depressed detection of the endemic nematode H. bacteriophora, but never completely displaced it, even after two years of continued introductions. S. riobrave had no effect on populations of the native nematode, S. carpocapsae, though, which suggests that coexistence is possible. Niche differentiation appears to limit competition between nematodes. Different foraging strategies allow two species to co-exist in the same habitat. Different foraging strategies separate the nematodes in space and enable them to infect different hosts. EPNs also occur in patchy distributions, which may limit their interactions and further support coexistence (Kaya and Koppenhofer 1996).
Population distribution
Entomopathogenic nematodes are typically found in patchy distributions, which vary in space and time, although the degree of patchiness varies between species (reviewed in Lewis 2002). Factors responsible for this aggregated distribution may include behavior, as well as the spatial and temporal variability of the nematodes' natural enemies, such as nematode-trapping fungi. Nematodes also have limited dispersal ability. Many infective juveniles are produced from a single host, which could also produce aggregates. Patchy EPN distributions may also reflect the uneven distribution of hosts and nutrients in the soil (Lewis et al. 1998; Stuart and Gaugler 1994; Campbell et al. 1997, 1998). EPNs may persist as metapopulations, in which local population fragments are highly vulnerable to extinction and fluctuate asynchronously (Lewis et al. 1998). The metapopulation as a whole can persist as long as the rate of colonization is greater than or equal to the rate of population extinction (Lewis et al. 1998). The founding of new populations and movement between patches may depend on the movement of infective juveniles or the movement of infected hosts (Lewis et al. 1998). Recent studies suggest that EPNs may also use non-host animals, such as isopods and earthworms, for transport (Eng et al. 2005, Shapiro et al. 1993) or can be scavengers (San-Blas and Gowen, 2008).
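The persistence condition just described matches the classic Levins (1969) metapopulation model, in which habitat patches are colonized at rate c and go extinct at rate e. The sketch below is a minimal illustration, not taken from the cited studies; the parameter values are arbitrary assumptions.

```python
# Minimal Levins metapopulation sketch: dp/dt = c*p*(1-p) - e*p.
# The occupied fraction p persists (approaching 1 - e/c) only when c >= e.
def occupied_fraction(c, e, p0=0.5, dt=0.01, steps=50000):
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

print(occupied_fraction(c=0.6, e=0.3))  # colonization > extinction: settles near 0.5
print(occupied_fraction(c=0.3, e=0.6))  # colonization < extinction: collapses toward 0
```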
Community ecology
Parasites can significantly affect their hosts, as well as the structure of the communities to which they and their hosts belong (Minchella and Scott 1991). Entomopathogenic nematodes have the potential to shape the populations of plants and host insects, as well as the species composition of the surrounding animal soil community.
Entomopathogenic nematodes affect populations of their insect hosts by killing and consuming individuals. When more EPNs are added to a field environment, typically at concentrations of , the population of host insects measurably decreases (Campbell et al. 1998, Strong et al. 1996). Agriculture exploits this finding, and the inundative release of EPNs can effectively control populations of soil insect pests in citrus, cranberries, turfgrass, and tree fruit (Lewis et al. 1998).
If entomopathogenic nematodes suppress the population of insect root herbivores, they indirectly benefit plants by freeing them from grazing pressure. This is an example of a trophic cascade in which consumers at the top of the food web (nematodes) exert an influence on the abundance of resources (plants) at the bottom. The idea that plants can benefit from the application of their herbivore's enemies is the principle behind biological control. Consequently, much of EPN biological research is driven by agricultural applications.
Examples of the top-down effects of entomopathogenic nematodes are not restricted to agricultural systems. Researchers at the Bodega Marine Laboratory examined the strong top-down effects that naturally occurring EPNs can have on their ecosystem (Strong et al. 1996). In a coastal shrubland food chain the native EPN, Heterorhabditis hepialus, parasitized ghost moth caterpillars, and ghost moth caterpillars consumed the roots of bush lupine. The presence of H. hepialus correlated with lower caterpillar numbers and healthier plants. In addition, the researchers observed high mortality of bush lupine in the absence of EPNs. Aerial photographs spanning the previous 40 years indicated that the stands where nematodes were prevalent had little or no mass die-off of lupine. In stands with low nematode prevalence, however, the photos showed repeated lupine die-offs. These results implied that the nematode, as a natural enemy of the ghost moth caterpillar, protected the plant from damage. The authors even suggested that the interaction was strong enough to affect the population dynamics of bush lupine (Strong et al. 1996).
Not only do entomopathogenic nematodes affect their host insects, they can also change the species composition of the soil community. Many familiar animals like earthworms and insect grubs live in the soil, but smaller invertebrates such as mites, collembolans, and nematodes are also common. Aside from EPNs, the soil ecosystem includes predatory, bacteriovorous, fungivorous and plant parasitic nematode species. Since EPNs are applied in agricultural systems at a rate of , the potential for unintended consequences on the soil ecosystem appears large. EPNs have not had an adverse effect on mite and collembolan populations (Georgis et al. 1991), yet there is strong evidence that they affect the species diversity of other nematodes. In a golf course ecosystem, the application of H. bacteriophora, an introduced nematode, significantly reduced the abundance, species richness, maturity, and diversity of the nematode community (Somaseker et al. 2002). EPNs had no effect on free-living nematodes. However, there was a reduction in the number of genera and abundance of plant-parasitic nematodes, which often remain enclosed within growths on the plant root. The mechanism by which insect parasitic nematodes have an effect on plant parasitic nematodes remains unknown. Although this effect is considered beneficial for agricultural systems where plant parasitic nematodes cause crop damage, it raises the question of what other effects are possible. Future research on the impacts EPNs have on soil communities will lead to greater understanding of these interactions.
In aboveground communities, EPNs have few side effects on other animals. One study reported that Steinernema feltiae and Heterorhabditis megidis, when applied in a range of agricultural and natural habitats, had little impact on non-pest arthropods. Some minimal impacts did occur, however, on non-pest species of beetles and flies (Bathon 1996). Unlike chemical pesticides, EPNs are considered safe for humans and other vertebrates.
Disturbance
Frequent disturbance often perturbs agricultural habitats, and the response to disturbance varies among EPN species. In traditional agricultural systems, tilling disturbs the soil ecosystem, affecting biotic and abiotic factors. For example, tilled soils have lower microbial, arthropod, and nematode species diversity (Lupwayi et al. 1998). Tilled soil also has less moisture and higher temperatures. In a study examining the tolerances of different EPN species to tillage, the density of a native nematode, H. bacteriophora, was unaffected by tillage, while the density of an introduced nematode, S. carpocapsae, decreased. The density of a third nematode introduced to the system, Steinernema riobrave, increased with tillage (Millar and Barbercheck 2002). Habitat preferences in temperature and soil depth can partially explain the nematodes' different responses to disturbance. S. carpocapsae prefers to remain near the soil surface and so is more vulnerable to soil disturbance than H. bacteriophora, which forages deeper and can escape the effects of tillage. S. riobrave may have responded well to tillage because it is better at surviving and persisting in the hotter and drier conditions created by tillage (Millar and Barbercheck 2002). Data showed that Steinernema sp. found in some regions of Indonesia displayed high adaptive capability when applied in other regions or conditions (Anton Muhibuddin, 2008). The response of EPNs to other forms of disturbance is less well defined. Nematodes are not affected by certain pesticides and are able to survive flooding. The effects of natural disturbances such as fire have not been examined.
Applications
Although the biological control industry has acknowledged EPNs since the 1980s, relatively little is understood about their biology in natural and managed ecosystems (Georgis 2002). Nematode-host interactions are poorly understood, and more than half of the natural hosts for recognized Steinernema and Heterorhabditis species remain unknown (Akhurst and Smith 2002). Information is lacking because isolates of naturally infected hosts are rare, so native nematodes are often baited using Galleria mellonella, a lepidopteran that is highly susceptible to parasitic infection. Laboratory studies showing wide host ranges for EPNs were often overestimates, because in a laboratory, contact with a host is assured and environmental conditions are ideal; there are no "ecological barriers" to infection (Kaya and Gaugler 1993, Gaugler et al. 1997). Therefore, the broad host range initially predicted by assay results has not always translated into insecticidal success.
Nematodes are amenable to mass production and do not require specialized application equipment, since they are compatible with standard agrochemical equipment, including various sprayers (backpack, pressurized, mist, electrostatic, fan and aerial) and irrigation systems (Cranshaw & Zimmerman 2013).
The lack of knowledge about nematode ecology has resulted in unanticipated failures to control pests in the field. For example, parasitic nematodes were found to be completely ineffective against blackflies and mosquitoes due to their inability to swim (Lewis et al.1998). Efforts to control foliage-feeding pests with EPNs were equally unsuccessful, because nematodes are highly sensitive to UV light and desiccation (Lewis et al.1998). Comparing the life histories of nematodes and target pests can often explain such failures (Gaugler et al. 1997). Each nematode species has a unique array of characteristics, including different environmental tolerances, dispersal tendencies, and foraging behaviors (Lewis et al. 1998). Increased knowledge about the factors that influence EPN populations and the impacts they have in their communities will likely increase their efficacy as biological control agents.
Recently, studies have shown that using both EPN families (steinernematids and heterorhabditids) in combination for biological control of plum curculio in orchards in the northeastern United States has reduced populations by as much as 70–90% in the field, depending on insect stage, treatment timing and field conditions. More studies are being conducted on the efficacy of EPNs as a biological control agent for organic growers, as an alternative to chemical treatments that are less effective at controlling insect infestations (Agnello, Jentsch, Shield, Testa, and Keller 2014).
See also
Biological insecticides
Entomopathogenic fungus
References
Akhurst R and K Smith 2002. "Regulation and safety". p 311–332 in Gaugler I, editor. Entomopathogenic Nematology. CABI Publishing. New Jersey.
Boemare, N. 2002. "Biology, Taxonomy, and Systematics of Photorabdus and Xenorhabdus". p 57–78 in Gaugler I, editor. Entomopathogenic Nematology. CABI Publishing. New Jersey.
Bathon, H. 1996. "Impact of entomopathogenic nematodes on non-target hosts". Biocontrol Science and Technology 6: 421–434.
Campbell, J.F. and R. Gaugler. 1993. "Nictation behavior and its ecological implications in the host search strategies of entomopathogenic nematodes". Behaviour 126(3–4): 155–169.
Campbell, J.F. and Gaugler, R.R. 1997. "Inter-specific variation in entomopathogenic nematode foraging strategy: Dichotomy or variation along a continuum?" Fundamental and Applied Nematology 20 (4): 393–398.
Campbell JF; Orza G; Yoder F, Lewis E and Gaugler R. 1998. "Spatial and temporal distribution of endemic and released entomopathogenic nematode populations in turfgrass". Entomologia Experimentalis et Applicata. 86:1–11.
Campbell J.F., and H.K. Kaya. 2000. "Influence of insect associated cues on the jumping behavior of entomopathogenic nematodes (Steinernema spp.)". Behavior 137: 591–609 Part 5.
Eng, M. S., E.L. Preisser, and D.R. Strong. 2005. "Phoresy of the entomopathogenic nematode Heterorhabditis marelatus by a non-host organism, the isopod Porcellio scaber". Journal of Invertebrate Pathology 88(2):173–176
Gaugler. 1-2-06, "Nematodes-Biological Control ", editor-Contact Yaxin Li, Cornell University.
Gaugler R, Lewis E, and RJ Stuart. 1997. "Ecology in the service of biological control: the case of entomopathogenic nematodes". Oecologia. 109:483–489.
Georgis R., H.K. Kaya, and R. Gaugler. 1991. "Effect of Steinernematid and Heterorhabditid nematodes (Rhabditida, Steinternematidae and Heterorhabditidae) on Nontarget Arthropods". Environmental Entomology. 20(3): 815–822.
Georgis, R. 2002. "The Biosys Experiment: an Insider's Perspective". p 357–371 in Gaugler I, editor. Entomopathogenic Nematology. CABI Publishing. New Jersey.
Grewal P.S., E.E. Lewis and R.Gaugler. 1997. "Response of infective stage parasites (Nematoda: Steinernematidae) to volatile cues from infected hosts". Journal of Chemical Ecology. 23(2): 503–515.
Koppenhofer AM, and H.K. Kaya. 1996. "Coexistence of two steinernematid nematode species (Rhabditida: Steinernematidae) in the presence of two host species". Applied Soil Ecology. 4(3): 221–230.
Kaya H.K., and A.M. Koppenhofer. 1996. "Effects of microbial and other antagonistic organism and competition on entomopathogenic nematodes". Biocontrol Science and Technology. 6(3): 357–371.
Lewis EE, Campbell JF and R Gaugler. 1998. "A conservation approach to using entomopathogenic nematodes in turf and landscapes". p 235–254 in P Barbosa Editor. Conservation Biological Control. Academic Press. San Diego.
Lewis EE. 2002. Behavioural Ecology. p 205–224 in Gaugler I, editor. Entomopathogenic Nematology. CABI Publishing. New Jersey.
Lupwayi, N.Z., W.A. Rice, and G.W. Clayton. 1998. "Soil microbial diversity and community structure under wheat as influenced by tillage and crop rotation". Soil Biological Biochemistry. 30: 1733–1741.
Millar LC and ME Barbercheck. 2001. "Interaction between endemic and introduced entomopathogenic nematodes in conventional-till and no-till corn". Biological Control. 22: 235–245.
Millar LC and ME Barbercheck.2002. "Effects of tillage practices on entomopathogenic nematodes in a corn agroecosystem". Biological control 25: 1–11.
Minchella, D.J. and M.E. Scott. 1991. "Parasitism-A cryptic determinant of animal community structure". Trends in Ecology and Evolution 6(8): 250–254.
Muhibuddin, A. 2008. "Some Important Entomopathogenic Agents in the Indonesia Region". Irtizaq Press, Surabaya, Indonesia.
Poinar, GO. 1993. "Origins and phylogenetic relationships of the entomophilic rhabditids, Heterorhabditis and Steinernema". Fundamental and Applied Nematology 16(4): 333–338.
San-Blas, E. and S.R. Gowen. 2008. "Facultative scavenging as a survival strategy of entomopathogenic nematodes". International Journal for Parasitology 38:85–91.
Shapiro, D.I.; Berry, E. C.; Lewis, L. C. 1993. "Interactions between nematodes and earthworms: Enhanced dispersal of Steinernema carpocapsae". Journal of Nematology 25(2): 189–192.
Somasekar N, Grewal PS, De Nardo EAB, and BR Stinner. 2002. "Non-target effects of entomopathogenic nematodes on the soil community". Journal of Applied Ecology. 39: 735–744.
Stuart RJ and R Gaugler. 1994. "Patchiness in populations of entomopathogenic nematodes". Journal of Invertebrate Pathology. 64: 39–45.
Strong, D. R., H.K. Kaya, A.V. Whipple, A.L, Child, S. Kraig, M. Bondonno, K. Dyer, and J.L. Maron. 1996. "Entomopathogenic nematodes: natural enemies of root-feeding caterpillars on bush lupine". Oecologia (Berlin) 108(1): 167–173.
Agnello, Art, Peter Jentsch, Elson Shield, Tony Testa, and Melissa Keller. "Evaluation of Persistent Entomopathogenic Nematodes." Evaluation of Persistent Entomopathogenic Nematodes for Biological Control of Plum Curculio 22.1 (Spring 2014): 21–23. Cornell University Dept. of Entomology. Web.
Cranshaw, W.S., and R. Zimmerman. "Insect Parasitic Nematodes." Insect Parasitic Nematodes. Colorado State University Extension, June 2013. Web. 3 July 2015.
External links
Entomopathogenic Nematodes
Nematodes as Biological Control Agents of Insects
Parasitic Nematodes Home Page
Entomopathogenic nematodes on the UF / IFAS Featured Creatures website.
Parasitic nematodes of animals
Biological control agents of pest insects
Soil biology | Entomopathogenic nematode | [
"Biology"
] | 5,484 | [
"Soil biology"
] |
4,200,350 | https://en.wikipedia.org/wiki/List%20of%20captive-bred%20meat%20animals | The following is a list of animals that are or may have been raised in captivity for consumption by people. For other animals commonly eaten by people, see Game (food).
See also
Game (food)
List of meat dishes
Marine mammals as food
References
Meat
Meat animals | List of captive-bred meat animals | [
"Biology"
] | 56 | [
"Lists of biota",
"Lists of animals",
"Animals"
] |
4,201,016 | https://en.wikipedia.org/wiki/Georg%20Baur | Georg Baur (1859–1898) was a German vertebrate paleontologist and Neo-Lamarckian who studied reptiles of the Galapagos Islands, particularly the Galápagos tortoises, in the 1890s. He is perhaps best known for his subsidence theory of the origin of the Galapagos Islands, where he postulated the islands were the remains of a former landmass, connected to South America via Cocos Island.
Early life and education
Baur was born in Weisswasser, Bohemia, in 1859. He spent his early years in Hohenheim near Stuttgart. As his father was a professor of forestry, Baur initially planned to study forestry at the institution where his father taught. However, while at university he became interested in the fields of geology, paleontology, and botany instead.
Career
Prior to his work on the Galapagos Islands, Baur was an assistant to Othniel Charles Marsh at Yale University from 1884 until 1890. Baur undertook an expedition to the Galápagos Islands in 1891, leaving New York on May 1, arriving in the Galápagos on June 9, and departing the islands on August 26 for Guayaquil, Panama, and the return to New York. Baur named several subspecies of Galápagos tortoise, including Chelonoidis nigra guentheri (Baur, 1889), and Chelonoidis nigra galapagoensis (Baur, 1889). Not all of Baur's tortoise taxa are still considered valid.
He also studied turtles of the southern United States, naming several species new to science. The following species and subspecies of reptiles were named in his honor by other herpetologists: Kinosternon baurii, Phyllodactylus baurii (one of the leaf-toed geckos of the Galápagos Islands), Coelophysis bauri and Terrapene carolina bauri.
He held the position of Docent (lecturer) in osteology and paleontology, Clark University, from 1890 to 1892, and after that, professor and chairman of the osteology and vertebrate paleontology department at the University of Chicago until his death in 1898 at age 39.
References
External links
Lefalophodon
1859 births
1898 deaths
People from Bělá pod Bezdězem
German Bohemian people
German paleontologists
American paleontologists
American science teachers
Lamarckism
Clark University faculty
University of Chicago faculty
Emigrants from the Austrian Empire
German emigrants to the United States | Georg Baur | [
"Biology"
] | 520 | [
"Non-Darwinian evolution",
"Biology theories",
"Obsolete biology theories",
"Lamarckism"
] |
4,201,372 | https://en.wikipedia.org/wiki/Lazar%20Lyusternik | Lazar Aronovich Lyusternik (also Lusternik, Lusternick, Ljusternik; ; 31 December 1899 – 22 July 1981) was a Soviet mathematician. He is famous for his work in topology and differential geometry, to which he applied the variational principle. Using the theory he introduced, together with Lev Schnirelmann, he proved the theorem of the three geodesics, a conjecture by Henri Poincaré that every convex body in 3-dimensions has at least three simple closed geodesics. The ellipsoid with distinct but nearly equal axis is the critical case with exactly three closed geodesics.
The Lusternik–Schnirelmann theory, as it is called now, is based on the previous work by Poincaré, George David Birkhoff, and Marston Morse. It has led to numerous advances in differential geometry and topology. For this work Lyusternik received the Stalin Prize in 1946. In addition to serving as a professor of mathematics at Moscow State University, Lyusternik also worked at the Steklov Mathematical Institute (RAS) from 1934 to 1948 and the Lebedev Institute of Precise Mechanics and Computer Engineering (IPMCE) from 1948 to 1955.
He was a student of Nikolai Luzin. In 1930 he became one of the initiators of the Egorov affair, and then one of the participants in the notorious political persecution of his teacher Nikolai Luzin, known as the Luzin affair.
See also
Lusternik–Schnirelmann category
Lyusternik's generalization of the Brunn–Minkowski theorem
References
Pavel Alexandrov et al., LAZAR' ARONOVICH LYUSTERNIK (on the occasion of his 60th birthday), Russ. Math. Surv. 15 (1960), 153–168.
Pavel Alexandrov, In memory of Lazar Aronovich Lyusternik, Russ. Math. Surv. 37 (1982), 145–147.
External links
1899 births
1981 deaths
20th-century Polish Jews
20th-century Polish mathematicians
20th-century Russian mathematicians
People from Zduńska Wola
People from Kalisz Governorate
Academic staff of Moscow State University
Corresponding Members of the USSR Academy of Sciences
Moscow State University alumni
Recipients of the Order of the Badge of Honour
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Recipients of the Stalin Prize
Differential geometers
Topologists
Soviet mathematicians
Burials at Kuntsevo Cemetery | Lazar Lyusternik | [
"Mathematics"
] | 507 | [
"Topologists",
"Topology"
] |
4,201,766 | https://en.wikipedia.org/wiki/Starlet%20sea%20anemone | The starlet sea anemone (Nematostella vectensis) is a species of small sea anemone in the family Edwardsiidae native to the east coast of the United States, with introduced populations along the coast of southeast England and the west coast of the United States (class Anthozoa, phylum Cnidaria, a sister group of Bilateria). Populations have also been located in Nova Scotia, Canada. This sea anemone is found in the shallow brackish water of coastal lagoons and salt marshes where its slender column is usually buried in the mud and its tentacles exposed. Its genome has been sequenced and it is cultivated in the laboratory as a model organism, but the IUCN has listed it as being a "Vulnerable species" in the wild.
Description
The starlet sea anemone has a bulbous basal end and a contracting column that ranges in length from less than . There is a fairly distinct division between the scapus, the main part of the column, and the capitulum, the part just below the crown of tentacles. The outer surface of the column has a loose covering of mucus to which particles of sediment tend to adhere. At the top of the column is an oral disk containing the mouth surrounded by two rings of long slender tentacles. Typically there are fourteen but sometimes as many as twenty tentacles, the outermost being longer than the inner whorl. The starlet sea anemone is translucent and largely colourless but usually has a pattern of white markings on the column and white banding on the tentacles.
Distribution and habitat
The starlet sea anemone occurs on the eastern and western seaboards of North America. Its range extends from Nova Scotia to Louisiana on the east coast and from Washington to California on the west coast. It is also known from three locations in the United Kingdom—two in East Anglia and one on the Isle of Wight. Its typical habitat is brackish ponds, brackish lagoons and ditches and pools in salt marshes. It is found in positions with little water flow and seldom occurs more than one metre (yard) below the surface. It can tolerate a wide range of salinities, 2 to 52 parts per thousand in southern England, and seems to breed best at around 11 parts per thousand. It is typically buried up to the crown in fine silt or sand, with its tentacles flared out on the surface of the sediment. When not feeding, the tentacles are retracted into the column.
Ecology
The starlet sea anemone sometimes occurs at high densities (as many as 2,700 per square metre has been recorded). Other macrofauna found alongside it in England include the lagoon cockle (Cerastoderma glaucum), the lagoon sandworm Armandia cirrhosa, the isopod Idotea chelipes and the amphipods Monocorophium insidiosum and Gammarus insensibilis. Plants in its habitat include foxtail stonewort, Lamprothamnium papulosum, green algae Chaetomorpha spp., and ditch grass (Ruppia) spp. In North America it is found among the saltmarsh grasses Spartina patens and Spartina alterniflora and the green algae Chaetomorpha spp. and Cladophora.
The starlet sea anemone feeds on ostracods, copepods, small molluscs, chironomid larvae, nematodes, polychaetes, small crustaceans and egg masses. The only known predator of this sea anemone is the grass shrimp Palaemonetes pugio.
Life cycle
On the east coast of the United States, reproduction is mostly by sexual means. The anemones become mature at about three to four months with a column length of or more. Up to two thousand eggs are laid in a gelatinous clump. The spherical planula larvae that hatch about two days later spend around a week in the water column before settling on the sediment and undergoing metamorphosis into juveniles. In southern England all individuals seem to be female and reproduction is by budding. Two-crowned anemones are common in this location and these individuals later undergo fission into separate sea anemones. On the west coast of the United States, all individuals are also female while in Nova Scotia, all are male, and reproduction in both these populations is likely to be by asexual means. In areas of low population density, members of this species are more likely to reproduce asexually via transverse fission, which enables them to maintain and expand their populations under environmental duress and in areas with limited opportunities for sexual reproduction.
Research
Cnidarians are the simplest animals in which the cells are organized into tissues. Specialist cells include epithelial cells, neurons, muscle fibres and stem cells, and there is a complex extracellular matrix. Nematostella vectensis is used as a model organism for the study of evolution, genomics, reproductive biology, developmental biology and ecology. It is easy to care for in the laboratory, even in inland locations, and a protocol has been developed for the induction of gametogenesis which can yield large numbers of embryos on a daily basis. Its genome has been sequenced. Analysis of expressed sequence tags and the whole genome has shown a remarkable degree of similarity in gene sequence conservation and complexity between the sea anemone and vertebrates. Recent sequencing of its complex genome has shown that it has an estimated complement of 18,000 protein-coding genes. Its gene repertoire, structure, and organization are highly conserved when compared with those of vertebrates, but surprisingly different from those of fruit flies and nematodes, which have lost many genes and introns and have experienced genome rearrangements. This indicates that the genome of their common ancestor was also a complex genome.
Researchers at the Sars International Centre for Marine Molecular Biology have found that genes concerned in the formation of the head in higher animals are also present in Nematostella vectensis. The larva swims with the end with its main sense organ in front, and at metamorphosis this end becomes the lower end of the column. The "head" gene is concerned in the development of this lower end rather than the oral crown and tentacles.
References
Further reading
Uhlinger, K. R. (1997). Sexual reproduction and early development in the estuarine sea anemone, Nematostella vectensis Stephenson, 1935. Thesis. University of California, Davis.
External links
StellaBase
Nematostella. Tree of Life.
JGI's Nematostella Genome Project
Edwardsiidae
Cnidarians of the Atlantic Ocean
Cnidarians of the Pacific Ocean
Marine fauna of Europe
Marine fauna of North America
Western North American coastal fauna
Anthozoa of the United States
Animal models
Animal developmental biology
Animals described in 1935
Vulnerable animals
Vulnerable biota of Europe
Taxa named by Thomas Alan Stephenson | Starlet sea anemone | [
"Biology"
] | 1,425 | [
"Model organisms",
"Animal models"
] |
4,202,244 | https://en.wikipedia.org/wiki/Comparison%20of%20communication%20satellite%20operators | The following is a list of the world's largest fixed service satellite operators. Comparison data are from different time periods and sources and may not be directly comparable.
Note: Revenue in U.S. Dollars
References
Link to 2005 numbers as pdf
Link to 2007 numbers as pdf
Link to 2008 numbers as pdf
External links
2001 numbers
2002 numbers
2003 numbers together with other space firms (total 50) firms reviewed
2004 numbers reviewed in this page from Space News
Largest fixed satellite operators
Outer space lists
Communication satellite operators | Comparison of communication satellite operators | [
"Astronomy"
] | 103 | [
"Outer space",
"Outer space lists"
] |
4,202,498 | https://en.wikipedia.org/wiki/John%20R.%20Fox | John Robert Fox (May 18, 1915 – December 26, 1944) was a United States Army first lieutenant who was killed in action after calling in artillery fire on the enemy during World War II. In 1997, he was posthumously awarded the Medal of Honor, the nation's highest military decoration for valor, for his actions on December 26, 1944, in the vicinity of Sommocolonia, Italy. It is believed that he called in his own coordinates because he was in an area overrun with German soldiers.
Fox and six other African Americans who served in World War II were awarded the Medal of Honor on January 12, 1997. The Medal of Honor was posthumously presented to Fox by President Bill Clinton on January 13, 1997, during a Medal of Honor ceremony for the seven recipients at the White House in Washington, D.C. The seven recipients awarded in 1997 are the only Black Americans to be awarded the Medal of Honor for World War II.
Biography
Fox was born in Cincinnati, Ohio, on May 18, 1915, the eldest of three children. He was raised in Wyoming, Ohio, and attended Ohio State University. He transferred to Wilberforce University, participating in ROTC under Captain Aaron R. Fisher, a highly decorated World War I veteran. Fox graduated with a degree in engineering and received a commission as a U.S. Army second lieutenant in 1941.
Military service
During World War II, Fox was in the 92nd Infantry Division, known as the Buffalo Soldiers, a segregated African American division. Lt. Fox was a forward observer of the 598th Artillery Battalion, supporting the 366th Infantry Regiment of the division. On December 26, 1944, Fox was part of a small forward observer party that volunteered to stay behind in the Italian village of Sommocolonia, in the Serchio River Valley. American forces had been forced to withdraw from the village after it had been overrun by the Germans. From his position on the second floor of a house, Fox called in defensive artillery fire. As the Wehrmacht soldiers continued attacking, Fox radioed the artillery to bring its fire closer to his position, eventually ordering to fire directly on his position.
The soldier who received the message, Fox's close friend, Lt. Otis Zachary (1917–2009), was stunned, knowing that Fox had little chance to survive, but Fox said, "Fire it! There's more of them than there are of us. Give them hell!" The resulting artillery barrage killed Fox and approximately 100 German soldiers surrounding his position. Fox's sacrifice gained time for U.S. forces to organize a counterattack. The village was recaptured by January 1, 1945.
Fox was buried in Colebrook Cemetery in Whitman, Massachusetts. On April 15, 1982, Fox was posthumously awarded the Distinguished Service Cross; the initial award recommendation had been lost.
Medal of Honor
In the early 1990s, the US Army determined that black soldiers had been denied consideration for the Medal of Honor in World War II because of race discrimination. In 1993, the U.S. Army commissioned Shaw University in Raleigh, North Carolina, to research and determine if there was racial disparity in the Medal of Honor nomination and awarding process. The study found that there was systematic discrimination; it recommended in 1996 that ten African American veterans of World War II be awarded the Medal of Honor. In October 1996, Congress passed a bill to allow President Bill Clinton to award the Medal of Honor to these former soldiers. Seven of the ten, including Lt. Fox, were approved, and awarded the Medal of Honor (six had Distinguished Service Crosses revoked and upgraded to the Medal of Honor) on January 12, 1997.
A day later, President Clinton awarded the Medal of Honor to the seven soldiers in a formal ceremony, but six awards were made posthumously and received by family members. Fox's widow accepted the Medal of Honor on his behalf. Vernon Baker was the only living recipient of the medal at the time.
Other honors
After the war, the citizens of Sommocolonia erected a monument to nine men who were killed during the artillery barrage: eight Italian soldiers and Lt. Fox.
In 2005, the toy company Hasbro introduced a 12-inch action figure "commemorating Lt. John R. Fox as part of its G.I. Joe Medal-of-Honor series."
On July 16, 2000, Sommocolonia dedicated a peace park in memory of Fox and his unit.
American Legion Post 631, located in Fox's birthplace of Cincinnati, Ohio, is named for Lt. Fox.
Military awards
Fox's decorations and awards include:
Medal of Honor citation
Fox's Medal of Honor citation reads:
The President of the United States in the name of The Congress takes pride in presenting the Medal of Honor posthumously to
Citation:
For conspicuous gallantry and intrepidity at the risk of his life above and beyond the call of duty: First Lieutenant John R. Fox distinguished himself by extraordinary heroism at the risk of his own life on 26 December 1944 in the Serchio River Valley Sector, in vicinity of Sommocolonia, Italy. Lieutenant Fox was a member of Cannon Company, 366th Infantry, 92nd Infantry Division, acting as a forward observer, while attached to the 598th Field Artillery Battalion. Christmas Day in the Serchio Valley was spent in positions which had been occupied for some weeks. During Christmas night, there was a gradual influx of enemy soldiers in civilian clothes and by early morning the town was largely in enemy hands. An organized attack by uniformed German formations was launched around 0400 hours, 26 December 1944. Reports were received that the area was being heavily shelled by everything the Germans had, and although most of the U.S. infantry forces withdrew from the town, Lieutenant Fox and members of his observation party remained behind on the second floor of a house, directing defensive fires. Lieutenant Fox reported at 0800 hours that the Germans were in the streets and attacking in strength. He called for artillery fire increasingly close to his own position. He told his battalion commander, "That was just where I wanted it. Bring it 60 yards!" His commander protested that there was a heavy barrage in the area and bombardment would be too close. Lieutenant Fox gave his adjustment, requesting that the barrage be fired. The distance was cut in half. The Germans continued to press forward in large numbers, surrounding the position. Lieutenant Fox again called for artillery fire, with the commander protesting again, stating, "Fox, that will be on you!" The last communication from Lieutenant Fox was: "Fire it! There's more of them than there are of us. Give them hell!" The bodies of Lieutenant Fox and his party were found in the vicinity of his position when his position was taken. This action, by Lieutenant Fox, at the cost of his own life, inflicted heavy casualties, causing the deaths of approximately 100 Germans, thereby delaying the advance of the enemy until infantry and artillery units could be reorganized to meet the attack. Lieutenant Fox's extraordinary valorous actions exemplify the highest traditions of the military service.
See also
Vernon Baker
Edward A. Carter Jr.
Willy F. James Jr.
Ruben Rivers
Charles L. Thomas
George Watson
List of African-American Medal of Honor recipients
Final protective fire
Winter Line
Notes
External links
1997 Medal of Honor Ceremony
1915 births
1944 deaths
United States Army Medal of Honor recipients
Recipients of the Distinguished Service Cross (United States)
United States Army officers
United States Army personnel killed in World War II
Wilberforce University alumni
Military personnel from Cincinnati
World War II recipients of the Medal of Honor
African Americans in World War II
African-American United States Army personnel
People who have sacrificed their lives to save others | John R. Fox | [
"Biology"
] | 1,546 | [
"Behavior",
"Altruism",
"Human behavior",
"People who have sacrificed their lives to save others"
] |
4,202,769 | https://en.wikipedia.org/wiki/Benefield%20Anechoic%20Facility | Benefield Anechoic Facility (BAF) is an anechoic chamber located at the southwest side of the Edwards Air Force Base main base. It is currently the world's largest anechoic chamber. The BAF supports installed systems testing for avionics test programs requiring a large, shielded chamber with radio frequency (RF) absorption capability that simulates free space.
The facility is named after Rockwell test pilot and flight commander Tommie Douglas "Doug" Benefield, who was killed in a crash northeast of Edwards Air Force Base in the desert east of Boron on August 29, 1984 during a USAF B-1 Lancer flight test.
Purpose
The BAF is a ground test facility to investigate and evaluate anomalies associated with Electronic Warfare systems, avionics, tactical missiles and their host platforms. Tactical-sized, single or multiple, or large vehicles can be operated in a controlled electromagnetic (EM) environment with emitters on and sensors stimulated while RF signals are recorded and analyzed. The largest platforms tested at the BAF have been the B-52 and C-17 aircraft. The BAF supports testing of other types of systems such as spacecraft, tanks, satellites, air defense systems, drones and armored vehicles.
The BAF equipment generates RF signals with a wide variety of characteristics, simulating red/blue/gray (unfriendly/friendly/unknown) surface-based, sea-based, and airborne systems. With the combination of signals and control functions available, a wide variety of test conditions can be emulated. Many conditions that are not available on outdoor ranges can be easily generated from the aspect of signal density, pulse density and number of simultaneous types.
Through the use of environmental monitoring systems, an independent agency captures, records, and verifies RF generated signals. These systems have the capabilities for real-time and post-test RF signal parameter measurement, instrument display recording, data analysis and test coordination, as well as providing the data for signal verification.
Some aircraft tested at the BAF include:
F-22 Raptor
C-130 Hercules
NC-130H
F-16 Fighting Falcon
B-1 Lancer
X-43A
MH-47 Chinook
V-22 Osprey
KC-46A Tanker
F-15SG Eagle
F-15SA Saudi Advanced Eagle
Special use
In 2003, BMW tested levels of electromagnetic interference on its then-upcoming 2004 models: the 530i, the 545i, and the debut model 645i.
References
External links
Edwards AFB homepage
Satellite image on Google Maps
Avionics
Edwards Air Force Base | Benefield Anechoic Facility | [
"Technology"
] | 522 | [
"Avionics",
"Aircraft instruments"
] |
4,204,003 | https://en.wikipedia.org/wiki/Franz%E2%80%93Keldysh%20effect | The Franz–Keldysh effect is a change in optical absorption by a semiconductor when an electric field is applied. The effect is named after the German physicist Walter Franz and Russian physicist Leonid Keldysh.
Karl W. Böer first observed the shift of the optical absorption edge with electric fields during the discovery of high-field domains and named it the Franz effect. A few months later, when the English translation of the Keldysh paper became available, he corrected this to the Franz–Keldysh effect.
As originally conceived, the Franz–Keldysh effect is the result of wavefunctions "leaking" into the band gap. When an electric field is applied, the electron and hole wavefunctions become Airy functions rather than plane waves. The Airy function includes a "tail" which extends into the classically forbidden band gap. According to Fermi's golden rule, the more overlap there is between the wavefunctions of a free electron and a hole, the stronger the optical absorption will be. The Airy tails slightly overlap even if the electron and hole are at slightly different potentials (slightly different physical locations along the field). The absorption spectrum now includes a tail at energies below the band gap and some oscillations above it. This explanation does, however, omit the effects of excitons, which may dominate optical properties near the band gap.
The Franz–Keldysh effect occurs in uniform, bulk semiconductors, unlike the quantum-confined Stark effect, which requires a quantum well. Both are used for electro-absorption modulators. The Franz–Keldysh effect usually requires hundreds of volts, limiting its usefulness with conventional electronics – although this is not the case for commercially available Franz–Keldysh-effect electro-absorption modulators that use a waveguide geometry to guide the optical carrier.
Effect on modulation spectroscopy
The absorption coefficient is related to the dielectric constant, especially its imaginary part ε2. From Maxwell's equations the relation is easily found:

α = ωε2 / (n0 c)

where n0 and k0 are the real and imaginary parts of the complex refractive index of the material, n = n0 + ik0, so that ε2 = 2 n0 k0.

We will consider the direct transition of an electron from the valence band to the conduction band induced by the incident light in a perfect crystal, and try to take into account the change of the absorption coefficient for each Hamiltonian with a probable interaction, such as electron-photon, electron-hole, or an external field. The first aim is the theoretical background of the Franz–Keldysh effect and of third-derivative modulation spectroscopy.
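As a sanity check on the relation above, the sketch below evaluates α both as ωε2/(n0 c) and in the equivalent form 2ωk0/c; the wavelength and refractive-index values are arbitrary assumptions chosen only for illustration.

```python
# Check alpha = omega*eps2/(n0*c) against the equivalent form 2*omega*k0/c.
import math

c = 2.99792458e8            # speed of light, m/s
wavelength = 800e-9         # assumed wavelength, m
omega = 2 * math.pi * c / wavelength
n0, k0 = 3.6, 0.1           # assumed real/imaginary parts of n = n0 + i*k0
eps2 = 2 * n0 * k0          # imaginary part of the dielectric constant

alpha_1 = omega * eps2 / (n0 * c)
alpha_2 = 2 * omega * k0 / c
print(alpha_1, alpha_2)     # identical: ~1.57e6 per metre
```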
One-electron Hamiltonian in an electromagnetic field

The one-electron Hamiltonian in an electromagnetic field is

H = (1/2m) [p + eA(r, t)]² + V(r),

where A is the vector potential and V(r) is a periodic potential. For a plane electromagnetic wave the vector potential may be written

A = A0 ê exp[i(kp·r − ωt)] + c.c.

(kp and ê are the wave vector of the EM field and its polarization unit vector). Neglecting the term quadratic in A and using the Coulomb-gauge relation ∇·A = 0, so that p·A = A·p, we obtain

H ≈ H0 + (e/m) A·p, with H0 = p²/2m + V(r).

Then, using the Bloch functions ψjk(r) = ujk(r) exp(ik·r) (j = v, c, meaning valence band and conduction band), the transition probability per unit time can be obtained from Fermi's golden rule:

W = (2π/ħ) Σk |⟨c, k|(e/m) A·p|v, k⟩|² δ(Ec(k) − Ev(k) − ħω).

The power dissipated by the electromagnetic wave per unit time and unit volume is ħω times this transition rate. From the relation between the electric field and the vector potential, E = −∂A/∂t, we may express this dissipation through the field amplitude, and finally we can get the imaginary part of the dielectric constant, and hence the absorption coefficient.
2-body (electron-hole) Hamiltonian with EM field

An electron in the valence band (wave vector k) is excited by photon absorption into the conduction band (the wave vector in that band is ke ≈ k, since the photon wave vector is small) and leaves a hole in the valence band (the wave vector of the hole is kh = −k). In this case, we include the electron-hole interaction.

For the direct transition the electron and hole wave vectors are nearly equal. Assuming that the slight difference of momentum due to the photon absorption is not ignored, that the bound electron-hole pair is very weak, and that the effective mass approximation is valid for the treatment, the pair state can be built up from the wave functions and wave vectors of the electron and hole

(i, j are the band indices, and re, rh, ke, kh are the coordinates and wave vectors of the electron and hole respectively).

We can take the center-of-mass momentum Q such that

Q = ke + kh,

and define the Hamiltonian of the pair, including the mutual interaction V(re − rh). Then, Bloch functions of the electron and hole can be constructed with the phase terms exp(ike·re) and exp(ikh·rh). If V varies slowly over the distance of the integral, the interaction term can be treated as follows.
Here we assume that the conduction and valence bands are parabolic with scalar masses and that the valence band has its maximum at k = 0, i.e.

Ec(k) = Eg + ħ²k²/2me, Ev(k) = −ħ²k²/2mh

(Eg is the energy gap). Taking the Fourier transform of the pair amplitude, the effective mass equation for the exciton may be written as

[−(ħ²/2μ)∇² + V(r)] φ(r) = [E − Eg − ħ²Q²/2M] φ(r),

where μ is the reduced mass and M the total mass of the pair. The function φ(r) is called the envelope function of an exciton. The ground state of the exciton is given in analogy to the hydrogen atom:

E1 = Eg − μe⁴/[2ħ²(4πε0εr)²],

with εr the relative dielectric constant.
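The hydrogen analogy can be made concrete with a short numerical sketch: the exciton binding energy is the hydrogen Rydberg scaled by (μ/m0)/εr². The GaAs-like parameter values below are assumptions chosen purely for illustration.

```python
# Exciton binding energy from the scaled hydrogen Rydberg (illustrative).
RYDBERG_EV = 13.605693      # hydrogen Rydberg energy, eV
mu_over_m0 = 0.058          # assumed reduced-mass ratio (GaAs-like)
eps_r = 12.9                # assumed relative dielectric constant

binding = RYDBERG_EV * mu_over_m0 / eps_r**2
print(f"exciton binding energy ~ {binding * 1e3:.1f} meV")  # ~4.7 meV
```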
The dielectric function then follows by summing the transition strengths over the exciton states; the detailed calculation can be found in the references below.
Franz–Keldysh effect
The Franz–Keldysh effect means that an electron in a valence band can be excited into a conduction band by absorbing a photon with energy below the band gap. We now consider the effective mass equation for the relative motion of the electron-hole pair when an external field is applied to a crystal, but we do not include the mutual potential of the electron-hole pair in the Hamiltonian.
When the Coulomb interaction is neglected, the effective mass equation is

[−(ħ²/2μ)∇² + eF·r] φ(r) = E φ(r),

where F is the applied electric field. Choosing the coordinate axes along the principal axes of the reduced effective mass tensor, the equation can be expressed as

−Σi (ħ²/2μi) ∂²φ/∂xi² + eF·r φ = E φ

(where μi is the value of the reduced mass in the direction of the i-th principal axis of the reduced effective mass tensor).
Taking the field along the z axis, the z part of the equation can be solved using the change of variables

Z = (2μz eF/ħ²)^(1/3) (z − Ez/eF),

which reduces it to the Airy equation

d²φ/dZ² = Z φ,

whose bounded solution is

φ(Z) = C Ai(Z),

where Ai is the Airy function and C is a normalization constant. The full solution is then given by the product of plane waves in the transverse directions and the Airy function along the field direction.
The dielectric constant can be obtained by inserting this expression into the equation for ε2 above, changing the summation with respect to λ into an integral over Ez. The integral with respect to the transverse wave vector is given by the joint density of states for the two-dimensional band (the joint density of states is nothing but the density of states of the electron and hole counted together). Then we introduce the electro-optic energy

ħθF = (e²F²ħ²/2μz)^(1/3),

and consider the case of photon energies below the band gap, ħω < Eg; with the asymptotic form of the Airy function in this limit,

Ai(x) ≈ (1/(2√π)) x^(−1/4) exp[−(2/3) x^(3/2)],

we finally find an exponential absorption tail

ε2(ω) ∝ exp[−(4/3)((Eg − ħω)/ħθF)^(3/2)].

Therefore, the dielectric function is nonzero for incident photon energies below the band gap. These results indicate that absorption occurs even for an incident photon with energy below the gap.
See also
Quantum-confined Stark effect
References
General references
W. Franz, Einfluß eines elektrischen Feldes auf eine optische Absorptionskante, Z. Naturforschung 13a (1958) 484–489.
L. V. Keldysh, Behaviour of Non-Metallic Crystals in Strong Electric Fields, J. Exptl. Theoret. Phys. (USSR) 33 (1957) 994–1003, translation: Soviet Physics JETP 6 (1958) 763–770.
L. V. Keldysh, Ionization in the Field of a Strong Electromagnetic Wave, J. Exptl. Theoret. Phys. (USSR) 47 (1964) 1945–1957, translation: Soviet Physics JETP 20 (1965) 1307–1314.
J. I. Pankove, Optical Processes in Semiconductors, Dover Publications Inc. New York (1971).
H. Haug and S. W. Koch, "Quantum Theory of the Optical and Electronic Properties of Semiconductors", World Scientific (1994).
C. Kittel, "Introduction to Solid State Physics", Wiley (1996).
Optoelectronics
Electronic engineering | Franz–Keldysh effect | [
"Technology",
"Engineering"
] | 1,566 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
4,204,370 | https://en.wikipedia.org/wiki/Gepr%C3%BCfte%20Sicherheit | The Geprüfte Sicherheit ("Tested Safety") or GS mark is a voluntary certification mark for technical equipment. It indicates that the equipment meets German and, if available, European safety requirements for such devices. The main difference between GS and CE mark is that the compliance with the European safety requirements has been tested and certified by a state-approved independent body. CE marking, in contrast, is issued for the signing of a declaration that the product is in compliance with European legislation. The GS mark is based on the German Product Safety Act ("Produktsicherheitsgesetz", or "ProdSG").
Testing for the mark is available from many different laboratories, such as DGUV Test, the TÜV, Nemko and IMQ.
Although the GS mark was designed with the German market in mind, it appears on a large proportion of electronic products and machinery sold elsewhere in the world.
See also
CE mark
UL mark
References
External links
UL GS
Certification marks
Product safety | Geprüfte Sicherheit | [
"Mathematics"
] | 208 | [
"Symbols",
"Certification marks"
] |
4,204,718 | https://en.wikipedia.org/wiki/Unnecessary%20Fuss | Unnecessary Fuss is a film produced by People for the Ethical Treatment of Animals (PETA), showing footage shot inside the University of Pennsylvania's Head Injury Clinic in Philadelphia. The raw footage was recorded by the laboratory researchers as they inflicted brain damage to baboons using a hydraulic device. The experiments were conducted as part of a research project into head injuries such as is caused in vehicle accidents.
Sixty hours of audio and video tapes were stolen from the laboratory on May 28, 1984, by the Animal Liberation Front (ALF), described in their press release as the "Watergate tapes of the animal rights movement". ALF handed the tapes over to PETA. The footage was edited down to 26 minutes by Alex Pacheco and narrated by Ingrid Newkirk, then distributed to the media and Congress. Charles McCarthy, director of the Office for Protection from Research Risks (OPRR), wrote that the film had "grossly overstated the deficiencies in the Head Injury Clinic", but that the OPRR had found serious violations of the Guide for Care and Use of Laboratory Animals. Due to the publicity and the results of several investigations and reports, the lab was closed.
The title of the film comes from a statement made to The Globe and Mail by the head of the clinic, neurosurgeon Thomas Gennarelli, before the raid. He declined to describe his research to the newspaper because, he said, it had "the potential to stir up all sorts of unnecessary fuss."
Contents of the film
The film shows at least one sedated but not anesthetized baboon with his wrists and ankles tied, strapped to a table, his head secured with dental stone inside a helmet. A hydraulic device slams the baboon's head, intended to simulate whiplash. After one such injury is sustained, the helmet seems stuck and two researchers use a hammer and screwdriver to dislodge the helmet; a researcher is heard to say "Push!", grunts, then "It's a boy!" as the helmet finally comes loose. One sequence shows that a baboon's ear has been damaged as the helmet is removed: "... like I left a little bit of the ear behind." The footage shows researchers performing electrocautery on an inadequately sedated baboon, smoking cigarettes and pipes during surgery, laughing, and playing loud music. A researcher is seen holding a brain-injured baboon up to the camera, while others speak to the animal: "Don't be shy now, sir, nothing to be afraid of". While one baboon was strapped and waiting in the hydraulic device, the photographer pans to a brain-damaged baboon strapped into a high chair in another corner of the room as he says "Cheerleading in the corner, we have B-10. B-10 wishes his counterpart well. As you can see, B-10 is still alive. B-10 is hoping for a good result".
Distribution, reception, result
The film was distributed to major newspapers and news agencies, as well as Congress. The broad distribution and the piteous images in the film stirred public outrage. Journalist Deborah Blum wrote, "It is difficult to put into words just how ugly that brief movie is."
The university's president halted its use of animals in experiments in response to a preliminary report by the National Institutes of Health (NIH).
The Secretary of Health and Human Services, Margaret Heckler, after reading the same preliminary report, and after a four-day sit-in by animal rights activists at NIH, ordered the suspension of the annual $1 million NIH grant supporting the baboon research.
Several investigations and favorable assessments of the research took place. The NIH report and a university report were delayed because the activists refused to release the tapes for a year. The university report concurred with the NIH reviewers about the scientific merit of the head injury research, while detailing specific violations. The report noted that since the raid and the resulting media exposure, many of the concerns had already been addressed within the university. In the end, however, the research lab was shut down.
The biomedical research community expressed concern that the government's capitulation to activists would put other research at risk of attack by direct action.
OPRR investigation
An investigation was conducted by 18 veterinarians from the American College of Laboratory Animal Medicine, commissioned by the Office for Protection from Research Risks (OPRR). Charles R. McCarthy, director of the OPRR at the time, wrote that "[d]espite the fact that Unnecessary Fuss grossly overstated the deficiencies in the Head Injury Clinic, OPRR found many extraordinarily serious violations of the Guide for Care and Use of Laboratory Animals ... Furthermore, OPRR found deficiencies in the procedures for care of animals in many other laboratories operated under the auspices of the university."
The violations included that the depth of anesthetic coma was questionable; that most of the animals were not seen by a veterinarian either before or after surgery; that survival surgical techniques were not carried out in the required aseptic manner; that the operating theater was not properly cleaned; and that smoking was allowed in the operating theater despite the presence of oxygen tanks.
When PETA made its 26-minute film available, the OPRR initially refused to investigate because the film had been edited from 60 hours of videotape. For over a year PETA refused to release the original footage. When they eventually handed over the unedited material, the OPRR discovered that the footage of the brain damage being inflicted involved just one baboon out of the 150 who had received the whiplash injuries, but the film had given the impression that the brain-damage scenes involved several animals.
The OPRR also found deficiencies in other laboratories operated by the university. The university's chief veterinarian was fired, new training programs were initiated, and the university was placed on probation, with quarterly progress reports to OPRR required.
Notes
References
External links
1984 films
1984 short documentary films
American short documentary films
Documentary films about American politics
Documentary films about animal rights
Anti-modernist films
Animal cruelty incidents
Animal cruelty incidents in film
Animal testing in the United States
Anti-vivisection movement
1980s English-language films
1980s American films
English-language documentary films | Unnecessary Fuss | [
"Chemistry"
] | 1,304 | [
"Animal testing",
"Anti-vivisection movement",
"Vivisection"
] |
4,204,738 | https://en.wikipedia.org/wiki/Voluntary%20Control%20Council%20for%20Interference%20by%20Information%20Technology%20Equipment | The Voluntary Control Council for Interference by Information Technology Equipment or VCCI is the Japanese body governing RF emissions (i.e. electromagnetic interference) standards.
It was formed in December 1985.
The VCCI mark of conformance also appears on some electrical equipment sold outside Japan.
External links
VCCI English-language website
Certification marks
Electromagnetic compatibility | Voluntary Control Council for Interference by Information Technology Equipment | [
"Mathematics",
"Engineering"
] | 68 | [
"Radio electronics",
"Electromagnetic compatibility",
"Symbols",
"Electrical engineering",
"Certification marks"
] |